Subscribe: DotNetGerman Bloggers
http://blogs.dotnetgerman.com/mainfeed.aspx

DotNetGerman Bloggers



All blogs from DotNetGerman.com



Copyright 2004-2014 DotNetGerman.com
 



OpenHack IoT & Data – May 28-30

Fri, 20 Apr 2018 12:59:00 Z

As CSE (Commercial Software Engineering), we can support customers in implementing challenging cloud projects. At the end of May we are offering a three-day OpenHack on IoT & Data as a readiness measure. For all developers working on these topics, it is practically a must. At the OpenHacks we give the participants, that is you, tasks that you have to solve yourselves. This results in enormously high learning efficiency and a remarkable transfer of knowledge. Target audience: everyone who can develop: developers, architects, data scientists, … So if you are actively working on IoT & Data, or want to, come yourself or send other developers. Together with the software engineers of the CSE, you can really dig into the subject. Oh, and one important thing: bring your own laptop and "come prepared to hack!!". No marketing, no sales. Pure hacking!!



Build 2018 Public Viewing with BBQ & Beer

Thu, 19 Apr 2018 17:03:44 Z

The Microsoft Build conference is THE event for all software developers working with Microsoft technologies. The keynote always gives a great overview of the latest developments in .NET, Azure, Windows, Visual Studio, AI, IoT, big data and more. To get in the mood for the keynote, the Microsoft Developer User Group Graz will start a bit earlier this year with BBQ and beer.



IEC 61131-3: The generic data type T_Arg

Wed, 18 Apr 2018 21:56:00 Z

In the article The wonders of ANY, Jakob Sagatowski shows how the data type ANY can be put to good use. In the example described, a function compares two variables to check whether their data type, data length and content are exactly equal. Instead of implementing a separate function for every data type, the same requirements can be met with the data type ANY […]
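The comparison the article describes is language-independent: two values are equal only when their type, their length and their content all match. Here is a small, dependency-free TypeScript sketch of that idea (my own illustration; the article implements it in IEC 61131-3 structured text with ANY):

```typescript
// Sketch of a strict comparison: data type, data length and content
// must all match. Types are modeled as strings, values as raw bytes.
function strictlyEqual(typeA: string, a: Uint8Array, typeB: string, b: Uint8Array): boolean {
  if (typeA !== typeB) return false;       // data types must match
  if (a.length !== b.length) return false; // data lengths must match
  for (let i = 0; i < a.length; i++) {
    if (a[i] !== b[i]) return false;       // contents must match byte by byte
  }
  return true;
}

const x = new Uint8Array([1, 2]);
const y = new Uint8Array([1, 2]);
const z = new Uint8Array([1, 3]);
```

The appeal of ANY in the original article is the same as here: one function covers every data type instead of one overload per type.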



Seamlessly Updating your Angular Libraries with the CLI, Schematics and ng update

Tue, 17 Apr 2018 00:00:00 +0100

Table of Contents: This blog post is part of an article series.
Part I: Generating Custom Code With The Angular CLI And Schematics
Part II: Automatically Updating Angular Modules With Schematics And The CLI
Part III: Extending Existing Code With The TypeScript Compiler API
Part IV: Frictionless Library Setup with the Angular CLI and Schematics
Part V: Seamlessly Updating your Angular Libraries with ng update

Thanks a lot to Hans Larsen from the Angular CLI team for reviewing this article.

Updating libraries within your npm/yarn-based project can be a nightmare. Once you've dealt with all the peer dependencies, you have to make sure your source code doesn't run into breaking changes. The new command ng update provides a remedy: it goes through all updated dependencies -- including the transitive ones -- and calls schematics to update the current project for them. Together with ng add, described in my blog article here, it is the foundation of an ecosystem that allows more frictionless package management.

In this post, I show how to make use of ng update within an existing library by extending the simple logger used in my article about ng add. If you want to look at the completed example, you'll find it in my GitHub repo.

Schematics is currently an Angular Labs project. Its public API is experimental and can change in the future.

Introducing a Breaking Change

To showcase ng update, I'm going to modify my logger library by renaming the LoggerModule's forRoot method to configure:

```typescript
// logger.module.ts
[...]
@NgModule({
  [...]
})
export class LoggerModule {
  // Old:
  // static forRoot(config: LoggerConfig): ModuleWithProviders { ... }

  // New:
  static configure(config: LoggerConfig): ModuleWithProviders {
    [...]
  }
}
```

As this is just an example, please see this change as a proxy for all the other breaking changes one might introduce with a new version.
Creating the Migration Schematic

To adapt existing projects to my breaking change, I'm going to create a schematic for it. It will be placed into a new update folder within the library's schematics folder. This new folder gets an index.ts with a rule factory:

```typescript
import { Rule, SchematicContext, Tree } from '@angular-devkit/schematics';

export function update(options: any): Rule {
  return (tree: Tree, _context: SchematicContext) => {
    _context.logger.info('Running update schematic ...');

    // Hardcoded path for the sake of simplicity
    const appModule = './src/app/app.module.ts';

    const buffer = tree.read(appModule);
    if (!buffer) return tree;
    const content = buffer.toString('utf-8');

    // One more time, this is for the sake of simplicity
    const newContent = content.replace('LoggerModule.forRoot(', 'LoggerModule.configure(');
    tree.overwrite(appModule, newContent);
    return tree;
  };
}
```

For the sake of simplicity, I'm taking two shortcuts here. First, the rule assumes that the AppModule is located in the file ./src/app/app.module.ts. While this might be the case in a traditional Angular CLI project, one could also use a completely different folder structure. One example is a monorepo workspace containing several applications and libraries. I will present a solution for this in another post, but for now, let's stick with this simple approach. To simplify things further, I'm directly modifying this file using a string replacement. A safer way to change existing code is the TypeScript Compiler API. If you're interested in this, you'll find an example in my blog post here.

Configuring the Migration Schematic

To configure migration schematics, let's follow the advice from the underlying design document and create our own collection. This collection is described by a migration-collection.json file: for each migration, it gets a schematic.
The name of this schematic doesn't matter, but what matters is the version property:

```json
{
  "schematics": {
    "migration-01": {
      "version": "4",
      "factory"[...]
```
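To make the role of that version property concrete, here is a small, dependency-free TypeScript sketch (my own illustration, not the CLI's actual implementation): when updating from an installed version to a target version, every migration whose version lies above the installed one and at most at the target one runs, in ascending order:

```typescript
// Hypothetical illustration of version-gated migrations; the real logic
// lives inside the Angular CLI's update mechanism.
interface Migration {
  name: string;
  version: number; // the version this migration upgrades a project to
  run: (content: string) => string;
}

// Select every migration above the installed version, up to and
// including the target version, sorted so they run in ascending order.
function selectMigrations(migrations: Migration[], installed: number, target: number): Migration[] {
  return migrations
    .filter(m => m.version > installed && m.version <= target)
    .sort((a, b) => a.version - b.version);
}

// The forRoot -> configure rename from this article, expressed as a migration:
const renameForRoot: Migration = {
  name: 'migration-01',
  version: 4,
  run: (content: string) => content.replace('LoggerModule.forRoot(', 'LoggerModule.configure('),
};

const chosen = selectMigrations([renameForRoot], 3, 4);
const migrated = chosen.reduce(
  (src, m) => m.run(src),
  'imports: [ LoggerModule.forRoot({ enableDebug: true }) ]'
);
```

A project already on version 4 would get no migrations at all: selectMigrations returns an empty list, so nothing is rewritten twice.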



SQL Management Studio: Connect and Queries take so long

Mon, 16 Apr 2018 08:42:50 Z

When I connect to my local SQL Server using SQL Management Studio or SQL Operations Studio and want to execute queries, it sometimes takes several minutes. This setting in Internet Explorer's Internet Options solved my problem: Open Internet Explorer. Go to Tools -> Internet Options. Open the "Advanced" tab. Uncheck "Check for server […]



A generic logger factory facade for classic ASP.NET

Fri, 13 Apr 2018 00:00:00 Z

ASP.NET Core already has this feature: there is an ILoggerFactory to create a logger. You are able to inject the ILoggerFactory into your component (controller, service, etc.) and create a named logger out of it. During testing you are able to replace this factory with a mock, so you neither test the logger as well nor have an additional dependency to set up. Recently we had the same requirement in a classic ASP.NET project, where we use Ninject to enable dependency injection and log4net to log all the stuff we do and all exceptions. One important requirement is a named logger per component.

Creating named loggers

Usually a log4net logger gets created inside the component as a private static instance:

```csharp
private static readonly ILog _logger = LogManager.GetLogger(typeof(HomeController));
```

There already is a static factory method to create a named logger. Unfortunately this isn't really testable anymore, and we need a different solution. We could create a bunch of named loggers in advance and register them with Ninject, which obviously is not the right solution. We need a more generic one. We figured out two different options (note: the generic type parameters below were stripped by the feed and have been restored):

```csharp
// would work well
public MyComponent(ILoggerFactory loggerFactory)
{
    _loggerA = loggerFactory.GetLogger(typeof(MyComponent));
    _loggerB = loggerFactory.GetLogger("MyComponent");
    _loggerC = loggerFactory.GetLogger<MyComponent>();
}

// even more elegant
public MyComponent(
    ILoggerFactory<MyComponent> loggerFactoryA,
    ILoggerFactory<MyComponent> loggerFactoryB)
{
    _loggerA = loggerFactoryA.GetLogger();
    _loggerB = loggerFactoryB.GetLogger();
}
```

We decided to go with the second approach, which is the simpler solution. It needs a dependency injection container that supports open generics, like Ninject, Autofac and LightCore.

Implementing the LoggerFactory

Using Ninject, the binding of open generics looks like this:

```csharp
Bind(typeof(ILoggerFactory<>)).To(typeof(LoggerFactory<>)).InSingletonScope();
```

This binding creates an instance of LoggerFactory using the requested generic argument.
If I request an ILoggerFactory<HomeController>, Ninject creates an instance of LoggerFactory<HomeController>. We register this as a singleton to reuse the ILog instances, just as we would when creating the ILog in a private static variable the usual way. The implementation of the LoggerFactory is pretty easy. We use the generic argument to create the log4net ILog instance:

```csharp
public interface ILoggerFactory<T>
{
    ILog GetLogger();
}

public class LoggerFactory<T> : ILoggerFactory<T>
{
    private ILog _logger;

    public ILog GetLogger()
    {
        if (_logger == null)
        {
            _logger = LogManager.GetLogger(typeof(T));
        }
        return _logger;
    }
}
```

We need to ensure the logger is created only once and reused afterwards. Because Ninject creates a new instance of the LoggerFactory per generic argument, the LoggerFactory doesn't need to care about different loggers; it just stores a single specific logger.

Conclusion

Now we are able to create one or more named loggers per component. What we cannot do with this approach is create individually named loggers using a specific string as a name; a type is needed that gets passed as the generic argument. So every time we need an individually named logger, we need to create a specific type. In our case this is not a big problem. If you don't like creating types just to get individually named loggers, feel free to implement a non-generic LoggerFactory with a generic GetLogger<T> method as well as a GetLogger method that accepts strings as logger names.[...]
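The caching idea behind the factory is language-agnostic. Here is a minimal TypeScript analog (my own sketch, not from the post, which uses C#, Ninject and log4net): a factory keyed by name that creates each logger once and then reuses it:

```typescript
// Minimal sketch of a per-name caching logger factory. The cache plays
// the role of Ninject's singleton scope: ask twice, get the same instance.
interface Logger {
  name: string;
  log(message: string): void;
}

class LoggerFactory {
  private cache = new Map<string, Logger>();

  // Returns the same Logger instance for the same name.
  getLogger(name: string): Logger {
    let logger = this.cache.get(name);
    if (!logger) {
      logger = { name, log: (msg: string) => console.log(`[${name}] ${msg}`) };
      this.cache.set(name, logger);
    }
    return logger;
  }
}

const factory = new LoggerFactory();
const a = factory.getLogger('HomeController');
const b = factory.getLogger('HomeController');
const c = factory.getLogger('OrderService');
```

This string-keyed variant is exactly the "non-generic LoggerFactory" the conclusion mentions as an alternative to creating a type per logger name.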



Creating Dummy Data Using GenFu

Wed, 11 Apr 2018 00:00:00 Z

Two years ago I already wrote about playing around with GenFu, and I still use it now, as mentioned in that post. When I do a demo, or when I write blog posts and articles, I often need dummy data, and I use GenFu to create it. But every time I use it in a talk or a demo, somebody still asks me a question about it.

Actually, I had really forgotten about that blog post and decided to write about it again this morning because of the questions I got. Almost accidentally, I stumbled upon this "old" post.

I won't create a new one. No worries ;-) Because of the questions I just want to push this topic back to the top:

Playing around with GenFu

GenFu on GitHub

PM> Install-Package GenFu

Read about it, grab it and use it!

It is one of the most time-saving tools ever :)




The Windows Update endless loop and Microsoft support

Tue, 03 Apr 2018 12:00:00 +0200

Windows 10 update 1709 does not install, and Microsoft support has no solution either, or rather doesn't put much effort into finding one.



Colleague bashing – surprise, it doesn't help!

Sun, 01 Apr 2018 17:36:13 Z

At every conference my cleverbridge colleagues and I attend, the topic of team culture pops up sooner or later as the cause of many (or all) problems. When we describe how we work, we inevitably end up with the statement "a self-organized, cross-functional organization without a boss who has the final say is naive and unrealistic". "Surely you have a boss somewhere, … Continue reading Colleague bashing – surprise, it doesn't help!



Did you know that you can run ASP.NET Core 2 under the full framework?

Sat, 31 Mar 2018 23:35:00 Z

This post might be obvious for some, but I really struggled a couple of months ago, and I'm not sure if a Visual Studio update fixed the problem for me or if I was just blind…

The default way: running .NET Core. AFAIK the framework dropdown in the normal Visual Studio project template selector (the first window) is not important and doesn't matter for .NET Core related projects. When you create a new ASP.NET Core application you will see something like this: the important part for the framework selection can be found in the upper left corner, where .NET Core is currently selected. When you continue, your .csproj file should contain something like `<TargetFramework>netcoreapp2.0</TargetFramework>`.

Running the full framework: I had some trouble finding the option, but it's really obvious. You just have to adjust the selected framework in the second window. After that, your .csproj has the needed configuration: `<TargetFramework>net461</TargetFramework>`.

The biggest change: when you run under the full .NET Framework, you can't use the "All" meta package, because with version 2.0 that package is still .NET Core only, and you need to reference each package manually. Easy, right? Be aware: with ASP.NET Core 2.1 the meta package story on the full framework might get easier. I'm still not sure why I struggled to find this option… Hope this helps![...]



Running and Coding

Fri, 30 Mar 2018 00:00:00 Z

I wasn't really sporty until two years ago, but I was active anyway. I had to be, with three little kids and a sporty and lovely wife. But still, a job where I mostly sit in a comfortable chair, plus great food and good southern German beers, did its work. When I first met my wife, I weighed around 80 kg, which is fine for my height of 178 cm. But my weight increased up to 105 kg by Christmas 2015. That was way too much, I thought. Until then I had always tried to reduce it by doing some more cycling, more hiking and some gym, but it didn't really work out. Anyway, there is no more effective way to lose weight than running; it is, by the way, three times more effective than cycling. I had tried it a lot in the past, but it hurts quite a bit in the lower legs, and I stopped more than once.

Running the agile way

At Easter 2016 I tried it again in a slightly different way, and it worked. I approached it the same way as a perfect software project: I did it in an agile way, using pretty small goals to get as much success as possible. I also bought myself a fitness watch to count steps, calories and levels, and to measure the heart rate while running, to have some more challenges. At the same time I changed my food a lot. It sounds weird and funny, but it worked really well. I have lost 20 kg since then! I think it was important not to set huge goals. I just wanted to lose 20 kg; I didn't set a time limit or anything like that. I knew it hurts in the lower legs while running, so I started to learn a lot about running and the different styles of running. I chose easy running, which worked pretty well with natural running shoes and barefoot shoes. This also worked well for me.

Finding time to run

Finding time was the hardest part. In the past I always thought I was too busy to run. I discussed it a lot with the family, and we figured out that the best time to run was lunch time, because I need to walk the dog anyway, and so I could run with the dog.
This was also a good thing for our huge dog. Running at lunch time has another advantage: it clears my head a little after four to five hours of work. (Yes, I usually start between 7 and 8 in the morning.) Running is great when you are working on software projects with a huge level of complexity. Unfortunately, when I'm working in Basel I cannot go running, because there is no shower available. But I'm still able to run three to four times a week.

Starting to run

The first runs were a real pain. I chose a small lap of 2.5 km, because first I needed to learn how to run. Also, because of the pain in the lower legs, I chose to run shorter tracks uphill. Why uphill? Because it is more exhausting than running on level ground. So I had short uphill running phases and longer quick-walking phases. Just a few runs later, the running phases started to get longer and longer. That was the first success, only a few runs in. It was even greater when I finished my first kilometer after one and a half months of running every second day. That was amazing. On every run there was a success, and that really pushed me. And I not only succeeded at running, I also started to lose weight, which pushed me even more. So the pain wasn't too bad, and I continued running. Some weeks later I ran the entire 2.5 km lap, not really fast, but without a walking pause. Some more motivation. I continued running just these 2.5 km for a few more weeks to set some personal records on this lap.

Low carb

I mentioned the change in food: I switched to a low-carb diet, which in general means reducing the consumption of sugar. Every kind of sugar, which includes bread, potatoes, pasta, rice and corn as well. In the first phase of three months I almost completely stopp[...]



Why I use paket now

Thu, 29 Mar 2018 00:00:00 Z

I never really had any major problem using the NuGet client. Reading the Twitter timeline, it seems I am the only one without problems. But depending on the dev process you like to use, there can be a problem. This is not really NuGet's fault, but this process makes the usage of NuGet a little more complex than it should be.

As mentioned in previous posts, I really like to use Git Flow and its clear branching structure. I always have a production branch, the master. It contains the sources of the version currently in production. In my projects I don't need to care about multiple versions installed on multiple customer machines; usually, as a web developer, you only have one production version installed somewhere on a web server. I also have a next-version branch, the develop branch. This contains the version we are currently working on. Besides these, we can have feature branches, hotfix branches, release branches and so on. Read more about Git Flow in this pretty nice cheat sheet.

The master branch gets compiled in release mode and uses a semantic version like (breaking).(feature).(patch). The develop branch gets compiled in debug mode and has a version number that tells NuGet it is a preview version: (breaking).(feature).(patch)-preview(build), where build is the build number generated by the build server.

The actual problem

We use this versioning, build and release process for web projects and shared libraries. And with those shared libraries it starts to get complicated using NuGet. Some of the shared libraries are used in multiple solutions and shared via a private NuGet feed, which is a common way, I think. Within the next version of a web project we also use the next versions of the shared libraries, to test them. In the current versions of the web projects we use the current versions of the shared libraries. Makes kinda sense, right?
If we do a new production release of a web project, we need to switch back to the production versions of the shared libraries. Because NuGet creates package sub-folders containing the version number inside the solution's packages folder, and the projects reference the binaries from those folders, changing the library versions means using the UI or changing the packages.config AND the project files, because the reference path contains the version information.

Maybe switching the versions back and forth doesn't really make sense in most cases, but this is also how I try new versions of the libraries. In this special case, we have to maintain multiple ASP.NET applications, which use multiple shared libraries, which in turn depend on different versions of external data sources. So a preview release of an application also goes to a preview environment with a preview version of a database, and therefore needs to use the preview versions of the needed libraries. While releasing new features or hotfixes, it might happen that we need to do a release without updating the production environments and the production databases. So we need to switch the dependencies back to the latest production versions of the libraries.

Paket solves it

Paket, in contrast, supports only one package version per solution, which makes a lot more sense. This means Paket doesn't store the packages in a sub-folder with a version number in its name. Changing the package versions is easily done in the paket.dependencies file. The reference paths in the project files don't change, and the projects immediately use the other versions after I change the version and restore the packages. Paket is an alternative NuGet client, developed by the amazing F# community.

Paket works well

Fortunately Paket works well with MSBuild and CAKE. Paket provides MSBuild targets to automati[...]
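For illustration, a paket.dependencies file pins all versions in one central place, so switching a shared library between its production and preview version is a one-line change. This is a sketch with made-up feed and package names, not taken from the post:

```
source https://api.nuget.org/v3/index.json
// hypothetical private feed for the shared libraries
source https://my-private-feed.example.com/nuget

nuget Newtonsoft.Json ~> 11.0

// switching between production and preview is one line:
nuget My.Shared.Library 2.3.0
// nuget My.Shared.Library 2.4.0-preview0042
```

After editing this file, a `paket install` updates the lock file and the projects pick up the new version without any reference-path churn.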



Recap the MVP Global Summit 2018

Wed, 28 Mar 2018 00:00:00 Z

Being an MVP has a lot of benefits. Free tools, software and Azure credits are just a few of them. The direct connection to the product group has a lot more value than all the software. Even more valuable is the fact of being part of an expert community with more than 3700 MVPs from around the world. In fact, there are a lot more experts outside the MVP community who also contribute to the communities of the Microsoft-related technologies and tools. Being an MVP also means finding those experts and nominating them so that they get the MVP award too.

The biggest benefit of being an MVP is the yearly MVP Global Summit in Redmond. This year Microsoft again invited the MVPs to attend the MVP Global Summit; more than 2000 MVPs and Regional Directors were registered. I attended as well. It was my third summit and the third chance to directly interact with the product group and with other MVPs from all over the world.

The first days in Seattle

My journey to the summit started at Frankfurt airport, where a lot of German, Austrian and Swiss MVPs began their journey and where many more MVPs from Europe changed planes. The LH490 and LH491 flights around the summits are called the "MVP planes" because of this. It always feels like a huge yearly school trip. The flight was great, sunny most of the time, and I had an impressive view over Greenland and Canada. After we arrived at Sea-Tac, some German MVP friends and I took the train to Seattle downtown. We checked in at the hotels and went for a beer and a burger. This year I decided to arrive one day earlier than in previous years and to stay in Seattle downtown for the first two nights and the last two nights. This was a great decision. I spent the nights just a few steps away from Pike Place. I really love the special atmosphere of this place and this area: there are a lot of small stores, small restaurants, the farmers market and the breweries.
The very first Starbucks is also at this place. It's really a special place. Staying there also allowed me to use public transportation, which works great in Seattle. There is a direct train from the airport to Seattle downtown and an express bus from Seattle downtown to the center of Bellevue, where the conference hotels are located. For those of you who don't want to spend 40 USD or more on Uber, a taxi or a shuttle: the train to Seattle costs 3 USD and the express bus 2.70 USD. Both take around 30 minutes, plus maybe a few minutes waiting in the underground station in Seattle.

The summit days

After checking into my conference hotel on Sunday morning, I went to the registration. It seemed I was pretty early, but that wasn't really the case: most of the MVPs were already in the queue to register for the conference and get their swag. Like in previous years, the summit days were amazing, even if we didn't really learn a lot of genuinely new things in my contribution area. Most of the stuff in my MVP category is open source and openly discussed on GitHub and Twitter and in blog posts written by Microsoft. Anyway, we learned about some cool ideas, which I unfortunately cannot write down here, because it is almost all NDA content. So the most amazing things during the summit are the events and parties around the conference, and meeting all the famous MVPs and Microsoft employees. I'm not really a selfie guy, but this time I really needed to take a picture with the amazing Phil "Mister ASP.NET MVC" Haack. I'm also glad to have met Steve Gordon, Andrew Lock, David Pine, Damien Bowden, Jon Galloway, Damien Edwards, David Fowler, Immo Landwerth, Glen Condron, and many, many more. And of course the German-speaking MVP family from Germa[...]



IEC 61131-3: The 'Observer' Pattern

Mon, 26 Mar 2018 20:36:00 Z

The observer pattern is suitable for applications in which one or more function blocks must be notified as soon as the state of a particular function block changes. The assignment of the communication participants can be changed at the program's runtime. In almost every IEC 61131-3 program, function blocks exchange state with one another. In the simplest case, an input of a […]
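The pattern itself is language-independent. As a rough sketch (in TypeScript rather than IEC 61131-3 structured text, with made-up names), a subject keeps a runtime-changeable list of observers and notifies each of them on every state change:

```typescript
// Minimal observer sketch: observers can attach and detach at runtime,
// mirroring the article's point that the assignment of communication
// participants is changeable while the program runs.
interface Observer {
  update(state: number): void;
}

class Subject {
  private observers: Observer[] = [];
  private state = 0;

  attach(o: Observer): void {
    this.observers.push(o);
  }

  detach(o: Observer): void {
    this.observers = this.observers.filter(x => x !== o);
  }

  setState(state: number): void {
    this.state = state;
    // notify every currently registered observer about the new state
    for (const o of this.observers) o.update(state);
  }
}

const received: number[] = [];
const logger: Observer = { update: (s: number) => received.push(s) };

const subject = new Subject();
subject.attach(logger);
subject.setState(42); // logger is notified
subject.detach(logger);
subject.setState(7);  // logger is no longer notified
```

In the IEC 61131-3 version described in the article, subject and observers would be function blocks, but the attach/detach/notify shape is the same.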



Community conference in Magdeburg in April, from 40 euros per day

Thu, 22 Mar 2018 16:28:00 +0100

The Magdeburg Developer Days are entering their third edition, this time running three days, from April 9 to 11, 2018.



GroupBy still doesn't quite work in Entity Framework Core 2.1 Preview 1

Tue, 20 Mar 2018 12:02:00 +0100

Aggregate operators such as Min(), Max(), Sum() and Average() work, but Count() does not.



Custom Schematics - Part IV: Frictionless Library Setup with the Angular CLI and Schematics

Tue, 20 Mar 2018 00:00:00 +0100

Table of Contents: This blog post is part of an article series.
Part I: Generating Custom Code With The Angular CLI And Schematics
Part II: Automatically Updating Angular Modules With Schematics And The CLI
Part III: Extending Existing Code With The TypeScript Compiler API
Part IV: Frictionless Library Setup with the Angular CLI and Schematics

Thanks a lot to Hans Larsen from the Angular CLI team for reviewing this article.

It's always the same: after npm installing a new library, we have to follow a readme step by step to include it in our application. Usually this involves creating configuration objects, referencing CSS files, and importing Angular modules. As such tasks aren't fun at all, it would be nice to automate this. This is exactly what the Angular CLI supports beginning with version 6 (Beta 5). It gives us a new ng add command that fetches an npm package and sets it up with a schematic -- a code generator written with the CLI's scaffolding tool Schematics. To support this, the package just needs to name this schematic ng-add. In this article, I show you how to create such a package. For this, I'll use ng-packagr and a custom schematic. You can find the source code in my GitHub account. If you haven't got an overview of Schematics so far, you should look up the well-written introduction in the Angular blog before proceeding here.

Goal

To demonstrate how to leverage ng add, I'm using an example with a very simple logger library. It is complex enough to explain how everything works, yet not intended for production. After installing it, one has to import it into the root module using forRoot:

```typescript
[...]
import { LoggerModule } from '@my/logger-lib';

@NgModule({
  imports: [
    [...],
    LoggerModule.forRoot({ enableDebug: true })
  ],
  [...]
})
export class AppModule { }
```

As you see in the previous listing, forRoot takes a configuration object. After this, the application can get hold of the LoggerService and use it:
```typescript
import { LoggerService } from '@my/logger-lib';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html'
})
export class AppComponent {
  constructor(private logger: LoggerService) {
    logger.debug('Hello World!');
    logger.log('Application started');
  }
}
```

To prevent the need for importing the module manually and for remembering the structure of the configuration object, the following sections present a schematic for this.

Schematics is currently an Angular Labs project. Its public API is experimental and can change in the future.

Getting Started

To get started, you need to install version 6 of the Angular CLI. Make sure to fetch Beta 5 or higher:

```
npm i -g @angular/cli@~6.0.0-beta
```

You also need the Schematics CLI:

```
npm install -g @angular-devkit/schematics-cli
```

The above-mentioned logger library can be found in the start branch of my sample:

```
git clone https://github.com/manfredsteyer/schematics-ng-add
cd schematics-ng-add
git checkout start
```

After checking out the start branch, npm install its dependencies:

```
npm install
```

If you want to learn more about setting up a library project from scratch, I recommend the resources outlined in the readme of ng-packagr.

Adding an ng-add Schematic

As we now have everything in place, let's add a schematics project to the library. For this, we just run the blank Schematics in the project's root:

```
schematics blank --name=schematics
```

This generates a folder structure in which src/schematics contains an empty schematic. As ng add looks for an ng-add schematic, let's rename it. In the index.ts file in the ng-add folder we find a factory function. It returns a Rule [...]
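Conceptually, that factory-function pattern is simple: a Rule is a function that receives the staged file tree and returns a modified one. Here is a dependency-free TypeScript sketch of the idea, using simplified stand-ins for the real @angular-devkit/schematics types (which are richer) and a hypothetical naive string edit:

```typescript
// Simplified stand-ins for Tree and Rule; the real types live in
// @angular-devkit/schematics. Illustration only.
type Tree = Map<string, string>;
type Rule = (tree: Tree) => Tree;

// A factory function like the generated one: it takes options and
// returns a Rule. This hypothetical rule registers LoggerModule in
// the root module via a naive string replacement.
function ngAdd(options: { module: string }): Rule {
  return (tree: Tree) => {
    const content = tree.get(options.module);
    if (!content) return tree; // nothing to do if the module file is missing
    const updated =
      `import { LoggerModule } from '@my/logger-lib';\n` +
      content.replace(
        'imports: [',
        "imports: [\n    LoggerModule.forRoot({ enableDebug: true }),"
      );
    tree.set(options.module, updated);
    return tree;
  };
}

const tree: Tree = new Map([
  ['/src/app/app.module.ts', '@NgModule({ imports: [] })\nexport class AppModule { }'],
]);
const result = ngAdd({ module: '/src/app/app.module.ts' })(tree);
```

The real schematic runtime adds staging, dry runs and composition of rules on top of this core shape, but "Tree in, Tree out" is the mental model.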



New Azure regions

Wed, 14 Mar 2018 16:30:14 Z

Today Microsoft announced a major expansion of its Azure data centers in Europe. Two new Azure regions in Switzerland were announced, along with two additional regions in Germany and the go-live of the two completed regions in France. Unlike the existing ones, the two additional regions in Germany will be part of the international Azure data centers. ...



First preview version of .NET Core 2.1 & Co.

Mon, 12 Mar 2018 09:49:00 +0100

Just barely within Microsoft's planned time frame, the first preview version was released on February 27 after all.



Running and Coding

Mon, 12 Mar 2018 00:00:00 Z

I wasn't really sporty until one and a half years ago, but I was active anyway. I had to be, with three little kids and a sporty and lovely wife. But still, a job where I mostly sit in a comfortable chair, plus great food and good southern German beers, did its work. When I first met my wife, I weighed around 80 kg, which is fine for my height of 178 cm. But my weight increased up to 105 kg by Christmas 2015. That was way too much, I thought. Until then I had always tried to reduce it by doing some more cycling, more hiking and some gym, but it didn't really work well. Anyway, there is no more effective way to lose weight than running; it is, by the way, three times more effective than cycling. I had tried it a lot in the past, but it hurts quite a bit in the lower legs, and I stopped.

Running the agile way

At Easter 2016 I tried it again in a slightly different way, and it worked. I approached it the same way as a perfect software project: I did it in an agile way, using pretty small goals to get as much success as possible. I also bought myself a fitness watch to count steps, calories and levels, and to measure the heart rate while running, to have some more challenges. At the same time I changed my food a lot. It sounds weird and funny, but it worked really well. I have lost 20 kg since then! I think it was important not to set huge goals. I just wanted to lose 20 kg; I didn't set a time limit or anything like that. I knew it hurts in the lower legs while running, so I started to learn a lot about running and the different styles of running. I chose easy running, which worked pretty well with natural running shoes and barefoot shoes. This also worked well for me.

Finding time to run

Finding time was the hardest thing. I discussed it a lot with the family, and we figured out that the best time to run was lunch time, because I need to walk the dog anyway, and so I could run with the dog. This was also a good thing for our huge dog.
Running at lunch time had another advantage: it clears my brain a little after four to five hours of work. (Yes, I start between 7 and 8 in the morning.) Running is great when you are working on software projects with a high level of complexity. Unfortunately, when I'm working in Basel I cannot go running, because there is no shower available. Starting to run The first runs were a real pain. I chose a small lap of 2.5 km, because learning to run was the first step. Also, because of the pain in the lower legs, I chose to run shorter stretches uphill. Why? Because this is more exhausting than running on the flat. So I had short uphill running phases and longer quick-walking phases. Just a few runs later, the running phases started to get longer and longer. This was the first success, after just a few runs. That was great. It was even greater when I finished my first full kilometer, after one and a half months of running every second day. That was amazing. On every run there was a success, and that really pushed me. And not only did the running succeed, I also started to lose weight, which pushed me even more. So the pain wasn't too hard, and I continued running. Some weeks later I ran the entire 2.5 km lap. Not really fast, but I was running the whole lap without a walking pause. Some more motivation. I continued running just these 2.5 km for a few more weeks to set some personal records on this lap. Low carb I mentioned the change in food. I switched to a low-carb diet, which is in general a way to reduce the consumption of sugar - every kind of sugar, which includes bread,[...]



Creating a chat application using React and ASP.NET Core - Part 5

Tue, 06 Mar 2018 00:00:00 Z

In this blog series, I'm going to create a small chat application using React and ASP.NET Core, to learn more about React and to learn how React behaves in an ASP.NET Core project during development and deployment. This series is divided into 5 parts, which should cover all relevant topics: React Chat Part 1: Requirements & Setup React Chat Part 2: Creating the UI & React Components React Chat Part 3: Adding Websockets using SignalR React Chat Part 4: Authentication & Storage React Chat Part 5: Deployment to Azure I also set up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo. Feel free to share your ideas about the topic in the comments below or in issues on GitHub. Because I'm still learning React, please tell me about significant and conceptual errors by dropping a comment or by creating an issue on GitHub. Thanks. Intro In this post I will write about the deployment of the app to Azure App Services. I will use CAKE to build, pack and deploy the apps, both the identity server and the actual app. I will run the build on AppVeyor, which is a free build server for open source projects and works great for projects hosted on GitHub. I'll not go deep into the AppVeyor configuration; the important topics are CAKE, Azure and the app itself. BTW: SignalR moved to the next version in the last weeks. It is no longer alpha. The current version is 1.0.0-preview1-final. I updated the version in the package.json and in the ReactChatDemo.csproj. Also, the NPM package name changed from "@aspnet/signalr-client" to "@aspnet/signalr". I needed to update the import statement in the WebsocketService.ts file as well. After updating SignalR I got some small breaking changes, which were easily fixed. (Please see the GitHub repo to learn about the changes.) Setup CAKE CAKE is a build DSL that is built on top of Roslyn to use C#. CAKE is open source and has a huge community, which creates a ton of add-ins for it. 
It also has a lot of built-in features. Setting up CAKE is easily done. Just open PowerShell and cd to the solution folder. Now you need to load a PowerShell script that bootstraps the CAKE build and loads more dependencies if needed: Invoke-WebRequest https://cakebuild.net/download/bootstrapper/windows -OutFile build.ps1 Later on, you run build.ps1 to start your build script. Now the setup is complete and I can start to create the actual build script. I created a new file called build.cake. To edit the file it makes sense to use Visual Studio Code, because @code also has IntelliSense; in Visual Studio 2017 you only have syntax highlighting. Currently I don't know of an add-in for VS that enables IntelliSense. My starting point for every new build script is the simple example from the quick start demo: var target = Argument("target", "Default"); Task("Default") .Does(() => { Information("Hello World!"); }); RunTarget(target); The script then gets started by calling build.ps1 in PowerShell: .\build.ps1 If this is working, I'm able to start hacking the CAKE script together. The build steps I usually use look like this: Cleaning the workspace Restoring the packages Building the solution Running unit tests Publishing the app (in the context of a non-web application this means packaging the app) Deploying the app To deploy the app I use the CAKE Kudu client add-in, and I need to pass in some Azure App Service credentials. You get these credenti[...]



Mobile Developer After-Work #17

Mon, 05 Mar 2018 19:17:27 Z

Mobile Developer After-Work #17 Progressive Web Apps, Blockchain and Mixed Reality Wednesday, 21 March 2018, 17:30 – 21:30 Raum D, Museumsquartier, 1070 Vienna Register now! The tech world is currently dominated by three topics. At #mdaw17 you'll find out what's behind them: How do you build a Progressive Web App in 30 minutes? What do blockchains offer for serious business cases? Mixed...



Computer Vision Hackfest in Paris

Sun, 04 Mar 2018 08:00:00 Z

For everyone working with computer vision technologies, we are offering a great opportunity in Paris from 18 to 20 April to continue working on your own computer-vision-based scenario or project together with software engineers from CSE (Commercial Software Engineering). Our experts can help you build a prototype of your solution. During the hackfest we will show you how to use Microsoft AI/open-source technologies, Azure Machine Learning, Cognitive Services and other Microsoft offerings to implement your project.



Artificial Intelligence Hackfest in Belgium

Fri, 02 Mar 2018 15:53:03 Z

From 9 to 13 April we are hosting an Artificial Intelligence Hackfest in Belgium. For everyone already working on AI, it is a great opportunity to continue working on your own project together with software engineers from CSE (Commercial Software Engineering). Our experts can help you build a prototype of your AI solution. During the hackfest we will show you how to use Azure Machine Learning, Cognitive Services and other Microsoft AI offerings to turn your company data into intelligent insights.



Creating a chat application using React and ASP.NET Core - Part 4

Thu, 01 Mar 2018 00:00:00 Z

In this blog series, I'm going to create a small chat application using React and ASP.NET Core, to learn more about React and to learn how React behaves in an ASP.NET Core project during development and deployment. This series is divided into 5 parts, which should cover all relevant topics: React Chat Part 1: Requirements & Setup React Chat Part 2: Creating the UI & React Components React Chat Part 3: Adding Websockets using SignalR React Chat Part 4: Authentication & Storage React Chat Part 5: Deployment to Azure I also set up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo. Feel free to share your ideas about the topic in the comments below or in issues on GitHub. Because I'm still learning React, please tell me about significant and conceptual errors by dropping a comment or by creating an issue on GitHub. Thanks. Intro My idea for this app is to split the storage between a storage for flexible objects and one for immutable objects. The flexible objects are the users and the users' metadata in this case. Immutable objects are the chat messages: the messages are just stored one by one and will never change. Storing a message doesn't need to be super fast, but reading the messages needs to be as fast as possible. This is why I want to go with Azure Table Storage, which is one of the fastest storages on Azure. In the past, at YooApps, we also used it as an event store for CQRS-based applications. Handling the users doesn't need to be super fast either, because we only handle one user at a time. We don't read all of the users in one go and we don't do batch operations on them. So using a SQL storage with IdentityServer4 on e.g. an Azure SQL Database should be fine. The users currently online will be stored in memory only, which is the third storage. The memory is safe in this case because, if the app shuts down, the users need to log on again anyway and the list of online users gets refilled. 
And it is not really critical if the list of online users is not in sync with the logged-on users. This leads to three different storages: Users: Azure SQL Database, handled by IdentityServer4 Users online: memory, handled by the chat app (a singleton instance of a user tracker class) Messages: Azure Table Storage, handled by the chat app (using the SimpleObjectStore and the Azure Table Storage provider) Setup IdentityServer4 To keep the samples easy, I do the logon of the users on the server side only. (I'll go through the SPA logon using React and IdentityServer4 in another blog post.) That means we are validating and using the sender's name on the server side only - in the MVC controller, the API controller and in the SignalR hub. It is recommended to set up IdentityServer4 in a separate web application, and we will do it the same way. So I followed the quickstart documentation on the IdentityServer4 web site, created a new empty ASP.NET Core project and added the IdentityServer4 NuGet packages, as well as the MVC package and the StaticFiles package. I first planned to use ASP.NET Core Identity with IdentityServer4 to store the identities, but I changed that to keep the samples simple. For now I only use the in-memory configuration you can see in the quickstart tutorials; I'm able to use ASP.NET Identity or any other custom SQL storage implementation later on. I also copied the IdentityServer4 UI code from the I[...]
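The singleton user tracker mentioned in the storage list above can be sketched in a few lines. This is only an illustration of the idea, written in TypeScript — the class and member names (UserTracker, join, leave, users) are my own assumptions, not code from the react-chat-demo repository:

```typescript
// Hypothetical sketch of the in-memory "users online" store.
// All names here are illustrative, not taken from the demo repo.
class UserTracker {
  private static instance: UserTracker | undefined;
  // connection id -> user name
  private online = new Map<string, string>();

  // singleton access, since the post describes a singleton tracker class
  static get current(): UserTracker {
    if (!UserTracker.instance) {
      UserTracker.instance = new UserTracker();
    }
    return UserTracker.instance;
  }

  join(connectionId: string, userName: string): void {
    this.online.set(connectionId, userName);
  }

  leave(connectionId: string): void {
    this.online.delete(connectionId);
  }

  // the list of online users the chat app can hand to new clients
  get users(): string[] {
    return Array.from(this.online.values());
  }
}

const tracker = UserTracker.current;
tracker.join("conn-1", "Joe");
tracker.join("conn-2", "Mary");
tracker.leave("conn-1"); // Joe disconnects
console.log(tracker.users); // only Mary remains online
```

If the app shuts down, this list is simply gone and gets refilled as users log on again, which is exactly why in-memory storage is acceptable here.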



Use Azure Automation to create Resource Groups despite having only limited permissions

Wed, 28 Feb 2018 09:37:44 Z

In Azure-related engagements I often observe that Azure users are assigned only the Contributor role on one single Azure Resource Group. In general, the motivation for this is… Users can provision resources within this Resource Group Users are protected from accidentally deleting resources in other Resource Groups (e.g. settings of a Virtual Network, security settings, …) On the...



Windows Fall Creators Update 1709 and Docker Windows Containers

Tue, 27 Feb 2018 23:35:00 Z

Who shrunk my Windows Docker image? We started to package our ASP.NET/WCF/full-.NET-Framework-based web app into Windows Containers, which we then publish to the Docker Hub. One day we discovered that one of our new build machines produced Windows Containers only half the size: instead of an 8 GB Docker image we only got a 4 GB Docker image. Yeah, right? The problem with Windows Server 2016 I was able to run the 4 GB Docker image on my development machine without any problems and I thought that this might be a great new feature (it is… but!). My boss then told me that he was unable to run it on our Windows Server 2016. The issue: Windows 10 Fall Creators Update After some googling around we found the problem: our build machine was a Windows 10 OS with the most recent "Fall Creators Update" (v1709) (which was a bad idea from the beginning, because if you want to run Docker as a service you will need a Windows Server!). The older build machine, which produced the much larger Docker image, was running with the normal Creators Update from March(?). Docker resolves the base images for Windows like this: If you pull the ASP.NET Docker image from a Windows 10 client OS with the Fall Creators Update, you will get the 4.7.1-windowsservercore-1709 image. If you pull it from Windows Server 2016 or an older Windows 10 client OS, you will get the 4.7.1-windowsservercore-ltsc2016 image. Compatibility issue As it turns out: you can't run the smaller Docker images on Windows Server 2016. Currently it is only possible via the preview "Windows Server, version 1709" or on the Windows 10 client OS. Oh… and the new Windows Server is not a simple update to Windows Server 2016; instead it is a completely new version. Thanks, Microsoft. Workaround Because we need to run our images on Windows Server 2016, we just target the LTSC2016 base image, which will produce 8 GB Docker images (which sucks, but works for us). 
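The workaround amounts to pinning the base image tag explicitly instead of letting Docker resolve it from the host OS. A minimal Dockerfile sketch — the tag is the one named in the post, while the microsoft/aspnet repository name and the COPY paths are my assumptions:

```dockerfile
# Pin the LTSC2016 base image so the resulting container still runs on
# Windows Server 2016; the ...-1709 tag would require "Windows Server,
# version 1709" or Windows 10 with the Fall Creators Update.
# NOTE: repository name and COPY paths are illustrative assumptions.
FROM microsoft/aspnet:4.7.1-windowsservercore-ltsc2016

# copy the published web app into the default IIS site
COPY ./publish/ /inetpub/wwwroot
```

Pinning the tag trades image size for compatibility, exactly as described above.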
Further links: This post could also be in the RTFM category, because there are some notes about this on the Docker page, but they were quite easy to overlook ;) Preview: Docker for Windows Server 1709 and Windows 10 Fall Creators Update FAQ on Windows Server, version 1709 and the Semi-Annual Channel Stefan Scherer has some good information on this topic as well [...]



Global Azure Bootcamp, 21 April 2018, Linz

Mon, 26 Feb 2018 08:37:59 Z

On 21 April 2018, hundreds of workshops on cloud computing and Microsoft Azure will take place around the world as part of the Global Azure Bootcamp. The past years were a great success, so in 2018 we will again take part in Austria, at the Wissensturm in Linz. In 2017 we clearly broke the 140-attendee mark for the first time. This year we want to top that record and hope that again...



Advanced Developers Conference C++ / ADC++ 2018 in Burghausen, 15–16 May 2018

Fri, 23 Feb 2018 14:30:33 Z

The next Advanced Developers Conference C++ will take place on 15 and 16 May 2018, with full-day workshops on 14 May. This time it is held in Burghausen, the home town of the ppedv team, under the proven leadership of Hannes Preishuber. Given the ppedv team's home advantage, one may be curious what kind of evening event will be planned. These have […]



Examples of Target Agreements

Tue, 20 Feb 2018 10:12:18 Z

General notes This post is meant as a small aid for all those who have a hard time finding good goals. If, like us, you are working toward a situation where employees derive their individual goals from the company strategy themselves and the manager only has to play a supporting role (e.g. providing resources), you will first have to communicate a lot …



Register for Microsoft Build – the countdown is running

Wed, 14 Feb 2018 16:00:00 Z

Microsoft Build takes place this year from 7 to 9 May in Seattle. Registration opens tomorrow, 15 February, at 18:00. Time is running out! Experience shows that the event sells out within a very short time. So if you still need to ask your boss, do so as quickly as possible. To register, simply go to the Microsoft Build website...



Creating a chat application using React and ASP.NET Core - Part 3

Tue, 13 Feb 2018 00:00:00 Z

In this blog series, I'm going to create a small chat application using React and ASP.NET Core, to learn more about React and to learn how React behaves in an ASP.NET Core project during development and deployment. This series is divided into 5 parts, which should cover all relevant topics: React Chat Part 1: Requirements & Setup React Chat Part 2: Creating the UI & React Components React Chat Part 3: Adding Websockets using SignalR React Chat Part 4: Authentication & Storage React Chat Part 5: Deployment to Azure I also set up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo. Feel free to share your ideas about the topic in the comments below or in issues on GitHub. Because I'm still learning React, please tell me about significant and conceptual errors by dropping a comment or by creating an issue on GitHub. Thanks. About SignalR SignalR for ASP.NET Core is a framework to enable Websocket communication in ASP.NET Core applications. Modern browsers already support Websocket, which is part of the HTML5 standard. For older browsers, SignalR provides a fallback based on standard HTTP/1.1. SignalR is basically a server-side implementation based on ASP.NET Core and Kestrel. It uses the same dependency injection mechanism and can be added to the application via a NuGet package. Additionally, SignalR provides various client libraries to consume Websockets in client applications. In this chat application, I use @aspnet/signalr-client loaded via NPM. The package also contains the TypeScript definitions, which makes it easy to use in a TypeScript application like this one. I added the React NuGet package in the first part of this blog series. To enable SignalR I need to add it to the ServiceCollection: services.AddSignalR(); The server part In C#, I created a ChatService that will later be used to connect to the data storage. For now it uses a dictionary to store the messages and works with this dictionary. 
I don't show this service here, because the implementation is not relevant and will change later on. But I use this service in the code I show here. The service is mainly used in the ChatController, the Web API controller that loads some initial data, and in the ChatHub, which is the Websocket endpoint for this chat. The service gets injected via dependency injection, which is configured in the Startup.cs: services.AddSingleton<IChatService, ChatService>(); Web API The ChatController is simple; it just contains GET methods. Do you remember the last posts? The initial data of the logged-on users and the first chat messages were defined in the React components. I moved this to the ChatController on the server side: [Route("api/[controller]")] public class ChatController : Controller { private readonly IChatService _chatService; public ChatController(IChatService chatService) { _chatService = chatService; } // GET: api/ [HttpGet("[action]")] public IEnumerable<UserDetails> LoggedOnUsers() { return new[]{ new UserDetails { Id = 1, Name = "Joe" }, new UserDetails { Id = 3, Name = "Mary"[...]
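The dictionary-based ChatService described above can be pictured with a small TypeScript sketch. The real service is C# and lives in the repository; all names below (InMemoryChatService, addMessage, lastMessages) are illustrative assumptions:

```typescript
// Illustrative sketch of a dictionary-backed chat store, mirroring the
// temporary dictionary-based ChatService described in the post.
// All names here are assumptions, not code from the demo repository.
interface ChatMessage {
  id: number;
  userName: string;
  text: string;
}

class InMemoryChatService {
  private messages = new Map<number, ChatMessage>();
  private nextId = 1;

  // called when a new message arrives (later via the SignalR hub)
  addMessage(userName: string, text: string): ChatMessage {
    const message: ChatMessage = { id: this.nextId++, userName, text };
    this.messages.set(message.id, message);
    return message;
  }

  // e.g. what a Web API GET method could return as initial data
  lastMessages(count: number): ChatMessage[] {
    return Array.from(this.messages.values()).slice(-count);
  }
}

const service = new InMemoryChatService();
service.addMessage("Joe", "Hi!");
service.addMessage("Mary", "Hello Joe");
console.log(service.lastMessages(50).length); // two messages stored
```

The real ChatController would expose the result of such a lookup through a GET method, as shown in the post's C# snippet.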



Enabling TCP connections for SQL Server via PowerShell

Fri, 09 Feb 2018 17:03:00 +0100

On one machine, the "SQL Server Configuration Manager" would no longer start; it was needed to enable the TCP protocol as a "client protocol" for accessing the Microsoft SQL Server.



Creating a chat application using React and ASP.NET Core - Part 2

Wed, 07 Feb 2018 00:00:00 Z

In this blog series, I'm going to create a small chat application using React and ASP.NET Core, to learn more about React and to learn how React behaves in an ASP.NET Core project during development and deployment. This series is divided into 5 parts, which should cover all relevant topics: React Chat Part 1: Requirements & Setup React Chat Part 2: Creating the UI & React Components React Chat Part 3: Adding Websockets using SignalR React Chat Part 4: Authentication & Storage React Chat Part 5: Deployment to Azure I also set up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo. Feel free to share your ideas about the topic in the comments below or in issues on GitHub. Because I'm still learning React, please tell me about significant and conceptual errors by dropping a comment or by creating an issue on GitHub. Thanks. Basic Layout First let's have a quick look at the hierarchy of the React components in the folder ClientApp. The app gets bootstrapped within the boot.tsx file. This is the first sort of component, where the AppContainer gets created and the router is placed. This file also contains the call to render the React app in the relevant HTML element, which in this case is a div with the ID react-app. It is a div in Views/Home/Index.cshtml. This component also renders the content of routes.tsx. This file contains the route definitions wrapped inside a Layout element. The Layout element is defined in layout.tsx inside the components folder. The routes.tsx also references three more components out of the components folder: Home, Counter and FetchData. 
So it seems the router renders the specific components, depending on the requested path, inside the Layout element: // routes.tsx import * as React from 'react'; import { Route } from 'react-router-dom'; import { Layout } from './components/Layout'; import { Home } from './components/Home'; import { FetchData } from './components/FetchData'; import { Counter } from './components/Counter'; export const routes = <Layout> <Route exact path='/' component={ Home } /> <Route path='/counter' component={ Counter } /> <Route path='/fetchdata' component={ FetchData } /> </Layout>; As expected, the Layout component then defines the basic layout and renders the contents into a Bootstrap grid column element. I changed that a little bit to render the contents directly into the fluid container, and the menu is now outside the fluid container. This component now contains less code than before: import * as React from 'react'; import { NavMenu } from './NavMenu'; export interface LayoutProps { children?: React.ReactNode; } export class Layout extends React.Component<LayoutProps, {}> { public render() { return <div className='container-fluid'> {this.props.children} </div>; } } I also changed the NavMenu component to place the menu on top of the page using the typical Bootstrap styles. (Visit the repository for more details.) My chat g[...]



Upcoming Talks

Tue, 06 Feb 2018 09:27:00 +0100

The Dotnet-Doktor will again give several public talks over the coming three months. Here is an overview of the dates.



Introduction to React, Part 6: Reusing Code

Mon, 05 Feb 2018 10:47:00 +0100

React components can be reused; a distinction is made between presentational components and functional components. There are several approaches to reuse, including container components and higher-order components. How do they work?



Creating a chat application using React and ASP.NET Core - Part 1

Mon, 05 Feb 2018 00:00:00 Z

In this blog series, I'm going to create a small chat application using React and ASP.NET Core, to learn more about React and to learn how React behaves in an ASP.NET Core project during development and deployment. This series is divided into 5 parts, which should cover all relevant topics: React Chat Part 1: Requirements & Setup React Chat Part 2: Creating the UI & React Components React Chat Part 3: Adding Websockets using SignalR React Chat Part 4: Authentication & Storage React Chat Part 5: Deployment to Azure I also set up a GitHub repository where you can follow the project: https://github.com/JuergenGutsch/react-chat-demo. Feel free to share your ideas about the topic in the comments below or in issues on GitHub. Because I'm still learning React, please tell me about significant and conceptual errors by dropping a comment or by creating an issue on GitHub. Thanks. Requirements I want to create a small chat application that uses React, SignalR and ASP.NET Core 2.0. The frontend should be created using React. The backend serves a Websocket endpoint using SignalR and some basic Web API endpoints to fetch some initial data and some lookup data, and to do the authentication (I'll use IdentityServer4 for the authentication). The project setup should be based on the Visual Studio React project I introduced in one of the last posts. The UI should be clean and easy to use. It should be possible to use the chat without a mouse, so the focus is also on usability and basic accessibility. We will have a large chat area to display the messages, with an input field for the messages below it. The Return key should be the primary method to send a message. There's one additional button to select emojis using the mouse, but basic emojis should also be available as text symbols. On the left side, I'll create a list of online users. Every newly logged-on user should be mentioned in the chat area. The user list should be auto-updated after a user logs on. 
We will use SignalR here too. User list: using SignalR; small area on the left-hand side; initially fetching the logged-on users using Web API. Chat area: using SignalR; large area on the right-hand side; initially fetching the last 50 messages using Web API. Message field: below the chat area; the Enter key should send the message; emojis using text symbols. Storing the chat history in a database (using Azure Table Storage). Authentication using IdentityServer4. Project setup The initial project setup is easy and already described in one of the last posts. I'll just do a quick introduction here. You can either use Visual Studio 2017 to create a new project or the .NET CLI: dotnet new react -n react-chat-app It takes some time to fetch the dependent packages; especially the NPM packages are a lot. The node_modules folder contains around 10k files and requires 85 MB on disk. I also added "@aspnet/signalr-client": "1.0.0-alpha2-final" to the package.json. Don't be confused by the documentation: in the GitHub repository they wrote that the NPM name signalr-client should no longer be us[...]
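The requirement that basic emojis are also available as text symbols could be met with a simple replacement map. A TypeScript sketch — the map contents and the function name replaceEmoticons are my assumptions, not code from the demo:

```typescript
// Illustrative sketch: replace common text emoticons with emoji characters.
// The map and the function name are assumptions, not code from the demo repo.
const emoticons: ReadonlyMap<string, string> = new Map([
  [":)", "😊"],
  [":(", "😞"],
  [";)", "😉"],
]);

function replaceEmoticons(message: string): string {
  let result = message;
  // split/join replaces every occurrence of the plain-text symbol
  emoticons.forEach((emoji, symbol) => {
    result = result.split(symbol).join(emoji);
  });
  return result;
}

console.log(replaceEmoticons("Hi :)"));
```

Because the replacement works on plain strings, it covers messages typed entirely without a mouse, matching the usability goal above.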



Another GraphQL library for ASP.NET Core

Fri, 02 Feb 2018 00:00:00 Z

I recently read an interesting tweet by Glenn Block about a GraphQL app running on the Linux Subsystem for Windows:


It is impressive to run a .NET Core app on Linux on Windows, where Linux is not a virtual machine on Windows. I never had the chance to try that; I have only played a little bit with the Linux Subsystem for Windows. The second thought that came to mind was: "wow, did he use my GraphQL middleware library or something else?"

He uses different libraries, as you can see in his repository on GitHub: https://github.com/glennblock/orders-graphql

  • GraphQL.Server.Transports.AspNetCore
  • GraphQL.Server.Transports.WebSockets

These libraries are built by the makers of graphql-dotnet. The project is hosted in the graphql-dotnet organization on GitHub: https://github.com/graphql-dotnet/server. They also provide a middleware that can be used in ASP.NET Core projects. The cool thing about that project is the WebSocket endpoint for GraphQL.

What about the GraphQL middleware I wrote?

Because my GraphQL middleware is also based on graphql-dotnet, I'm not yet sure whether to continue maintaining it or to retire the project. I don't know yet what to do, but I'll try the other implementation to find out more.

I'm pretty sure the contributors of the graphql-dotnet project know a lot more about GraphQL and their library than I do. Both projects will work the same way and will return the same result - hopefully. The only differences are the API and the configuration. The only reason to continue working on the project would be to learn more about GraphQL or to maybe provide a better API ;-)

If I retire my project, I would try to contribute to the graphql-dotnet projects.

What do you think? Drop me a comment and tell me.




WCF Global Fault Contracts

Wed, 31 Jan 2018 23:35:00 Z

If you are still using WCF you might have stumbled upon this problem: WCF allows you to throw certain faults in your operations, but unfortunately it is a bit awkward to configure if you want "global fault contracts". With the solution shown here it should be pretty easy to get "global faults": Define the fault on the server side Let's say we want to throw the following fault in all our operations: [DataContract] public class FoobarFault { } Register the fault The tricky part in WCF is to "configure" WCF so that it will populate the fault. You can do this manually via the [FaultContract] attribute on each operation, but if you are looking for a global WCF fault configuration, you need to apply it as a contract behavior like this: [AttributeUsage(AttributeTargets.Interface, AllowMultiple = false, Inherited = true)] public class GlobalFaultsAttribute : Attribute, IContractBehavior { // this is a list of our global fault detail classes. static Type[] Faults = new Type[] { typeof(FoobarFault), }; public void AddBindingParameters( ContractDescription contractDescription, ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { } public void ApplyClientBehavior( ContractDescription contractDescription, ServiceEndpoint endpoint, ClientRuntime clientRuntime) { } public void ApplyDispatchBehavior( ContractDescription contractDescription, ServiceEndpoint endpoint, DispatchRuntime dispatchRuntime) { } public void Validate( ContractDescription contractDescription, ServiceEndpoint endpoint) { foreach (OperationDescription op in contractDescription.Operations) { foreach (Type fault in Faults) { op.Faults.Add(MakeFault(fault)); } } } private FaultDescription MakeFault(Type detailType) { string action = detailType.Name; DescriptionAttribute description = (DescriptionAttribute) Attribute.GetCustomAttribute(detailType, typeof(DescriptionAttribute)); if (description != null) action = description.Description; FaultDescription fd = new FaultDescription(action); fd.DetailType = detailType; fd.Name = detailType.Name; return fd; } } Now we can apply this contract behavior to the service just like this: [ServiceBehavior(...), GlobalFaults] public class FoobarService ... To use our fault, just throw it as a FaultException: throw new FaultException<FoobarFault>(new FoobarFault(), "Foobar happened!"); Client side On the client side you should now be able to catch this exception just like this: try { ... } catch (Exception ex) { if (ex is FaultException<FoobarFault> faultException) { if (faultException.Action == nameof(FoobarFault)) { ... } } } Hope this helps! (This old topic was still on my "to-blog" list; even if WCF is quite old, mayb[...]



The ASP.NET Core React Project

Wed, 31 Jan 2018 00:00:00 Z

In the last post I had a first look into a plain, clean and lightweight React setup. I'm still impressed how easy the setup is and how fast the loading of a React app really is. Before trying to push this setup into an ASP.NET Core application, it makes sense to have a look into the ASP.NET Core React project. Create the React project You can either use the "File > New Project..." dialog in Visual Studio 2017 or the .NET CLI to create a new ASP.NET Core React project: dotnet new react -n MyPrettyAwesomeReactApp This creates a ready-to-go React project. The first impression At first glance I saw the webpack.config, which is cool. I really love Webpack: I love how it works, how it bundles the relevant files recursively and how much time it saves. A tsconfig.json is also available in the project, which means the React code will be written in TypeScript. Webpack compiles the TypeScript into JavaScript and bundles it into an output file called main. (Remember: in the last post the JavaScript code was written in ES6 and transpiled using Babel.) The TypeScript files are in the folder ClientApp, and the transpiled and bundled Webpack output gets moved to the wwwroot/dist/ folder. This is nice. The build in VS2017 runs Webpack; this is hidden in MSBuild tasks inside the project file. To see more, you need to have a look into the project file by right-clicking the project and selecting Edit projectname.csproj. You'll then find an ItemGroup with the removed ClientApp folder. And there are two targets which have definitions for the Debug and Publish builds defined: