Subscribe: DotNetGerman Bloggers
http://blogs.dotnetgerman.com/mainfeed.aspx
Language: German
DotNetGerman Bloggers

All blogs from DotNetGerman.com



Copyright: Copyright 2004-2014 DotNetGerman.com
 



.NET Conf 2017: free on the internet today

Tue, 19 Sep 2017 09:00:00 +0200

Today at 5 p.m. the fourth edition of Microsoft's virtual .NET developer conference gets under way.



SharePoint Online device access option missing; solution: First Release for everyone

Sun, 17 Sep 2017 23:11:09 +0200

Conditional Access (CA) has become indispensable in my projects (Office 365 / 100% cloud). For us "SharePoint" people, CA has a special flavor: one of the first session-based CA policies works with SharePoint and thus enables additional controls when accessing SharePoint. Microsoft described the whole thing in a blog post. It has now been announced that the next round is coming and that the configuration will no longer apply to the whole tenant but will address individual site collections. On that occasion I wanted to straighten out my AAD + SPO configuration again, but lo and behold: something is missing! In the SharePoint Admin Center, the "Device Access" section lacks the crucial part: "Control access from devices that aren't compliant or joined to a domain". So why? My tenant is on "First Release", so it should work. A quick bit of research led me to a surprising result (source: Microsoft Support, "Control access from unmanaged devices"): my tenant had in fact been set to the option "First release for selected users" for a test. The solution: as described in the support document, I changed the setting to "First release for everyone". A short while later (it does not take effect immediately), I was able to complete the desired "Device Access" configuration via the SharePoint Admin Center. In the past I had always searched for the differences between the two "First Release" settings; this case is the first one known to me. [...]



Clean configuration in Entity Framework Core with IEntityTypeConfiguration

Fri, 15 Sep 2017 11:00:00 +0200

Entity Framework Core 2.0 now offers the IEntityTypeConfiguration interface, which lets you implement a separate configuration class for a single entity type.



Part 1/2: Xamarin Case Study: Smart Energy Management System by Levion

Fri, 15 Sep 2017 07:41:00 Z

Over the coming months we want to tell you about ISVs and startups with whom we are working on various technologies and running into interesting technical questions. We want to share such solutions with you; maybe they will help you in your projects or inspire new approaches. Levion Technologies GmbH is a Graz-based startup with an innovative solution in the field of energy management. The solution presented here in part also has a public API, so that developers can build further solutions on top of it.



Info day: software developer update 2017/2018 for .NET and web developers

Tue, 12 Sep 2017 11:10:00 +0200

A full-day info event offers a potpourri of current topics for software developers working on .NET and web applications: .NET, .NET Core, C#, Visual Studio, TypeScript, Angular, TFS/VSTS, DevOps, Docker, and cloud.



IEC 61131-3: Parameter transfer via parameter list

Sun, 10 Sep 2017 18:42:00 Z

Parameter lists are an interesting alternative for transferring parameters to PLC libraries. Strictly speaking, these are global constants (VAR_GLOBAL CONSTANT) whose initialization values can be edited in the Library Manager. When declaring arrays, their boundaries must be defined as constants: at the time of compilation, it must be known how large the array should be. […]



angular-oauth2-oidc 2.1 released

Sun, 10 Sep 2017 00:00:00 +0100

Over the last weeks, I've worked on version 2.1 of my OpenId Connect (OIDC) certified library angular-oauth2-oidc, which allows for implementing authentication in Angular using external identity providers that support OAuth 2 or OIDC. Here are the added features:

New Config API (the original one is still supported). This allows putting the whole configuration into its own file:

    // auth.config.ts
    import { AuthConfig } from 'angular-oauth2-oidc';

    export const authConfig: AuthConfig = {
        // Url of the Identity Provider
        issuer: 'https://steyer-identity-server.azurewebsites.net/identity',
        // URL of the SPA to redirect the user to after login
        redirectUri: window.location.origin + '/index.html',
        // The SPA's id. The SPA is registered with this id at the auth-server
        clientId: 'spa-demo',
        // Set the scope for the permissions the client should request.
        // The first three are defined by OIDC; the 4th is a use-case-specific one.
        scope: 'openid profile email voucher',
    }

After defining the configuration object, you can pass it to the library's configure method:

    import { OAuthService } from 'angular-oauth2-oidc';
    import { JwksValidationHandler } from 'angular-oauth2-oidc';
    import { authConfig } from './auth.config';
    import { Component } from '@angular/core';

    @Component({
        selector: 'flight-app',
        templateUrl: './app.component.html'
    })
    export class AppComponent {
        constructor(private oauthService: OAuthService) {
            this.configureWithNewConfigApi();
        }

        private configureWithNewConfigApi() {
            this.oauthService.configure(authConfig);
            this.oauthService.tokenValidationHandler = new JwksValidationHandler();
            this.oauthService.loadDiscoveryDocumentAndTryLogin();
        }
    }

New convenience methods in OAuthService to streamline default tasks:

    setupAutomaticSilentRefresh()
    loadDiscoveryDocumentAndTryLogin()

Single sign-out through Session Status Change Notification according to the OpenID Connect Session Management spec. This means you can be notified when the user logs out at the login provider. To use this feature, your identity provider needs to support it. You also have to activate it in the configuration:

    import { AuthConfig } from 'angular-oauth2-oidc';

    export const authConfig: AuthConfig = {
        [...]
        sessionChecksEnabled: true,
        sessionCheckIntervall: 3 * 1000
    }

The optional configuration option sessionCheckIntervall, which defaults to 3000 msec, defines the interval used to check whether the user has logged out at the identity provider.

Possibility to define the ValidationHandler, the Config as well as the OAuthStorage via DI:

    [...],
    providers: [
        { provide: AuthConfig, useValue: authConfig },
        { provide: OAuthStorage, useClass: DemoStorage },
        { provide: ValidationHandler, useClass: JwksValidationHandler },
    ],
    [...]

Better structured documentation: [...]



Programming exercises for job applicants

Sat, 09 Sep 2017 10:26:56 Z

In this post I've picked out, as an example, one of our programming exercises, which our new colleague Mr. Jörg Weißbecker recently solved. As a rule, the goal is to produce the highest-quality result possible, one that solidly implements principles such as OCP and SoC. Time pressure is entirely secondary here, so applicants can always work on the exercises from …



First steps with PowerShell Core on Ubuntu Linux

Fri, 08 Sep 2017 08:07:00 +0200

The cross-platform PowerShell Core has meanwhile reached "Beta 6" status and is based on the final version 2.0 of .NET Core. This post shows the installation on Ubuntu and first steps in using it.



Homie: a bot that understands human language

Thu, 07 Sep 2017 08:25:25 Z

Today I'm especially pleased to present a guest post by Stephan Bisser: the terms "smart homes", "smart vehicles", and "smart everything" are omnipresent these days. Every device we use in everyday life has to be "smart" for us to use it at all. There are numerous ways nowadays to equip a device with intelligence, ...



Secure an Aurelia Single Page App with Azure Active Directory B2C / MSAL

Wed, 06 Sep 2017 21:43:23 Z

If you create a modern web application with an API / REST backend and a Single Page Application (SPA) as your frontend that you want to run on the internet, you definitely don't want to handle security / user management on your own. You will want to use a service like Auth0 or Azure Active ...



Using .NET Core 2.0 with Visual Studio 2017

Fri, 18 Aug 2017 08:46:00 +0200

If you want to use .NET Core 2.0, released on August 14, 2017, you need Visual Studio 2017 Update 3 (internal version number: 15.3) plus the .NET Core 2.0 SDK.



DDD & Co., part 7: CQRS

Wed, 16 Aug 2017 20:29:00 +0200

Since the previous installment, processing commands and producing the domain events has been captured in semantically meaningful code. Along the way, consistency and integrity can be ensured efficiently, and thanks to event sourcing, storing the events fits the concept as well. But how about reading the data?



IEC 61131-3: extending a UNION via inheritance

Wed, 16 Aug 2017 16:05:00 Z

In the post "IEC 61131-3: Weitere Spracherweiterungen" I briefly touched on the UNION. A reader's comment pointed out to me that a UNION can also be extended via EXTENDS. Since this simplifies the handling of a UNION and the standard does not mention it, I want to present this option in a (very) short post. […]



Announcing angular-oauth2-oidc, Version 2

Tue, 15 Aug 2017 00:00:00 +0100

Today, I've released a new version of the Angular library angular-oauth2-oidc, which allows for implementing token-based security using OAuth 2 / OpenId Connect and JWTs with Angular.

This new major version comes with some breaking changes. You'll find a list at the beginning of the updated readme. I think they won't affect everyone, and even if you are affected, you should be able to deal with them quite quickly.

Silent Token Refresh

Silent refresh was the most requested feature for this library. It is a standards-compliant way to refresh your tokens before (or when) they expire, using the implicit flow.

If the application is prepared for it, performing a silent refresh is as easy as this:

this
    .oauthService
    .silentRefresh()
    .then(info => console.debug('refresh ok', info))
    .catch(err => console.error('refresh error', err));

By leveraging the new events observable, an application can automatically perform such a refresh when, or some time before, the current tokens expire:

this
    .oauthService
    .events
    .filter(e => e.type == 'token_expires')
    .subscribe(e => {
        this.oauthService.silentRefresh();
    });

More information about this can be found within the updated readme.

Validating the signature of id_tokens

The library can now directly validate the signature of received id_tokens. For this, just assign a ValidationHandler:

import { JwksValidationHandler } from 'angular-oauth2-oidc';

[...]

this.oauthService.tokenValidationHandler = new JwksValidationHandler();

The JwksValidationHandler shown here uses the JavaScript library jsrsasign to validate the signature directly in the browser, without the need to call the server.

You can also hook in your own ValidationHandler by implementing an interface.
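As a sketch of that idea: the interface and class below are invented for illustration (the library's actual contract may differ; check the readme for the real signatures):

```typescript
// Hypothetical stand-in for the library's validation contract. A custom
// handler receives the raw id_token plus the provider's key set and
// decides whether the signature is acceptable.
interface ValidationParams {
  idToken: string;
  jwks: { keys: object[] };
}

class AcceptAllValidationHandler {
  // A real handler would verify the JWS signature against params.jwks;
  // this sketch only checks that the token has the three dot-separated
  // parts of a JWT and resolves if so.
  validateSignature(params: ValidationParams): Promise<boolean> {
    const looksLikeJwt = params.idToken.split('.').length === 3;
    return looksLikeJwt
      ? Promise.resolve(true)
      : Promise.reject(new Error('not a JWT'));
  }
}

const handler = new AcceptAllValidationHandler();
handler
  .validateSignature({ idToken: 'header.payload.signature', jwks: { keys: [] } })
  .then(ok => console.log('signature accepted:', ok));
```

An instance of such a class would then be assigned to oauthService.tokenValidationHandler in place of the JwksValidationHandler.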

More Security Checks

Some additional security checks have been added. The library now insists on using https and only makes an exception for localhost. It also validates the received discovery document.
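The https rule can be pictured as a small standalone check. This is an illustrative sketch of the behavior described above, not the library's actual implementation:

```typescript
// Illustrative sketch: require https for all endpoint URLs, with an
// exception for localhost (useful during development). This mirrors the
// check described above but is not the library's actual code.
function isAllowedUrl(url: string): boolean {
  if (url.startsWith('https://')) {
    return true;
  }
  // Exception: plain http is tolerated for localhost only.
  return url.startsWith('http://localhost') || url.startsWith('http://127.0.0.1');
}

console.log(isAllowedUrl('https://example.com/token'));   // true
console.log(isAllowedUrl('http://localhost:4200/token')); // true
console.log(isAllowedUrl('http://example.com/token'));    // false
```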

Feedback

If you are using it or if you are trying it out, don't hesitate to send me some feedback -- either directly via my blog or via GitHub.




Using WPF and MVVM correctly, part 5

Mon, 14 Aug 2017 17:56:22 Z

Bindings against collections



Introduction to Node, episode 23: child processes

Mon, 14 Aug 2017 10:21:00 +0200

Many applications need to start other applications as child processes. For this purpose, Node provides the child_process module. Caution is advised, however, because the module quickly leads to platform-dependent code. How can the problem be solved differently?
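The platform-dependence concern can be illustrated with a small TypeScript/Node sketch: instead of invoking a platform-specific shell command, a child process can be started from the Node binary itself, which works the same on every OS.

```typescript
import { spawnSync } from 'child_process';

// Start a child process in a platform-independent way: instead of calling
// a shell command like 'dir' or 'ls', run the Node binary we are already
// using (process.execPath works on Windows, Linux, and macOS alike).
const result = spawnSync(process.execPath, ['-e', 'console.log(6 * 7)'], {
  encoding: 'utf8',
});

console.log(result.stdout.trim()); // "42"
```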



Summer update from the Austrian PowerShell community

Fri, 11 Aug 2017 06:00:00 Z

Roman and Patrick have been active over the summer, too, and have collected the following information for you. So what happened in July in the PowerShell community in Austria? Upcoming events: August 23-25, 2017: Experts Live Europe; September 1, 2017: free beginners' workshop, http://www.powershell.co.at/events/powershell-einsteiger-workshop-kostenlos-q32017/; September 14, 2017: Experts Live Café Wien, registration...



DDD & Co., part 6: from model to code

Wed, 09 Aug 2017 11:05:00 +0200

The domain is modeled, and events are what gets stored. How can all of this be carried from theory into practice? What could code look like that reflects the domain modeling? A draft.



IEC 61131-3: parameter transfer via parameter list

Tue, 08 Aug 2017 19:25:00 Z

Parameter lists are an interesting way to pass parameters to PLC libraries. Strictly speaking, they are global constants (VAR_GLOBAL CONSTANT) whose initialization values can be edited in the Library Manager. When declaring arrays, their bounds must be constants: at compile time it must be known how large the array is to be allocated. The attempt to define the array bounds through a […]



Visual Studio 2017: architecture & code analysis

Tue, 08 Aug 2017 18:23:19 Z

Visual Studio offers extensive features in the area of architecture and code analysis that are not often reported on. These features sit unjustly "behind the curtain" and have received several new capabilities and further development with Visual Studio 2017. Real-time checking of architecture dependencies: while changes are made in the code, the editor checks whether architecture rules are being violated....



How do you know when the snack truck is there?

Tue, 08 Aug 2017 16:57:05 Z

A guest article by our MVP Thomas Gölles of Solvion. Hello MoCaDeSyMo! As in presumably every company in Austria, there are various ways to spend the lunch break at Solvion: a broad range of culinary delights, from a quick walk to the cafeteria to ordering from various delivery services. On top of that, we are in the fortunate...



Introduction to Node, episode 22: CLI applications

Tue, 08 Aug 2017 10:37:00 +0200

Node is usually used as a runtime environment for web applications. But it is also suitable for other kinds of applications, such as console programs. That raises a number of problems, though, from parsing the parameters to printing the help. How can this be accomplished?
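To illustrate the parameter-parsing part, here is a minimal hand-rolled TypeScript sketch; real CLI tools usually delegate this to a library such as yargs or commander:

```typescript
// Minimal hand-rolled parser: turns ['--name', 'world', '--verbose'] into
// { name: 'world', verbose: true }. It illustrates why dedicated CLI
// libraries exist: help output and validation would also have to be
// added by hand.
function parseArgs(argv: string[]): Record<string, string | boolean> {
  const options: Record<string, string | boolean> = {};
  for (let i = 0; i < argv.length; i++) {
    const arg = argv[i];
    if (arg.startsWith('--')) {
      const key = arg.slice(2);
      const next = argv[i + 1];
      if (next !== undefined && !next.startsWith('--')) {
        options[key] = next; // option with a value
        i++; // consume the value
      } else {
        options[key] = true; // boolean flag
      }
    }
  }
  return options;
}

console.log(parseArgs(['--name', 'world', '--verbose']));
```

In a real program the input would be process.argv.slice(2) instead of a literal array.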



Querying AD in SQL Server via LDAP provider

Mon, 07 Aug 2017 00:00:00 Z

This is kind of off-topic, because it's not about ASP.NET Core, but I really like to share it. I recently needed to import some additional user data via a nightly run into a SQL Server database. The base user data came from a SAP database via a CSV bulk import, but not all of the data: the telephone numbers, for example, are maintained mostly by the users themselves in the AD. After the SAP import, we need to update the telephone numbers with the data from the AD.

The bulk import was done with a stored procedure and executed nightly with a SQL Server job, so it makes sense to do the AD import with a stored procedure too. I wasn't really sure whether this works via the SQL Server. My favorite programming languages are C# and JavaScript, and I'm not really a friend of T-SQL, but I tried it. I googled around a little bit and found a quick solution in T-SQL. The trick is to map the AD via an LDAP provider as a linked server to the SQL Server. This can even be done via a dialogue, but I never got it running like that, so I chose to use T-SQL instead:

    USE [master]
    GO
    EXEC master.dbo.sp_addlinkedserver @server = N'ADSI',
        @srvproduct=N'Active Directory Service Interfaces',
        @provider=N'ADSDSOObject', @datasrc=N'adsdatasource'
    EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname=N'ADSI', @useself=N'False',
        @locallogin=NULL, @rmtuser=N'\', @rmtpassword='*******'
    GO
    EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'collation compatible', @optvalue=N'false'
    GO
    EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'data access', @optvalue=N'true'
    GO
    EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'dist', @optvalue=N'false'
    GO
    EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'pub', @optvalue=N'false'
    GO
    EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'rpc', @optvalue=N'false'
    GO
    EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'rpc out', @optvalue=N'false'
    GO
    EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'sub', @optvalue=N'false'
    GO
    EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'connect timeout', @optvalue=N'0'
    GO
    EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'collation name', @optvalue=null
    GO
    EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'lazy schema validation', @optvalue=N'false'
    GO
    EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'query timeout', @optvalue=N'0'
    GO
    EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'use remote collation', @optvalue=N'true'
    GO
    EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'remote proc transaction promotion', @optvalue=N'true'
    GO

You can use this script to set up a new linked server to AD. Just set the right user and password in the second T-SQL statement. This user should have read access to the AD; a specific service account would make sense here. Don't save the script with the user credentials in it: once the linked server is set up, you don't need the script anymore.

This setup was easy. The most painful part was to set up a working query:

    SELECT * FROM OpenQuery (
        ADSI,
        'SELECT cn, samaccountname, mail, mobile, telephonenumber, sn, givenname, co, company
         FROM ''LDAP://DC=company,DC=domain,DC=controller''
         WHERE objectClass = ''User'' and co = ''Switzerland''
    ') AS tblADSI
    WHERE mail IS NOT NULL AND (telephonenumber IS NOT NULL OR mobile IS NOT NULL)
    ORDER BY cn

Any error in the query t[...]



DDD & Co., part 5: event sourcing

Wed, 02 Aug 2017 09:45:00 +0200

Anyone modeling an application with DDD needs an approach to implementing persistence. A conventional CRUD database can be used for this, but there is a better approach: event sourcing. How does it work?



How to convert .crt & .key files to a .pfx

Mon, 31 Jul 2017 23:15:00 Z

The requirements are simple: you need the .cer file with the corresponding .key file, and you need to download OpenSSL.

If you are using Windows without the awesome Linux Subsystem, take the latest pre-compiled version for Windows from this site

Otherwise with Bash on Windows you can just use OpenSSL via its “native” environment. Thanks for the hint @kapsiR

After the download run this command:

   openssl pkcs12 -export -out domain.name.pfx -inkey domain.name.key -in domain.name.crt

This will create a domain.name.pfx. As far as I remember, you will be asked to set a password for the private part of the generated .pfx.

If you are confused with .pfx, .cer, .crt take a look at this nice description blogpost.

Hope this helps!




Introduction to Node, episode 21: encrypting data

Mon, 31 Jul 2017 10:32:00 +0200

Encrypting data is becoming ever more important. For this purpose, Node relies on OpenSSL and supports both symmetric and asymmetric encryption. Beyond that, it also knows how to handle digital signatures, hash functions & co. How does all that work?
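As a small TypeScript/Node sketch of the symmetric case described above (AES-256-CBC is just one common algorithm choice; the code uses only Node's built-in crypto module):

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from 'crypto';

// Symmetric encryption with AES-256-CBC: the same key (and IV) is used
// for encrypting and decrypting. Node delegates the actual work to OpenSSL.
const key = randomBytes(32); // 256-bit key
const iv = randomBytes(16);  // initialization vector

const cipher = createCipheriv('aes-256-cbc', key, iv);
const encrypted = cipher.update('streng geheim', 'utf8', 'hex') + cipher.final('hex');

const decipher = createDecipheriv('aes-256-cbc', key, iv);
const decrypted = decipher.update(encrypted, 'hex', 'utf8') + decipher.final('utf8');

console.log(decrypted); // "streng geheim"
```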



The Angular Build Optimizer under the Hoods

Thu, 27 Jul 2017 00:00:00 +0100

Thanks to Filipe Silva, who reviewed this article, and to Rob Wormald for a lot of insights regarding this technology.

In my last article, I've shown that the Angular Build Optimizer transforms the emitted JavaScript code to make tree shaking more efficient. To demonstrate this, I've created a simple scenario that includes two modules of Angular Material without using them. After using the Build Optimizer, the CLI/webpack was able to reduce the bundle size by about half, leveraging tree shaking. If you are wondering how such amazing results are possible, you can find some answers in this article. Please note that, at the time of writing, the Angular Build Optimizer is still experimental. Nevertheless, the results shown above are very promising.

Tree Shaking and Side Effects

The CLI uses webpack for the build process, and to make tree shaking possible, webpack marks exports that are not used and can therefore be safely excluded. In addition to this, a typical webpack configuration uses UglifyJS for removing these exports. Uglify tries to be on the safe side and does not remove any code that could be needed at runtime. For instance, when Uglify finds out that some code could produce side effects, it keeps it in the bundle. Look at the following (artificial and obvious) example that demonstrates this:

    (function(exports){
        exports.pi = 4; // Let's be generous!
    })(...);

Unfortunately, when transpiling ES2015+/TypeScript classes down to ES5, the class declaration results in imperative code, and UglifyJS, like other tools, cannot make sure that this code isn't producing side effects. A good discussion regarding this can be found on GitHub. That's why the code in question stays in the bundle even though it could be removed.

To assure myself of this fact, I've created a simple npm-based Angular package using Jurgen Van de Moere's Yeoman generator for Angular libraries, as well as a CLI-based application that references it. The package's entry point exports an Angular module with an UnusedComponent that, as its name implies, isn't used by the application. It also exports an UnusedClass, as well as an OtherUnusedClass and a UsedClass from the same file (ES module):

    export {UnusedClass} from './unused';
    export {OtherUnusedClass, UsedClass} from './partly-used';
    export {SampleComponent} from './sample.component';
    export {UnusedComponent} from './unused.component';

    @NgModule({
        imports: [ CommonModule ],
        declarations: [ SampleComponent, UnusedComponent ],
        exports: [ SampleComponent, UnusedComponent ]
    })
    export class SampleModule {
    }

When building the whole application without the Angular Build Optimizer, none of the unused classes are tree-shaken off. One reason for this is that, as mentioned above, Uglify cannot make sure that the transpiled classes don't introduce side effects.

Marking pure Code Blocks

To compensate for this issue, the Angular Build Optimizer marks transpiled class declarations that do not produce side effects with a special /*@__PURE__*/ comment. UglifyJS, on the other side, respects those comments and removes such code blocks if they are not referenced. Using this technique, we can get rid of the unused classes but not of the unused component. The reason for this is, as the next [...]
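The effect can be sketched with the kind of ES5 code a class is down-leveled to. The /*@__PURE__*/ annotation below is added by hand to mimic what the Build Optimizer emits; with it, a minifier may drop the whole assignment if UnusedClass is never referenced:

```typescript
// Down-leveled ES5-style wrapper as typically emitted for a class.
// Without the annotation, a minifier must assume the IIFE could have
// side effects and keeps it; /*@__PURE__*/ declares it side-effect free.
const UnusedClass = /*@__PURE__*/ (function () {
  function UnusedClass() {}
  UnusedClass.prototype.hello = function () { return 'hello'; };
  return UnusedClass;
})();

console.log(typeof UnusedClass); // "function"
```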



Shrinking Angular Bundles with the Angular Build Optimizer

Wed, 26 Jul 2017 17:00:00 +0100

Thanks to Filipe Silva, who reviewed this article, and to Rob Wormald for a lot of insights regarding this technology. Also thanks to Sander Elias, who gave important feedback.

Beginning with version 1.3.0-rc.0, the Angular CLI makes use of the Angular Build Optimizer. This is a nifty tool that transforms the emitted JavaScript code to make tree shaking more efficient. This can result in huge improvements regarding bundle size. In this post I'm describing some simple scenarios that show the potential of this newly introduced tool. If you want to reproduce these scenarios, you'll find the source code in my GitHub repository. Please note that the Angular Build Optimizer was still experimental when writing this. Therefore, it is subject to change. Nevertheless, as shown below, it comes with a high potential for shrinking bundles.

Scenario 1

To demonstrate the power of the Angular Build Optimizer, I'm using a simple Angular application I've created with the Angular CLI. After scaffolding, I've added the MdButtonModule and the MdCheckboxModule from Angular Material as well as Angular's NoopAnimationsModule and FormsModule:

    import { BrowserModule } from '@angular/platform-browser';
    import { NgModule } from '@angular/core';
    import { AppComponent } from './app.component';
    import { MdButtonModule, MdCheckboxModule } from '@angular/material';
    import { NoopAnimationsModule } from '@angular/platform-browser/animations';
    import { FormsModule } from "@angular/forms";

    @NgModule({
        declarations: [ AppComponent ],
        imports: [
            BrowserModule,
            FormsModule,
            NoopAnimationsModule,
            MdButtonModule,
            MdCheckboxModule
        ],
        providers: [],
        bootstrap: [AppComponent]
    })
    export class AppModule { }

Please note that I'm using none of the added modules. I've just added them to find out how good the CLI/webpack is, in combination with the Angular Build Optimizer, when it comes to shaking them off. After this, I've created a production build without using the Build Optimizer and another one using it. For the latter, I've leveraged the new --build-optimizer command line option:

    ng build --prod --build-optimizer

The results of this are amazing: using the Angular Build Optimizer, the bundle size after tree shaking is about half. This fits the observations I've written down here some months ago: there are situations that prevent tree shaking implementations from doing their job as well as they could. The Build Optimizer seems to compensate for this.

Scenario 2

After this, I've added one component from each of the two included Angular Material modules to find out whether this influences tree shaking. This led to the following results: of course, both bundles are bigger, because now I'm using more parts of the included modules. But, as before, with the Angular Build Optimizer our bundles are about half as big.

Scenario 3

Perhaps you are wondering what's the overhead introduced by the two Angular Material modules in the scenarios above. To find this out, I've removed the references to their Angular modules and c[...]



DDD & Co., part 4: aggregates

Wed, 26 Jul 2017 11:37:00 +0200

Now that the previous installments have defined the commands and domain events and placed them in a bounded context, the aggregate is the last important DDD concept still missing.



"A $100 million cross-platform app with ASP.NET Core, Angular, and Electron"

Wed, 26 Jul 2017 10:00:00 +0200

On August 30 at 6 p.m., the .NET User Group in Ratingen will cover cross web and desktop applications with ASP.NET Core on the server and TypeScript, Angular, and the Electron framework on the client.



Creating an email form with ASP.NET Core Razor Pages

Wed, 26 Jul 2017 00:00:00 Z

In the comments on my last post, I was asked to write about how to create an email form using ASP.NET Core Razor Pages. The reader also asked for a tutorial about authentication and authorization; I'll write about that in one of the next posts. This post is just about creating a form and sending an email with the form values.

Creating a new project

To try this out, you need to have the latest preview of Visual Studio 2017 installed (I use 15.3.0 preview 3), and you need the .NET Core 2.0 preview installed (2.0.0-preview2-006497 in my case). In Visual Studio 2017, use "File... New Project" to create a new project. Navigate to ".NET Core", choose the "ASP.NET Core Web Application (.NET Core)" project, and choose a name and a location for the new project. In the next dialogue, you probably need to switch to ASP.NET Core 2.0 to see all the new available project types. (I will write about the other ones in the next posts.) Select "Web Application (Razor Pages)" and press "OK". That's it: the new ASP.NET Core Razor Pages project is created.

Creating the form

It makes sense to use the contact.cshtml page to add the new contact form. The contact.cshtml.cs is the PageModel to work with. Inside this file, I added a small class called ContactFormModel. This class will contain the form values after the POST request was sent:

    public class ContactFormModel
    {
        [Required]
        public string Name { get; set; }
        [Required]
        public string LastName { get; set; }
        [Required]
        public string Email { get; set; }
        [Required]
        public string Message { get; set; }
    }

To use this class, we need to add a property of this type to the ContactModel:

    [BindProperty]
    public ContactFormModel Contact { get; set; }

This attribute does some magic: it automatically binds the ContactFormModel to the view and contains the data after the POST was sent back to the server. It is actually the MVC model binding, but provided in a different way. If we have the regular model binding, we should also have a ModelState. And we actually do:

    public async Task<IActionResult> OnPostAsync()
    {
        if (!ModelState.IsValid)
        {
            return Page();
        }

        // create and send the mail here

        return RedirectToPage("Index");
    }

This is an async OnPost method, which looks pretty much the same as a controller action. It returns a Task of IActionResult, checks the ModelState, and so on. Let's create the HTML form for this code in the contact.cshtml. I use Bootstrap (just because it's available) to format the form, so the HTML code contains some overhead:

Contact us

Introduction to Node, episode 20: a practical example

Mon, 24 Jul 2017 10:24:00 +0200

The previous episodes introduced HTTP/2 and streams, among other things, and showed how code analysis and testing work in Node. But how can the individual building blocks be combined into a larger whole? A practical example shows the interplay.



Entity Framework Core reverse engineering with the dotnet command-line tool

Mon, 24 Jul 2017 10:00:00 +0200

Instead of PowerShell cmdlets, the cross-platform dotnet command-line tool from the .NET Core SDK can also be used when developing .NET Core projects.



New Visual Studio Web Application: The ASP.NET Core Razor Pages

Mon, 24 Jul 2017 00:00:00 Z

I think everyone who followed the last couple of ASP.NET Community Standup sessions has heard about Razor Pages. Did you try Razor Pages? I didn't. I focused completely on ASP.NET Core MVC and Web API. With this post I'm going to have a first look at it and try it out. I was also a little bit skeptical about it and compared it to the ASP.NET Web Site project. That was definitely wrong. You need to have the latest preview of Visual Studio 2017 installed on your machine, because Razor Pages came with the ASP.NET Core 2.0 preview. It is based on ASP.NET Core and part of the MVC framework.

Creating a Razor Pages project

Using Visual Studio 2017, I used "File... New Project" to create a new project. I navigated to ".NET Core", chose the "ASP.NET Core Web Application (.NET Core)" project, and chose a name and a location for that project. In the next dialogue, I needed to switch to ASP.NET Core 2.0 to see all the new available project types. (I will write about the other ones in the next posts.) I selected "Web Application (Razor Pages)" and pressed "OK".

Program.cs and Startup.cs

If you are already familiar with ASP.NET Core projects, you'll find nothing new in the Program.cs and in the Startup.cs. Both files look pretty much the same:

    public class Program
    {
        public static void Main(string[] args)
        {
            BuildWebHost(args).Run();
        }

        public static IWebHost BuildWebHost(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                .UseStartup()
                .Build();
    }

The Startup.cs has a services.AddMvc() and an app.UseMvc() with a configured route:

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
            app.UseBrowserLink();
        }
        else
        {
            app.UseExceptionHandler("/Error");
        }

        app.UseStaticFiles();

        app.UseMvc(routes =>
        {
            routes.MapRoute(
                name: "default",
                template: "{controller=Home}/{action=Index}/{id?}");
        });
    }

That means Razor Pages are actually part of the MVC framework, as Damian Edwards always said in the Community Standups.

The solution

But the solution looks a little different. Instead of a Views and a Controllers folder, there is only a Pages folder with the Razor files in it. There are even familiar files: the _Layout.cshtml, _ViewImports.cshtml, and _ViewStart.cshtml. Within the _ViewImports.cshtml we also have the import of the default TagHelpers:

    @namespace RazorPages.Pages
    @addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers

This makes sense, since Razor Pages are part of the MVC framework. We also have the standard pages of every new ASP.NET project: Home, Contact, and About. (I'm going to have a look at these files later on.) As with every new web project in Visual Studio, this project type is ready to run. Pressing F5 starts the web application and opens the URL in the browser.

Frontend

For the frontend dependencies, "bower" is used. It will put all the stuff into wwwroot/bin. So even this works the same way as in MVC. Custom CSS and custom JavaScript [...]