Subscribe: DotNetGerman Bloggers
http://blogs.dotnetgerman.com/mainfeed.aspx
Language: German


DotNetGerman Bloggers



All blogs from DotNetGerman.com



Copyright 2004-2014 DotNetGerman.com



DDD & Co., Part 7: CQRS

Wed, 16 Aug 2017 20:29:00 +0200

Since the previous installment, processing commands and producing the domain events has been captured in semantically meaningful code. Along the way, consistency and integrity can be ensured efficiently, and thanks to event sourcing, storing the events fits the concept as well. But what about reading the data?



IEC 61131-3: Extending a UNION via inheritance

Wed, 16 Aug 2017 16:05:00 Z

In the post IEC 61131-3: Weitere Spracherweiterungen, I briefly touched on the UNION. A reader's comment pointed me to the fact that a UNION can also be extended via EXTENDS. Since this simplifies the handling of a UNION and the standard does not mention it, I want to present this option in a (very) short post. […]



Announcing angular-oauth2-oidc, Version 2

Tue, 15 Aug 2017 00:00:00 +0100

Today, I've released a new version of the Angular library angular-oauth2-oidc, which allows implementing token-based security with OAuth 2/OpenID Connect and JWTs in Angular applications.

This new major version comes with some breaking changes. You'll find a list at the beginning of the updated readme. I don't think they will affect everyone, and even if you are affected, you should be able to deal with them quite quickly.

Silent Token Refresh

Silent refresh was the most requested feature for this library. It is a standards-compliant way to refresh your tokens when, or shortly before, they expire, using the implicit flow.

If the application is prepared for it, performing a silent refresh is as easy as this:

this
    .oauthService
    .silentRefresh()
    .then(info => console.debug('refresh ok', info))
    .catch(err => console.error('refresh error', err));

By leveraging the new events observable, an application can automatically perform such a refresh when, or some time before, the current tokens expire:

this
    .oauthService
    .events
    .filter(e => e.type == 'token_expires')
    .subscribe(e => {
        this.oauthService.silentRefresh();
    });

More information about this can be found within the updated readme.

Validating the signature of id_tokens

The library can now directly validate the signature of received id_tokens. For this, just assign a ValidationHandler:

import { JwksValidationHandler } from 'angular-oauth2-oidc';

[...]

this.oauthService.tokenValidationHandler = new JwksValidationHandler();

The JwksValidationHandler shown here uses the JavaScript library jsrsasign to validate the signature directly in the browser, without the need to call the server.

You can also hook in your own ValidationHandler by implementing an interface.
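A minimal sketch of what such a custom handler might look like; the member names and signatures used here are assumptions for illustration, not taken from the library's documentation, so check the readme for the authoritative contract:

import { ValidationHandler, ValidationParams } from 'angular-oauth2-oidc';

// Hypothetical custom handler -- method names and signatures are assumed.
export class MyValidationHandler implements ValidationHandler {
    validateSignature(params: ValidationParams): Promise<any> {
        // Plug your own JWS validation logic in here.
        return Promise.resolve(null);
    }
    validateAtHash(params: ValidationParams): boolean {
        // Accepts every at_hash -- replace this with a real check.
        return true;
    }
}

// Wiring it up works like the JwksValidationHandler shown above:
// this.oauthService.tokenValidationHandler = new MyValidationHandler();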

More Security Checks

Some additional security checks have been added. The library now insists on using HTTPS and makes an exception only for localhost. It also validates the received discovery document.

Feedback

If you are using the library or trying it out, don't hesitate to send me some feedback -- either directly via my blog or via GitHub.




Using WPF and MVVM correctly – part 5

Mon, 14 Aug 2017 17:56:22 Z

Bindings against collections



Introduction to Node, Episode 23: Child processes

Mon, 14 Aug 2017 10:21:00 +0200

Many applications have the requirement to start other applications as child processes. For this purpose, Node provides the child_process module. Caution is advised, however, because the module quickly leads to platform-dependent code. How can the problem be solved differently?
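To illustrate the point, here is a small sketch of my own (not code from the episode) using child_process; the platform check already hints at how quickly such code becomes platform-dependent:

import { spawn } from 'child_process';

// 'where' exists only on Windows, 'which' only on Unix-like systems --
// exactly the kind of platform dependence the episode warns about.
const command = process.platform === 'win32' ? 'where' : 'which';
const child = spawn(command, ['node']);

child.stdout.on('data', chunk => console.log(chunk.toString()));
child.on('close', code => console.log(`child exited with code ${code}`));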



Summer update from the Austrian PowerShell community

Fri, 11 Aug 2017 06:00:00 Z

Roman and Patrick have been busy over the summer as well and collected the following information for you. So what happened in July in the PowerShell community in Austria?

Upcoming events:
- August 23-25, 2017: Experts Live Europe
- September 1, 2017: Free beginners' workshop, http://www.powershell.co.at/events/powershell-einsteiger-workshop-kostenlos-q32017/
- September 14, 2017: Experts Live Café Wien -- registration...



DDD & Co., Part 6: From the model to code

Wed, 09 Aug 2017 11:05:00 +0200

The domain is modeled, and events are what gets stored. How can all of this now be carried from theory into practice? What could code look like that reflects the domain modeling? A draft.



IEC 61131-3: Passing parameters via parameter lists

Tue, 08 Aug 2017 19:25:00 Z

Parameter lists are an interesting way to pass parameters to PLC libraries. Strictly speaking, they are global constants (VAR_GLOBAL CONSTANT) whose initialization values can be edited in the Library Manager. When declaring arrays, their bounds must be constants: at compile time it must be known how large the array has to be allocated. The attempt to define the array bounds through a […]



Visual Studio 2017 – architecture & code analysis

Tue, 08 Aug 2017 18:23:19 Z

Visual Studio offers extensive features for architecture and code analysis that are not often written about. These features sit "behind the curtain" quite undeservedly, and they also received several new capabilities and refinements with Visual Studio 2017. Real-time checking of architecture dependencies: while changes are made in the code, the editor checks whether architecture rules are being violated....



How do you know when the snack truck is there?

Tue, 08 Aug 2017 16:57:05 Z

A guest article by our MVP Thomas Gölles of Solvion. Hello MoCaDeSyMo! As in probably every company in Austria, there are several ways at Solvion to spend the lunch break. From a quick walk to the cafeteria to ordering from various delivery services, a wide range of culinary delights is on offer. On top of that, we are in the fortunate...



Introduction to Node, Episode 22: CLI applications

Tue, 08 Aug 2017 10:37:00 +0200

Node is usually used as a runtime environment for web applications, but it is also suited to other kinds of applications, such as console programs. That raises a number of problems, though, from parsing parameters to printing help. How can this be accomplished?
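As a rough sketch of the problem space (my own example, not from the episode), here is argument parsing and help output done by hand with nothing but process.argv:

// Hand-rolled argument handling -- the boilerplate that dedicated
// CLI libraries are meant to take off your hands.
const args = process.argv.slice(2);

if (args.length === 0 || args.includes('--help')) {
    console.log('Usage: greet [--help] <name>');
    process.exit(0);
}

console.log(`Hello, ${args[0]}!`);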



Querying AD in SQL Server via LDAP provider

Mon, 07 Aug 2017 00:00:00 Z

This is kind of off-topic, because it's not about ASP.NET Core, but I really would like to share it. I recently needed to import some additional user data via a nightly run into a SQL Server database. The base user data came from a SAP database via a CSV bulk import, but not all of the data: the telephone numbers, for example, are maintained mostly by the users themselves in the AD. After the SAP import, we need to update the telephone numbers with the data from the AD. The bulk import was done with a stored procedure and executed nightly with a SQL Server job, so it made sense to do the AD import with a stored procedure too. I wasn't really sure whether this would work via SQL Server. My favorite programming languages are C# and JavaScript, and I'm not really a friend of T-SQL, but I tried it. I googled around a little bit and quickly found a solution in T-SQL. The trick is to map the AD via an LDAP provider as a linked server to the SQL Server. This can even be done via a dialogue, but I never got it running that way, so I chose to use T-SQL instead:

USE [master]
GO
EXEC master.dbo.sp_addlinkedserver @server = N'ADSI',
    @srvproduct=N'Active Directory Service Interfaces',
    @provider=N'ADSDSOObject', @datasrc=N'adsdatasource'
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname=N'ADSI', @useself=N'False',
    @locallogin=NULL, @rmtuser=N'\', @rmtpassword='*******'
GO
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'collation compatible', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'data access', @optvalue=N'true'
GO
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'dist', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'pub', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'rpc', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'rpc out', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'sub', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'connect timeout', @optvalue=N'0'
GO
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'collation name', @optvalue=null
GO
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'lazy schema validation', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'query timeout', @optvalue=N'0'
GO
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'use remote collation', @optvalue=N'true'
GO
EXEC master.dbo.sp_serveroption @server=N'ADSI', @optname=N'remote proc transaction promotion', @optvalue=N'true'
GO

You can use this script to set up a new linked server to the AD. Just set the right user and password in the second T-SQL statement; this user should have read access to the AD, so a dedicated service account makes sense here. Don't save the script with the user credentials in it -- once the linked server is set up, you don't need the script anymore.

This setup was easy. The most painful part was getting a working query:

SELECT * FROM OpenQuery (
    ADSI,
    'SELECT cn, samaccountname, mail, mobile, telephonenumber, sn, givenname, co, company
     FROM ''LDAP://DC=company,DC=domain,DC=controller''
     WHERE objectClass = ''User'' and co = ''Switzerland''
    ') AS tblADSI
WHERE mail IS NOT NULL
    AND (telephonenumber IS NOT NULL OR mobile IS NOT NULL)
ORDER BY cn

Any error in the query resulted in a generic error message that only told me there was a problem building the query. Not really helpful. It took me two hours to find the right LDAP connection string and some more hours to find the right properties for the query. The other painful thing is the conditions, because the WHERE clause outside [...]



DDD & Co., Part 5: Event Sourcing

Wed, 02 Aug 2017 09:45:00 +0200

Anyone who models an application with DDD needs an approach to persistence for the implementation. A conventional CRUD database can be used, but there is a better approach: event sourcing. How does it work?
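The idea in a nutshell, as a small TypeScript sketch of my own (a toy to-do list, not code from the article): instead of persisting the current state, only the events are stored, and the state is rebuilt by replaying them.

type TodoEvent =
    | { type: 'itemAdded'; title: string }
    | { type: 'itemDone'; title: string };

// The append-only event log is the single source of truth ...
const events: TodoEvent[] = [];

// ... and the current state is derived by replaying all events.
function openItems(): string[] {
    const open = new Set<string>();
    for (const e of events) {
        if (e.type === 'itemAdded') open.add(e.title);
        if (e.type === 'itemDone') open.delete(e.title);
    }
    return [...open];
}

events.push({ type: 'itemAdded', title: 'Einkaufen' });
events.push({ type: 'itemDone', title: 'Einkaufen' });
console.log(openItems()); // []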



How to convert .crt & .key files to a .pfx

Mon, 31 Jul 2017 23:15:00 Z

The requirements are simple: you will need the .crt file with the corresponding .key file, and you need to download OpenSSL.

If you are using Windows without the awesome Linux Subsystem, take the latest pre-compiled version for Windows from this site

Otherwise, with Bash on Windows, you can just use OpenSSL via its "native" environment. Thanks for the hint, @kapsiR!

After the download, run this command:

   openssl pkcs12 -export -out domain.name.pfx -inkey domain.name.key -in domain.name.crt

This will create a domain.name.pfx. As far as I remember, you will be asked to set a password for the private part of the generated .pfx.

If you are confused by .pfx, .cer and .crt, take a look at this nice blog post describing them.

Hope this helps!




Introduction to Node, Episode 21: Encrypting data

Mon, 31 Jul 2017 10:32:00 +0200

Encrypting data is becoming more and more important. Node relies on OpenSSL for this purpose and supports both symmetric and asymmetric encryption. Beyond that, it can also handle digital signatures, hash functions, and more. How does all of this work?
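For a first impression, here is a minimal symmetric example of my own using Node's built-in crypto module (not code from the episode):

import { createCipheriv, createDecipheriv, randomBytes } from 'crypto';

// AES-256-GCM, backed by OpenSSL under the hood.
const key = randomBytes(32); // secret key
const iv = randomBytes(12);  // must be unique per message

const cipher = createCipheriv('aes-256-gcm', key, iv);
const encrypted = Buffer.concat([cipher.update('streng geheim', 'utf8'), cipher.final()]);
const authTag = cipher.getAuthTag();

const decipher = createDecipheriv('aes-256-gcm', key, iv);
decipher.setAuthTag(authTag);
const decrypted = Buffer.concat([decipher.update(encrypted), decipher.final()]);
console.log(decrypted.toString('utf8')); // streng geheim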



The Angular Bundle Optimizer under the Hoods

Thu, 27 Jul 2017 00:00:00 +0100

Thanks to Filipe Silva, who reviewed this article, and to Rob Wormald for a lot of insights regarding this technology.

In my last article, I've shown that the Angular Build Optimizer transforms the emitted JavaScript code to make tree shaking more efficient. To demonstrate this, I created a simple scenario that includes two modules of Angular Material without using them. With the Build Optimizer, the CLI/webpack was able to reduce the bundle size by about half by leveraging tree shaking. If you are wondering how such amazing results are possible, you can find some answers in this article. Please note that when writing this, the Angular Build Optimizer is still experimental. Nevertheless, the results shown above are very promising.

Tree Shaking and Side Effects

The CLI uses webpack for the build process. To make tree shaking possible, webpack marks exports that are not used and can therefore be safely excluded. In addition, a typical webpack configuration uses UglifyJS to remove these exports. Uglify tries to be on the safe side and does not remove any code that could be needed at runtime. For instance, when Uglify finds that some code could produce side effects, it keeps it in the bundle. Look at the following (artificial and obvious) example that demonstrates this:

(function(exports){
    exports.pi = 4; // Let's be generous!
})(...);

Unfortunately, when transpiling ES2015+/TypeScript classes down to ES5, the class declaration results in imperative code, and UglifyJS as well as other tools cannot make sure that this code isn't producing side effects. A good discussion regarding this can be found on GitHub. That's why the code in question stays in the bundle even though it could be removed.

To assure myself of this fact, I created a simple npm-based Angular package using Jurgen Van de Moere's Yeoman generator for Angular libraries, as well as a CLI-based application that references it. The package's entry point exports an Angular module with an UnusedComponent that -- as its name implies -- isn't used by the application. It also exports an UnusedClass. In addition, it exports an OtherUnusedClass as well as a UsedClass from the same file (ES module):

export {UnusedClass} from './unused';
export {OtherUnusedClass, UsedClass} from './partly-used';
export {SampleComponent} from './sample.component';
export {UnusedComponent} from './unused.component';

@NgModule({
    imports: [ CommonModule ],
    declarations: [ SampleComponent, UnusedComponent ],
    exports: [ SampleComponent, UnusedComponent ]
})
export class SampleModule {
}

When building the whole application without the Angular Build Optimizer, none of the unused classes are tree-shaken off. One reason for this is that -- as mentioned above -- Uglify cannot make sure that the transpiled classes don't introduce side effects.

Marking pure Code Blocks

To compensate for this issue, the Angular Build Optimizer marks transpiled class declarations that don't produce side effects with a special /*@__PURE__*/ comment. UglifyJS, on the other hand, respects those comments and removes such code blocks if they are not referenced (see the sketch at the end of this entry). Using this technique, we can get rid of the unused classes, but not of the unused component. The reason for this is, as the next section shows, Angular's module system.

Removing Angular Decorators

As the NgModule decorator defines an array with its components, directives, pipes and services, there is always a reference to them when importing the module. This prevents tools from tree shaking unused building blocks off. But we are lucky, because after A[...]
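Here is the sketch referred to above: a schematic, hand-simplified example of what a transpiled ES5 class looks like once it has been marked as pure (not verbatim Build Optimizer output):

// ES5 output for an exported class, schematically. Without the comment,
// UglifyJS has to assume the IIFE might have side effects and keeps it.
var UnusedClass = /*@__PURE__*/ (function () {
    function UnusedClass() {
    }
    UnusedClass.prototype.doSomething = function () {
        return 42;
    };
    return UnusedClass;
}());
// Thanks to /*@__PURE__*/, Uglify may drop the whole block
// if UnusedClass is never referenced.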



Shrinking Angular Bundles with the Angular Build Optimizer

Wed, 26 Jul 2017 17:00:00 +0100

Thanks to Filipe Silva, who reviewed this article, and to Rob Wormald for a lot of insights regarding this technology. Also thanks to Sander Elias, who gave important feedback.

Beginning with version 1.3.0-rc.0, the Angular CLI makes use of the Angular Build Optimizer, a nifty tool that transforms the emitted JavaScript code to make tree shaking more efficient. This can result in huge improvements in bundle size. In this post I'm describing some simple scenarios that show the potential of this newly introduced tool. If you want to reproduce these scenarios, you'll find the source code in my GitHub repository. Please note that the Angular Build Optimizer was still experimental when this was written and is therefore subject to change. Nevertheless, as shown below, it comes with a high potential for shrinking bundles.

Scenario 1

To demonstrate the power of the Angular Build Optimizer, I'm using a simple Angular application I've created with the Angular CLI. After scaffolding, I added the MdButtonModule and the MdCheckboxModule from Angular Material as well as Angular's NoopAnimationsModule and FormsModule:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { AppComponent } from './app.component';
import { MdButtonModule, MdCheckboxModule } from '@angular/material';
import { NoopAnimationsModule } from '@angular/platform-browser/animations';
import { FormsModule } from "@angular/forms";

@NgModule({
    declarations: [
        AppComponent
    ],
    imports: [
        BrowserModule,
        FormsModule,
        NoopAnimationsModule,
        MdButtonModule,
        MdCheckboxModule
    ],
    providers: [],
    bootstrap: [AppComponent]
})
export class AppModule { }

Please note that I'm using none of the added modules. I've just added them to find out how good the CLI/webpack is, in combination with the Angular Build Optimizer, at shaking them off. After this, I created a production build without the Build Optimizer and another one using it. For the latter, I leveraged the new -bo command line option:

ng build --prod -bo

The results are amazing: with the Angular Build Optimizer, the bundle size after tree shaking is about half. This fits the observations I wrote down here some months ago: there are situations that prevent tree-shaking implementations from doing their job as well as they could. The Build Optimizer seems to compensate for this.

Scenario 2

After this, I added one component from each of the two included Angular Material modules to find out whether this influences tree shaking. This led to the following results: of course, both bundles are bigger, because now I'm using more parts of the included modules. But, as before, with the Angular Build Optimizer the bundles are about half as big.

Scenario 3

Perhaps you are wondering what overhead the two Angular Material modules introduce in the scenarios above. To find this out, I removed the references to their Angular modules and created two more builds -- one with and one without the Angular Build Optimizer. Compared to Scenario 1, this shows that with the Build Optimizer it is possible to shake off most parts of the Material Design modules when they are imported but not used.

Current Limitations

As mentioned above, when writing this the Angular Build Optimizer was still experi[...]



DDD & Co., Part 4: Aggregates

Wed, 26 Jul 2017 11:37:00 +0200

After the commands and domain events were defined in the previous installments and placed in a bounded context, the aggregate is the last important DDD concept still missing.



"Eine 100-Millionen-Dollar-Cross-Plattform-App mit ASP.NET Core, Angular und Electron"

Wed, 26 Jul 2017 10:00:00 +0200

On August 30 at 6 p.m., the .NET User Group in Ratingen looks at cross-platform web and desktop applications with ASP.NET Core on the server and TypeScript, Angular and the Electron framework on the client.



Creating an email form with ASP.NET Core Razor Pages

Wed, 26 Jul 2017 00:00:00 Z

In the comments of my last post, I was asked to write about how to create an email form using ASP.NET Core Razor Pages. The reader also asked for a tutorial about authentication and authorization; I'll write about that in one of the next posts. This post is just about creating a form and sending an email with the form values.

Creating a new project

To try this out, you need to have the latest preview of Visual Studio 2017 installed (I use 15.3.0 preview 3), and you need .NET Core 2.0 Preview installed (2.0.0-preview2-006497 in my case). In Visual Studio 2017, use "File... New Project" to create a new project. Navigate to ".NET Core", choose the "ASP.NET Core Web Application (.NET Core)" project and choose a name and a location for the new project. In the next dialogue, you probably need to switch to ASP.NET Core 2.0 to see all the newly available project types. (I will write about the other ones in the next posts.) Select "Web Application (Razor Pages)" and press "OK". That's it: the new ASP.NET Core Razor Pages project is created.

Creating the form

It makes sense to use the Contact.cshtml page to add the new contact form; the Contact.cshtml.cs is the PageModel to work with. Inside this file, I added a small class called ContactFormModel. This class will contain the form values after the post request is sent.

public class ContactFormModel
{
    [Required]
    public string Name { get; set; }
    [Required]
    public string LastName { get; set; }
    [Required]
    public string Email { get; set; }
    [Required]
    public string Message { get; set; }
}

To use this class, we need to add a property of this type to the ContactModel:

[BindProperty]
public ContactFormModel Contact { get; set; }

This attribute does some magic: it automatically binds the ContactFormModel to the view and contains the data after the post is sent back to the server. It is actually MVC model binding, just provided in a different way. If we have the regular model binding, we should also have a ModelState. And we actually do:

public async Task<IActionResult> OnPostAsync()
{
    if (!ModelState.IsValid)
    {
        return Page();
    }

    // create and send the mail here

    return RedirectToPage("Index");
}

This is an async OnPost method, which looks pretty much like a controller action: it returns a Task of IActionResult, checks the ModelState, and so on. Let's create the HTML form for this code in Contact.cshtml. I use Bootstrap (just because it's available) to format the form, so the HTML code contains some overhead:

Contact us




Entity Framework Core reverse engineering with the dotnet command-line tool

Mon, 24 Jul 2017 10:00:00 +0200

Instead of PowerShell commandlets, the cross-platform dotnet command-line tool from the .NET Core SDK can also be used when developing .NET Core projects.



New Visual Studio Web Application: The ASP.NET Core Razor Pages

Mon, 24 Jul 2017 00:00:00 Z

I think everyone who followed the last couple of ASP.NET Community Standup sessions has heard about Razor Pages. Did you try them? I didn't; I focused completely on ASP.NET Core MVC and Web API. With this post I'm going to have a first look and try them out. I was also a little skeptical about Razor Pages and compared them to the old ASP.NET Web Site project. That was definitely wrong. You need to have the latest preview of Visual Studio 2017 installed on your machine, because Razor Pages come with the ASP.NET Core 2.0 preview. They are based on ASP.NET Core and part of the MVC framework.

Creating a Razor Pages project

Using Visual Studio 2017, I used "File... New Project" to create a new project. I navigated to ".NET Core", chose the "ASP.NET Core Web Application (.NET Core)" project and chose a name and a location for the project. In the next dialogue, I needed to switch to ASP.NET Core 2.0 to see all the newly available project types. (I will write about the other ones in the next posts.) I selected "Web Application (Razor Pages)" and pressed "OK".

Program.cs and Startup.cs

If you are already familiar with ASP.NET Core projects, you'll find nothing new in Program.cs and Startup.cs; both files look pretty much the same.

public class Program
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build();
}

The Startup.cs has a services.AddMvc() and an app.UseMvc() with a configured route:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
        app.UseBrowserLink();
    }
    else
    {
        app.UseExceptionHandler("/Error");
    }

    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}

That means the Razor Pages are actually part of the MVC framework, as Damian Edwards always said in the Community Standups.

The solution

But the solution looks a little different. Instead of Views and Controllers folders, there is only a Pages folder with the Razor files in it. There are familiar files, too: _Layout.cshtml, _ViewImports.cshtml and _ViewStart.cshtml. Within _ViewImports.cshtml we also have the import of the default TagHelpers:

@namespace RazorPages.Pages
@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers

This makes sense, since the Razor Pages are part of the MVC framework. We also have the standard pages of every new ASP.NET project: Home, Contact and About. (I'm going to have a look at these files later on.) As with every new web project in Visual Studio, this project type is ready to run. Pressing F5 starts the web application and opens the URL in the browser.

Frontend

For the frontend dependencies, bower is used. It puts all the stuff into wwwroot/lib, so even this works the same way as in MVC. Custom CSS and custom JavaScript are in the css and js folders under wwwroot. This should all be familiar to ASP.NET Core developers. The way the resources are used in _Layout.cshtml is also the same.

Welcome back "Code Behind"

This was my first thought, for just a second, when I saw that there are nested files under the Index, About, Co[...]



Buch: "Moderne Datenzugriffslösungen mit Entity Framework Core 1.x und 2.0: Datenbankprogrammierung mit .NET/.NET Core und C#"

Thu, 20 Jul 2017 08:21:00 +0200

My book has been published in a new edition. It has grown to around 300 pages and, besides version 1.1.2, already covers the second major version of Microsoft's new object-relational mapper.



DDD & Co., Part 3: Commands and events

Wed, 19 Jul 2017 10:28:00 +0200

The result of the previous installment was a definition of the domain events for a to-do list. They are usually caused by the user's actions, which express their intentions and wishes. How can that be modeled?



Introduction to Node, Episode 19: Writing documentation

Mon, 17 Jul 2017 11:27:00 +0200

Writing documentation is one of the less-loved tasks of developers. Still, the topic is important, because good documentation makes working with a component dramatically easier. How does this work in the context of Node?



Microsoft announces the roadmap for PowerShell Core 6.0

Mon, 17 Jul 2017 09:57:00 +0200

PowerShell Core 6.0 is to be released once broad compatibility with Windows PowerShell 5.1 has been reached. At the same time, the roadmap seals the end of the classic Windows PowerShell.



LightCore is back - LightCore 2.0

Mon, 17 Jul 2017 00:00:00 Z

Until now it has been pretty hard to move an existing, more complex .NET Framework library to .NET Core or .NET Standard 1.x. More complex in my case just means, for example, that this specific library uses reflection a little more than some others. I'm talking about the LightCore DI container. I started to move it to .NET Core in November 2015, but gave up half a year later, because it took much more time to port it to .NET Core and because it made much more sense to port it to .NET Standard than to .NET Core. The announcement of .NET Standard 2.0 made me much more optimistic about getting it done with much less effort. So I stopped moving it to .NET Core and .NET Standard 1.x and waited for .NET Standard 2.0.

LightCore is back

Some weeks ago the preview version of .NET Standard 2.0 was announced, and I tried again. It works as expected: the API of .NET Standard 2.0 is big enough to get the old LightCore sources running. Rick Strahl also wrote some pretty cool and detailed posts about it:

- Upgrading to .NET Core 2.0 Preview
- Multi-Targeting and Porting a .NET Library to .NET Core 2.0

The current status

I created .NET Standard 2.0 libraries for most of the projects. I didn't change most parts of the code; the XAML configuration stuff was an exception. I also ported the existing unit tests to .NET Core xUnit test libraries, and the tests are running. That means LightCore definitely runs with .NET Core 2.0.

We'll get one breaking change: I moved the file-based configuration from XAML to JSON, because the XAML serializer is not (maybe not yet) supported in .NET Standard. We'll get another breaking change: I don't want to support Silverlight anymore. Silverlight users should use the old packages instead. Sorry about that.

Roadmap

I don't really want to release the new version until .NET Standard 2.0 is released, because I don't want a final version of LightCore to reference the preview versions of the .NET Standard libraries. That means there is still some time to get the open issues done.

- 2.0.0-preview1 (end of July 2017): uses the preview versions of .NET Standard 2.0 and .NET Core 2.0
- 2.0.0-preview2 (end of August 2017): uses the preview versions of .NET Standard 2.0 and .NET Core 2.0, maybe the finals if they are released by then
- 2.0.0 (September 2017): depends on the release of .NET Standard 2.0 and .NET Core 2.0

Open issues

The progress is good, but there is still something to do before the release of the first preview:

- We need the same tests as .NET Framework-based unit tests, to be sure: https://github.com/JuergenGutsch/LightCore/issues/4
- We need to finalize the ASP.NET integration package to get it running again in ASP.NET 4.x. This should be a small step, because it is almost done; the Web API integration and the sample application are still needed: https://github.com/JuergenGutsch/LightCore/issues/4
- We need to complete the ASP.NET Core integration package to get it running in ASP.NET Core. This needs some more time, because it needs a generic and proper way to move the existing service registrations from ASP.NET Core's IServiceCollection to LightCore: https://github.com/JuergenGutsch/LightCore/issues/6
- Continuous integration and deployment using AppVeyor should be set up: https://github.com/JuergenGutsch/LightCore/issues/2 https://github.com/JuergenGutsch/Li[...]



.NET Stammtisch in Linz

Fri, 14 Jul 2017 09:10:48 Z

The .NET Stammtisch in Linz has announced its second meeting:

Event URL (with details): https://www.facebook.com/events/1003694579733452/
Date: Wednesday, July 26, 6:30 p.m.
Location: Coworking Cube, Obere Landstr. 19, Pucking, Oberösterreich, Austria

Rainer Stropek speaks about "Dockerizing .NET Apps": .NET and C# are great tools in their own right for building modern software products. But if you add a little...



Directly upgrading from AngularJS 1.x to Angular without preparing the existing Code Base

Fri, 14 Jul 2017 00:00:00 +0100

When upgrading from AngularJS 1.x to Angular (2/4/5 etc.), we usually prepare our AngularJS 1.x code base first. This can involve leveraging new AngularJS 1.x techniques like components; additionally, introducing TypeScript as well as module loaders like SystemJS or webpack are further tasks to prepare the existing code. The goal behind this is to draw nearer to Angular in order to allow a better integration. But in some situations, preparing the existing code is too costly. For instance, think about situations where we just want to write new parts of the application with Angular without modifying much of the existing AngularJS 1.x code. When this holds true for your project, skipping the preparation phase could be a good idea. This post shows, step by step, how this approach can be accomplished. Like the official and well-written upgrading tutorial, which includes preparing the code base, it upgrades the popular AngularJS 1.x Phone Catalog sample. Even though this sample leverages the AngularJS components introduced with AngularJS 1.5, everything shown here also works with more "traditional" AngularJS code using controllers and directives. The whole sample can be found in my GitHub repository. To make following along easier, I've also created one commit for each of the steps described here.

Step 1: Creating the new Angular Application

As a starting point, this article assumes that we are scaffolding a new Angular application with the Angular CLI:

ng new migrated

To make the structure of this new solution clearer, create one folder for the existing AngularJS code and another one for the new Angular code within src. In the sample presented here, I've used the names ng1 and ng2. After this, move every generated file except the shown tsconfig.app.json, tsconfig.spec.json, favicon.ico and index.html into the ng2 folder. To inform the CLI's build task about this new structure, adapt the file .angular-cli.json. Using the assets section in this file, we can also tell the CLI to copy the ng1 folder directly to the output directory:

{
  "$schema": "./node_modules/@angular/cli/lib/config/schema.json",
  "project": {
    "name": "migrated"
  },
  "apps": [
    {
      "root": "src",
      "outDir": "dist",
      "assets": [
        "ng1",
        "assets",
        "favicon.ico"
      ],
      "index": "index.html",
      "main": "ng2/main.ts",
      "polyfills": "ng2/polyfills.ts",
      "test": "ng2/test.ts",
      "tsconfig": "tsconfig.app.json",
      "testTsconfig": "tsconfig.spec.json",
      "prefix": "app",
      "styles": [
        "ng2/styles.css"
      ],
      "scripts": [],
      "environmentSource": "ng2/environments/environment.ts",
      "environments": {
        "dev": "ng2/environments/environment.ts",
        "prod": "ng2/environments/environment.prod.ts"
      }
    }
  ],
  "e2e": {
    "protractor": {
      "config": "./protractor.conf"
    }
  },
  "lint": [
    { "project": "tsconfig.app.json" },
    { "project": "tsconfig.spec.json" },
    { "project": "tsconfig.e2e.json" }
  ],
  "test": {
    "karma": {
      "config": "./karma.conf"
    }
  },
  "defaults": {
    "styleExt": "css",
    "component": {}
  }
}

Now copy the whole AngularJS 1.x applica[...]
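The truncated steps presumably continue with wiring the two frameworks together. For orientation, here is a minimal sketch of how an AngularJS application is typically bootstrapped from Angular with @angular/upgrade/static; the AngularJS module name 'phonecatApp' is an assumption, not taken from the post:

import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { UpgradeModule } from '@angular/upgrade/static';

@NgModule({
    imports: [BrowserModule, UpgradeModule]
})
export class AppModule {
    constructor(private upgrade: UpgradeModule) { }

    // Instead of bootstrapping an Angular component, hand control
    // to the existing AngularJS module.
    ngDoBootstrap() {
        this.upgrade.bootstrap(document.body, ['phonecatApp']);
    }
}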



Slides for the talk "Lernen durch Üben"

Thu, 13 Jul 2017 09:00:54 Z

I have uploaded my slides for the talk "Lernen durch Üben" (learning by practicing), which I gave at DWX 2017, to Speakerdeck.





What does PowerShell Core 6.0 bring?

Tue, 11 Jul 2017 11:40:00 +0200

PowerShell Core is the platform-neutral variant of Windows PowerShell. It runs not only on Windows, but also on Linux and macOS.



Slides for the talk "Nextlevel Clean Code Development"

Tue, 11 Jul 2017 07:40:39 Z

The slides for my talk "Nextlevel Clean Code Development" can be found online on Speakerdeck.





Interview from DWX17 on Nextlevel Clean Code Developer

Mon, 10 Jul 2017 09:18:32 Z

During DWX 2017 in Nuremberg, Tilman Börner, editor-in-chief of dotnetpro, interviewed me. In the interview, we chat about the development of the Clean Code Developer initiative.





Tracking technical debt with NDepend 2017.2

Mon, 10 Jul 2017 06:24:09 Z

By now I use NDepend frequently to analyze my applications. The static code analysis doesn't deliver a complete picture of an application's state, but it provides enough information to detect problems early. I already described the earlier versions here and here. In the newest version, the topic of technical debt gets a lot of attention. A […]



Three times in a row

Mon, 10 Jul 2017 00:00:00 Z

On July 1st, I got the email from the Global MVP Administrator: I received the MVP award for the third time in a row :)

I'm pretty proud about that and I'm happy to be part of the great MVP community one year more. I'm also looking forward to the Global MVP Summit in March to meet all the other MVPs from around the world.


Not really a fan-boy...?

I'm also proud of that because I wouldn't really call myself a Microsoft fan-boy. And sometimes I also criticize some tools and platforms built by Microsoft (and feel like a bad boy). But I like most of the development tools built by Microsoft, I like using those tools and frameworks, and I really like the new and open Microsoft: the way Microsoft now supports more than its own technologies and platforms. I like using VSCode, TypeScript and webpack to create NodeJS applications, and VSCode and .NET Core on Linux to build applications on a platform other than Windows. I also like to play around with UWP apps on Windows IoT on a Raspberry Pi.

There are many more possibilities, many more platforms and many more customers to reach using the current Microsoft development stack. And it is really fun to play with it, to use it in real projects, to write about it in .NET magazines and in this blog, and to talk about it in user groups and at conferences.

Thanks

But I wouldn't have been honored again without such a great development community, and I wouldn't continue to contribute without the positive feedback and the great people in it. This is why the biggest "Thank You" goes to the development community :)

Sure, I also need to say "Thank You" to my great family (my lovely wife and my three kids), who support me in spending so much time contributing to the community. I also need to say thanks to my company and my boss for supporting me and allowing me to use part of my working time to contribute to the community.




Webpack 2: passing parameters to the build process on the console

Fri, 07 Jul 2017 21:00:11 Z

Most webpack examples contain ready-made configurations stored as npm scripts. But if you use an automated build process, e.g. via TFS, you may want to pass some variables to the build process from the outside. One of them could be the "baseUrl", for instance when the application is not hosted in the root of the web server. If […]
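A sketch of how this can look with webpack 2 (my own minimal example; only the "baseUrl" name is taken from the text above): exporting the configuration as a function makes values passed on the command line via --env available, e.g. webpack --env.baseUrl=/app/.

// webpack.config.js -- webpack 2 calls this function with the --env values.
module.exports = (env = {}) => ({
    entry: './src/main.ts',
    output: {
        path: __dirname + '/dist',
        filename: 'bundle.js',
        // hand the externally supplied base URL to the bundle
        publicPath: env.baseUrl || '/'
    }
});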



CSS IntelliSense in Visual Studio with Angular and ReSharper

Fri, 07 Jul 2017 17:37:31 Z

Sometimes you could say that everything used to be better. At least with Angular 1 and the "old" approach of referencing JavaScript and styles directly, it was easier to get the right IntelliSense for CSS classes in HTML with the help of ReSharper. This concerned, for example, Bootstrap and Font Awesome […]



Angular Fundamentals Workshop in Karlsruhe

Thu, 06 Jul 2017 11:22:19 Z

To support the NOSSUED software development open space, we are offering the first two participants who send a copy of their ticket to schulung@co-IT.eu free attendance at our three-day Angular 4 Fundamentals workshop. The training runs July 11-13 in our modern, air-conditioned offices with height-adjustable desks in the heart of Karlsruhe. Our company profile tells you more. …



#NOSSUED Software Development Open Space 2017

Thu, 06 Jul 2017 08:47:56 Z

It's that time again: #nossued, the software open space of the .NET user group in Karlsruhe, is taking place for the eighth time on July 14-16. Registration is here. As always, you can choose your own participation fee. You can find impressions from the past open spaces on my blog. My article about the open space format was kindly published by dotnetpro …



DDD & Co., Part 2: Semantics instead of CRUD

Wed, 05 Jul 2017 09:42:00 +0200

CRUD is not the answer to every question. The biggest criticism is its semantics, limited to four verbs: domain experts and users don't think in CREATE, READ, UPDATE and DELETE. Which words do they use instead?



DWX 2017: Materials for my sessions

Mon, 03 Jul 2017 17:17:00 Z


As in previous years, Developer Week, Germany's largest developer conference, took place again this year at the NCC Ost in Nuremberg. I would like to offer all materials for my sessions and workshops for download here. The archive contains everything that was produced during my sessions: slides, notes in OneNote, and all Visual Studio (2017) projects.

Sessions

Workshops
Important note: the workshop archive is protected with a password that was given to all participants during the workshop. If anyone has forgotten it, you can ask for it by email.

I would like to take this opportunity to thank all participants of my sessions and the workshop once again; it really was an incredibly great and fun event.




Autoscaling Azure Web Apps

Mon, 03 Jul 2017 08:48:36 Z

Recent changes in Azure brought some significant changes to the autoscaling options for Azure Web Apps (Azure App Service, to be precise, as scaling happens at the App Service plan level and affects all Web Apps running in that App Service plan). This article describes how you can configure basic autoscaling based on CPU...



DevOps – technical resources

Mon, 03 Jul 2017 08:16:22 Z

There is a new resource library around the topic of DevOps. There you will find everything you need to get from zero to DevOps as quickly as possible. The site explains the strategic importance of DevOps for software development as well as …



Managing PowerShell modules in a company's own repository

Fri, 30 Jun 2017 12:06:00 +0200

To protect against malware, organizations can create their own PowerShell module repositories containing modules that have been vetted and approved.



Non-cryptographic hash functions for .NET

Fri, 30 Jun 2017 00:15:00 Z

Creating hashes is quite common to check whether content X has changed without looking at the whole content of X. Git, for example, uses SHA-1 hashes for each commit. SHA-1 itself is a pretty old cryptographic hash function, but in the case of Git there might have been better alternatives available, because the "to-be-hashed" content is not crypto-relevant -- it's just a content marker. Well... in the case of Git the current standard is SHA-1, which works, but a "better" way would be to use non-cryptographic functions for non-crypto purposes.

Why you should not use crypto hashes for non-crypto purposes

I discovered this topic via a Twitter conversation that pointed me to the Data.HashFunction package.

The author of this awesome package is Brandon Dahler, who created .NET versions of the most well-known algorithms and published them as NuGet packages.

The source and everything can be found on GitHub.

Lessons learned

If you want to hash something and it is not crypto-relevant, it would be better to look at one of those Data.HashFunction implementations -- some are pretty crazy fast.

I’m not sure which one is ‘better’ - if you have some opinions please let me know. Brandon created a small description of each algorithm on the Data.HashFunction documentation page.

(My blogging backlog is quite long, so it took me six months to write this down ;))




DDD & Co., Part 1: What's wrong with CRUD

Wed, 28 Jun 2017 10:13:00 +0200

The TodoMVC application is the "Hello World" of the web: what was originally intended purely as a comparison of different UI frameworks has since matured into an ecosystem of its own. If you want to implement the application on the server side, why not with domain-driven design (DDD)?



ibiola – a new way to experience mobility!

Tue, 27 Jun 2017 08:14:07 Z

Would you like to offer your employees more mobility while reducing your fleet costs by 35%? Then corporate carsharing is the right thing for you. And if everything should simply run smoothly, ibiola® is the right solution! What gives fleet managers headaches? An underutilized fleet (long idle times) - you pay excessive fixed costs - every hour, every day...






Xamarin exception while debugging: Could not load file or assembly Mono.Posix

Mon, 26 Jun 2017 14:03:38 +0200

After one of the recent updates to Visual Studio 2017.2, I got the following error message while debugging an Android app:

EXCEPTION: Mono.Debugging.Soft.DisconnectedException: The connection with the debugger has been lost. The target application may have exited. ---> System.IO.FileNotFoundException: Can not load 'Mono.Posix, Version=2.0.0.0, Culture=neutral, PublicKeyToken=0738eb9f132ed756' or its dependencies. File not found.
   at Mono.Debugging.Soft.SoftDebuggerSession.ResolveSymbolicLink(String path)
   at Mono.Debugging.Soft.SoftDebuggerSession.PathsAreEqual(String p1, String p2)
   at Mono.Debugging.Soft.SoftDebuggerSession.FindLocationByMethod(MethodMirror method, String file, Int32 line, Int32 column, Boolean& insideTypeRange)
   at Mono.Debugging.Soft.SoftDebuggerSession.FindLocationByType(TypeMirror type, String file, Int32 line, Int32 column, Boolean& genericMethod, Boolean& insideTypeRange)
   at Mono.Debugging.Soft.SoftDebuggerSession.ResolveBreakpoints(TypeMirror type)
   at Mono.Debugging.Soft.SoftDebuggerSession.HandleTypeLoadEvents(TypeLoadEvent[] events)
   at Mono.Debugging.Soft.SoftDebuggerSession.HandleEventSet(EventSet es)
   at Mono.Debugging.Soft.SoftDebuggerSession.EventHandler()

Fortunately, a solution to the problem was quickly found. As the Xamarin release page explains, the file Mono.Posix.dll is indeed not installed by the Visual Studio installer. Until the problem is fixed by an update, the following workaround, taken from the link above, helps:

1. Download the missing Mono.Posix file.
2. Unpack the archive.
3. In the properties of Mono.Posix.dll, check whether the file was signed by Xamarin Inc.
4. If necessary, "unblock" the file in the General tab of the properties.
5. For Visual Studio 2017, copy Mono.Posix.dll into the "Xamarin.VisualStudio" extensions directory. On my machine this was: C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\Extensions\Xamarin.VisualStudio
6. Close and restart Visual Studio if it was running during the copy.

Fortunately, the problem should only be temporary: it has already been set to resolved in the Xamarin Bugzilla, so it should be fixed with Visual Studio 2017.3. [...]