Subscribe: Hernan de Lahitte's blog
http://weblogs.asp.net/hernandl/rss.aspx

Hernan de Lahitte's blog



.NET Development from the trenches



 



SuuntoSummit

Wed, 08 Feb 2017 16:35:00 GMT

I live in Argentina and I've been using Suunto watches for some years now, in particular the Ambit family from the first version up to the Ambit3 Peak, plus some variations of the Sport models. All of them were amazing and super reliable in all the conditions and environments where I used them, from my Nepal climbing and race trip (http://www.everestmarathon.com/article-Everest_Marathon_with_Island_Peak) with my Ambit3 Peak. I also ran the Salomon Ultra Pirineu 110K with my current Spartan Ultra, purchased in September 2016, and sent some feedback about its usage to Suunto HQ, where they took notes and added it to the feature and fix list. I never thought of changing brand, simply because I think Suunto makes the strongest and highest quality watches for the kind of extreme tests and races I do.

Based on my experience with Suunto for running, I'm something of a reference in my running team of over 500 runners of all distances, and many of them follow my advice on which model to buy and when. I also bought an Ambit3 Peak for our coach for his 50th birthday. The upcoming 2017 race season includes Transvulcania Ultra, the Western States 100-Mile Endurance Run (as pacer), Tromso Skyrace, the Bear 100 miles (Utah), and some other 50K races in Patagonia, Argentina, my home country. I will continue to test the Spartan in these races and adventures and send feedback to make it the best mountain watch ever. In this regard, and because I work as a Software Engineer, I was very interested in the evolution of the Spartan platform and how I can contribute to it, so visiting the HQ would be a great chance to learn and perhaps give my humble feedback on this great gear. Looking forward to many dreamed-of experiences in 2017 (visiting the Suunto factory? yeah!!).

Video: https://www.youtube.com/embed/VclyRDuefAI

My Ambit3 Peak put to the test from Everest Base Camp all the way to Namche and beyond, for 60 treacherous km.

Running the Everest Extreme 60K, starting from Everest Base Camp to Namche Bazaar. Testing my new Spartan Ultra back in September 2016 before racing at Ultra Pirineu, Spain.

Video: https://www.youtube.com/embed/D-AJm5sPVfM

El Chalten and the Fitz Roy Massif in the back, Patagonia, Argentina. Running Ultra Pirineu 2016 in September with my Spartan Ultra. It ran out of battery after 19 hours in 26-hour mode, so I sent my feedback about fixing/enhancing battery life. Suunto tagged my picture on Instagram! My Suunto Ambit3 Peak. Getting ready for an intense 2017 race season with my current Spartan Ultra (picture from Ultra Pirineu 2016).

Visit my other social media for more info: Facebook (Sports page), Facebook (personal page), Instagram #suuntosummit[...]



Day 14 - Nepal Trip 2016

Mon, 06 Jun 2016 03:41:00 GMT

DAY 14

Place: Everest Base Camp, 5,340 m. Activity: race day, the "Everest Extreme 60K".

"Outside my comfort zone is where I learn who I am, where I can face my limits and leave them behind, where I can make the right decisions or learn from the wrong ones."

And the day finally arrived. The "perfect excuse" to close this incredible and magical trip. As on the previous nights, I knew I would beat the alarm set for 4:30 am. Just as I decided to get up, the noise of a nearby avalanche finished waking me. The night had been an almost lost battle for a decent sleep, wrapped in a double sleeping bag against the -3 °C inside my tent, according to my watch. The sporadic crack-crack coming from the glacier ice beneath the whole camp helped the insomnia a bit. The plan was simple: at 5 am they would serve something light to eat; at 5:45 we had to report to the start line, set directly on the ice itself at the foot of the infamous and striking Khumbu Icefall, a very fitting name for the gigantic mass of ice, hundreds of meters high, that leads to Camp 1 of Everest. At 6 am sharp we started: 10 foreigners and 6 Nepalis (whom, within minutes, we stopped seeing for good). Minutes earlier we had handed our jackets to the teammates from the group with whom we had climbed one of the 6,000ers days before, and who would run the 42K an hour behind us. Of our eclectic, mostly European group, only the Englishman (on my right) and I were running the ultra. (Photo of the start, a few minutes before taking off our jackets.) The first 5K to the first aid station in Gorak Shep were barely runnable, through the ice and rocks of the glacier, as you can see in the photos below.

Even so, I could manage with some ease, since the day before we had scouted part of this stretch to get to know the most complicated terrain, which was right at the exit of the camp. Once in Gorak Shep (like a town in the American Old West, but swap the horses for yaks and the Rockies for the greats of the Himalaya, see photo), I load up some water and continue to the next long stretch of about 7K to Lobuche, where we would pass excellent views, already crossing below the 5,000 m line on a "gentle" but rocky descent with many ups and downs. Every so often we crossed small caravans of yaks moving through the most complicated spots, carrying heavy loads, as in the photo.

Past this second station, also water only, came the stretch with the memorial to those fallen on Everest, and then a long, very technical descent to the next hydration station. So far my nutrition was going well, but I was quickly exhausting my supplies, since there was nothing substantial at the stations to restock or substitute. At that point the organization had given no details about the kind of food, so I depended on what I carried. If things did not improve further on, I would have to ration, assuming the stations marked as having food would not offer much of use. Two hours into the race, and already at the 4,500 m line, I finally reached a familiar plain where, several days earlier during the acclimatization ascent, I had gone out to try my first hour of running at that altitude, after two weeks without running because of my lower-back pain. This stretch went by relatively quickly, and I reach Dingboche, an important village after Namche and a strategic point of the race. People cheering at the entrance to the town, and at the aid station I manage to get some boc[...]



Semantic Logging Extensibility and Reactive Extensions (Rx)

Thu, 30 May 2013 21:59:00 GMT

If you have already played with the new Semantic Logging Application Block (which I strongly recommend if you are using the new typed events in .NET 4.5), perhaps you have noticed the wide extensibility provided in both in-process and out-of-process scenarios. Not only can you write your own "sinks" to add new event repositories or listeners, but you also get complex filtering capabilities with the Observable pattern, which allows the use of Rx, as shown in the new Quickstarts added in the Wave 2 release of Entlib v6.

Let me show how you can go beyond something that is fixed by design, like the event payload format (JSON) of the DB sinks, and change that format (and any other event data) as you wish. Let's say that you set up your DB sink like this:

    this.dbListener = SqlDatabaseLog.CreateListener(...[all args here]);

Now it would be great if this helper method provided an additional argument with some lambda that could dictate the way we format or set the data to be written. So, by writing a simple class similar to the provided helper method "CreateListener", and extending the event stream with the Rx Query Library package, we can do something like this:

    public static SinkSubscription<SqlDatabaseSink> LogToSqlDatabaseFormatted(
        this IObservable<EventEntry> eventStream,
        [same args as 'LogToSqlDatabase'],
        Func<EventEntry, string> payloadFormatter = null)
    {
        var sink = new SqlDatabaseSink([same args]);

        // using the System.Reactive.Linq extensions
        IDisposable subscription = eventStream
            .Select(entry => ConvertToEventRecord(entry, payloadFormatter))
            .Subscribe(sink);

        return new SinkSubscription<SqlDatabaseSink>(subscription, sink);
    }

In this case, 'ConvertToEventRecord' will simply map the properties between EventRecord and EventEntry, along with the custom processing of the payload property via 'payloadFormatter'.
For more details, and also an example of a custom sink with an out-of-process implementation, you can download a sample solution (see Attachment at the bottom) that showcases this, using NuGet packages for the required binaries except the Etw library, which is included in the Semantic Logging Service [...]
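To make the mapping step concrete, here is a minimal sketch of what a 'ConvertToEventRecord' helper could look like. Note that the shape of the 'EventRecord' class (its property names and types) is purely illustrative here, not the actual type from the attached sample:

```csharp
// Illustrative sketch only: 'EventRecord' stands for whatever DTO your
// custom SqlDatabaseSink consumes; it is not a type shipped by SLAB.
private static EventRecord ConvertToEventRecord(
    EventEntry entry, Func<EventEntry, string> payloadFormatter)
{
    return new EventRecord
    {
        ProviderId = entry.ProviderId,
        EventId = entry.EventId,
        Timestamp = entry.Timestamp,
        // Use the caller-supplied formatter when present; otherwise fall
        // back to a simple default rendering of the payload values.
        Payload = payloadFormatter != null
            ? payloadFormatter(entry)
            : string.Join(",", entry.Payload)
    };
}
```

The interesting design point is that the formatting decision stays a plain `Func<EventEntry, string>`, so callers can inject JSON, XML, or any custom rendering without touching the sink itself.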



Your best friend for authoring ETW strongly typed events

Thu, 02 May 2013 17:34:00 GMT

So you were reading some good tutorials on how to write your first typed event class, but for some reason it may not work as expected? (Or you hit any of many other issues as well.)

As I wrote in my last post about the new Entlib 6.0, there is a helper class in the Semantic Logging Application Block named EventSourceAnalyzer that will help you check your class for multiple potential issues that may be hard to diagnose.

Let's see how to write a very simple unit test that will do a full check of your class:

    [TestMethod]
    public void when_inspecting_myCompanyEventSource()
    {
        EventSourceAnalyzer.InspectAll(MyCompanyEventSource.Log);
    }

As you can see, your custom class will implement a singleton (the static Log member), so this instance will be used by the analyzer to perform many checks and throw an error if it finds something wrong or not well implemented according to the EventSource guidelines.

This class also has an instance usage, where you can turn some knobs on/off to adjust the check behavior for custom scenarios. One of the performed checks is the type mapping between the event argument types and the WriteEvent call inside your event implementation.

A standard event may be defined as follows:

    [Event(1)]
    public void GuidNotMapped(Guid id)
    {
        WriteEvent(1, id);
    }

Now, in case you want to use one of the defined overloads of WriteEvent and avoid the cast to the params object[] overload described in section 5.6.2 of the StronglyTypedEvents doc, you may have something like this:

    [Event(1)]
    public void GuidMappedToString(Guid id)
    {
        WriteEvent(1, id.ToString());
    }

So in this case, you may use one of the provided knobs described above and bypass the type mapping check:

    [TestMethod]
    public void when_inspecting_myCompanyEventSource_with_typeMapping_off()
    {
        var analyzer = new EventSourceAnalyzer() { ExcludeWriteEventTypeMapping = true };
        analyzer.Inspect(MyCompanyEventSource.Log);
    }

I hope you find the EventSourceAnalyzer a good companion for your custom typed events. [...]
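For reference, a minimal event source following the singleton pattern exercised by the tests above could look like this (the class name, provider name and event are illustrative, standing in for whatever your own EventSource-derived class defines):

```csharp
using System.Diagnostics.Tracing; // EventSource lives here in .NET 4.5

[EventSource(Name = "MyCompany")]
public class MyCompanyEventSource : EventSource
{
    // Singleton instance, consumed both by callers and by EventSourceAnalyzer.
    public static readonly MyCompanyEventSource Log = new MyCompanyEventSource();

    [Event(1, Level = EventLevel.Informational)]
    public void AppStarted(string name)
    {
        // Guarding with IsEnabled() avoids argument evaluation cost
        // when no ETW session is listening.
        if (IsEnabled())
            WriteEvent(1, name);
    }
}
```

Keeping the constructor private-by-convention (only the Log field creates an instance) is what makes the analyzer's single-instance checks meaningful.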



Enterprise Library 6.0 is RTW (and Unity 3.0)

Fri, 26 Apr 2013 15:35:00 GMT

Finally, the latest version of Enterprise Library, 6.0, is out (after almost a decade of evolution since the first block).

A great summary of all the new features along with some insights can be found in this post from Grigori.

My plan is to write a series of posts with some additional information and samples for the brand new Semantic Logging Application Block, which leverages the new typed events in .NET 4.5 and the high-performance ETW tracing infrastructure.

So I guess that if you are new to typed events in v4.5 and want to start authoring them, you should probably stop by Vance's blog; he is an authority on this matter.

You will soon realize that creating a typed event class and starting to produce events is very easy. However, it is also easy to introduce hard-to-diagnose errors. So my first post about the "hidden" goodies in this new block (SLAB to friends) will show you how to get a good helping hand for authoring your custom EventSource classes.

One more link on CodePlex, with discussion forums and additional links as well. Also, Unity 3.0 has its own MSDN page.

 

Stay tuned!




Enterprise Library 6.0 and Semantic Logging

Sun, 17 Feb 2013 22:01:00 GMT

 Finally the Semantic Logging CTP from Enterprise Library 6.0 is out.

In this blog post Grigori presents an insightful overview of this new block, along with some comments on how it leverages the new features in .NET 4.5 and ETW.

Here are the Release Notes, which summarize the goal of this block, along with a great video that shows how to use it.

In future posts I will try to add some insight into the out-of-process mode in which you can use SLAB (the Semantic Logging Application Block).




Tracing to ETW with Enterprise Library v5

Tue, 13 Nov 2012 17:40:00 GMT

While I was looking for a way to do ETW tracing using the EntLib logging infrastructure, I found out that a handy trace listener like SystemDiagnosticsTraceListenerData, configured to use the built-in EventProviderTraceListener, was all that I needed. Just by setting the event provider ID, which is simply a GUID added to the initializeData attribute, I'm good to go.

Now you can simply use it by calling Logger.Write(message) or any other logging call. You can also add this listener to any category you want, and optionally use formatters as well.

Remember that you can activate your event provider by opening an "Admin" command prompt and running:

    logman create trace myApplication -p {D2FAAB3F-5D61-42B5-A014-D1A658BEE0B7} -o .\etwtrace.etl -ow
    logman start myApplication

You can check that your data collector set is up and running by inspecting with Performance Monitor (perfmon.msc) and looking into "Data Collector Sets\User Defined". There you can check many properties of your process and also verify the .etl file location, which by default is in the Windows\System32 folder, with a file name like "etwtrace_000001.etl".

After you are done with the tracing, simply end by typing:

    logman stop myApplication
    logman delete myApplication

The information saved to the .etl file may be consumed by different tools; we can also use tracerpt to generate an XML file and inspect the traced data, or analyze it with WPA (Windows Performance Analyzer). [...]
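The configuration snippet for this setup did not survive the page formatting; a rough reconstruction, assuming the standard EntLib v5 loggingConfiguration schema (listener name and category are placeholders, the provider GUID matches the logman example):

```xml
<!-- Sketch of the EntLib v5 logging config described above; the
     initializeData attribute carries the ETW event provider GUID. -->
<loggingConfiguration name="" tracingEnabled="true" defaultCategory="General">
  <listeners>
    <add name="EtwListener"
         type="System.Diagnostics.Eventing.EventProviderTraceListener, System.Core"
         listenerDataType="Microsoft.Practices.EnterpriseLibrary.Logging.Configuration.SystemDiagnosticsTraceListenerData, Microsoft.Practices.EnterpriseLibrary.Logging"
         initializeData="{D2FAAB3F-5D61-42B5-A014-D1A658BEE0B7}" />
  </listeners>
  <categorySources>
    <add name="General" switchValue="All">
      <listeners>
        <add name="EtwListener" />
      </listeners>
    </add>
  </categorySources>
</loggingConfiguration>
```

With this in place, any `Logger.Write(...)` call routed to the "General" category flows to the ETW session created by logman.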



WSSF 2010 Open Source for Visual Studio 2012

Wed, 19 Sep 2012 14:18:00 GMT

I just ported the Web Service Software Factory Modeling Edition (2010 Open Source) version to VS2012 RTM.

Notice that this is only the source code, which builds and runs from VS2012 (Windows 7 and Windows 8); there's no refactoring or leveraging of the new features in .NET v4.5.

You can download it from here. Don't forget to read the Known issues on the download page.




CQRS Final version shipped

Fri, 27 Jul 2012 16:39:00 GMT

The final version of the CQRS project is out!

After many optimizations added to the RI, the final code and book are finally public here.

Some further rich insights can also be found at Grigori's blog.

Enjoy it!




Acceptance Tests & Specflow

Tue, 15 May 2012 02:32:00 GMT

An interesting post about the use of Specflow and the project's acceptance tests was published on the CQRS project blog.

There you will find the captured experience of creating the test scenarios, and the two approaches used to implement the specs written with Specflow.

I will add further details on this experience in a future post while we get closer to the final version of the project.

Stay tuned!




CQRS Journey V1 is live

Wed, 09 May 2012 17:25:00 GMT

For the last two months I've been working with the CQRS Journey team, helping with the spec definitions and acceptance tests using Specflow, which was a very interesting experience.

As you can see, this is basically a project from the Patterns & Practices group that captures the experience of implementing the CQRS pattern in a reference implementation application.

You can find the freshest news and further details about this project on its blog.

In a future post I will highlight some of the experience and details of how we implemented the Acceptance Tests using Specflow within this project.

Stay tuned!




Web Service Software Factory 2010 Open Source CTP

Wed, 26 Oct 2011 11:01:00 GMT

As a follow-up to the WSSF Modeling Edition, there is a new "flavor" of this tool, where the main news is the versioning plan. The basic idea is to release the source code to the community in an "open source" model, so any user is free to update it and add new features.

As part of this move, there are some additions and fixes that you will get in this open source release (a CTP at the time of this post).

  • GAX free (no more dependency with GAX)
  • WCF Security Code Analysis added from WSSF v2 and updated to FxCop version from VS2010
  • Bug fixes
  • No more WSSF solution template (the WSSF menu options will show up on any compliant project and will let you add models and WCF/ASMX templates)
  • Translator recipe removed. Writing translators is up to the user or it might be added using MEF extensibility.

You can download it and get some other build steps from here.

Stay tuned for the "official announcement" in Don's blog.


 




WCF Data Services and Custom Authorization

Thu, 16 Jun 2011 10:33:00 GMT

In the last post, about Decoding Messages in WCF Data Services, I showed a code sample of how to decode an incoming WCF message in a Data Service. This time I will show how we can use this decoded message inside a ServiceAuthorizationManager-derived class to perform some authorization depending on the content (i.e. grant access to some entities according to the logged-on user). The following configuration sets an authorization manager using Windows authentication. [...]
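The configuration section itself was lost in this page's formatting; a sketch of what wiring up a custom authorization manager with Windows authentication typically looks like in WCF (the 'MyDataService.CustomAuthorizationManager' type name is a placeholder for your own ServiceAuthorizationManager-derived class):

```xml
<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior name="AuthBehavior">
        <!-- serviceAuthorizationManagerType points at the
             ServiceAuthorizationManager-derived class (placeholder name). -->
        <serviceAuthorization
            principalPermissionMode="UseWindowsGroups"
            serviceAuthorizationManagerType="MyDataService.CustomAuthorizationManager, MyDataService" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```

The derived class then overrides CheckAccessCore(OperationContext), where the decoded message content can be inspected before granting or denying access.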



Decoding messages in WCF Data Services

Wed, 15 Jun 2011 16:33:00 GMT

Recently I had to solve an interesting problem while working with WCF Data Services: how to decode an incoming WCF message so I can do something useful with its content.

Let's say that you have an already-configured Data Service and you intercept the message using some sort of extension/interceptor in WCF. Here is an example of such a function:

    private static string DecodeMessage(ref Message message)
    {
        if (message.IsEmpty)
            return null;

        using (MessageBuffer buffer = message.CreateBufferedCopy(Int32.MaxValue))
        {
            Message copy = buffer.CreateMessage();
            using (MemoryStream ms = new MemoryStream())
            {
                copy.Properties.Encoder.WriteMessage(copy, ms);
                ms.Flush();
                message = buffer.CreateMessage();
                return Encoding.UTF8.GetString(ms.ToArray());
            }
        }
    }

As you can see, after some basic message validation we create a copy of the message, to avoid changing the state of the original, which is typically passed by reference. Then we write all the content to a MemoryStream, which is handy for read/write operations. The key here is to use the encoder specified in the message properties, which may be MTOM for this kind of streamed message but might be different according to the configured binding. After writing all the message content to memory, we can read it back and create a string, so we can easily inspect the actual message.

If we know that we will always deal with XML content, we can use an XmlReader or the like to manipulate the content without the string conversion, which may carry a performance penalty on large messages. In the code above we return the content as a string because we may get an XML- or text-formatted message, so we can process it accordingly.

In the next post I will show how we can use this code to inspect a message and perform custom authorization with ServiceAuthorizationManager. Stay tuned! [...]
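One place where such a helper could be plugged in is a WCF message inspector. The sketch below assumes DecodeMessage is exposed on a hypothetical 'MessageHelper' class (the inspector and class names are illustrative, not part of the post's sample):

```csharp
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

// Illustrative only: an inspector that traces the decoded request body.
public class LoggingInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request,
        IClientChannel channel, InstanceContext instanceContext)
    {
        // DecodeMessage takes the message by ref because it replaces it
        // with a fresh copy after draining the buffered content.
        string body = MessageHelper.DecodeMessage(ref request);
        System.Diagnostics.Trace.TraceInformation("Incoming request: {0}", body);
        return null; // no correlation state needed
    }

    public void BeforeSendReply(ref Message reply, object correlationState)
    {
        // No-op: we only inspect incoming requests.
    }
}
```

Registering the inspector through an endpoint behavior then gives you the decoded body for every incoming call.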



Channel 9 presentation with the Composite Services Guidance

Mon, 11 Apr 2011 23:34:00 GMT

Take a look at this presentation if you want to get a good idea of what this project is, directly from Dmitri, the "father" of the creature.

And there are new videos for the Service Routing and Inventory Centralization patterns.




New Videos for Composite Services Guidance - CTP2

Thu, 31 Mar 2011 10:27:00 GMT

You can get the latest videos for Composite Services Guidance CTP2 that show many features of this project.

Videos Download

Video 1: Repair and Resubmit
Video 2: Analytic Tracing
Video 3: Custom Probe Function
Video 4: Termination Notification
Video 5: Service Routing
Video 6: Inventory Centralization
Video 7: Inventory Centralization Part 2 - Inventory Endpoint



Inventory Centralization Patterns with Composite Services CTP2

Wed, 16 Mar 2011 11:59:00 GMT

Recently my friend and coworker Dmitri Ossipov started his blog, with great posts about some of the details, and many useful links to the well-known and documented SOA patterns that this CTP includes.

Check it out for details on how the CTP implements one of the most public patterns, the Inventory Centralization pattern, in this case for Schema, Policy and Contracts.




Patterns & Practices: Composite Services CTP2 is Public

Wed, 09 Mar 2011 12:20:00 GMT

Finally, the last CTP and pre-release version of the Composite Services is out. There have been quite a lot of changes since CTP1. We added many new samples and many enhancements to the repository (DB), which is now called Inventory, in sync with the SOA Patterns.

Here is a brief list of the main changes, according to the included documentation. This CTP release contains reusable source code and samples to illustrate implementations of the following patterns and scenarios:

  • Repair and Resubmit – this pattern is implemented in ESB Toolkit 2.0 as part of the Exception Management Framework (EMF). This code drop provides a code sample showing how to implement the pattern for a Windows AppFabric workflow service, using the Exceptions Web Service and workflow activities to create a fault message, which will be recorded in the EMF database.
  • Analytic Tracing – this code drop contains reusable code and samples for implementing ETW tracing: an event collector service and a database that stores collected events. This capability may be used in scenarios that need flexibility in how collected events are decoded and processed, via extensibility points you can configure and implement (plugins and event decoders), while leveraging the ETW tracing capabilities provided by the event collector service.
  • Inventory Centralization – this code drop contains a service catalog database, web services and samples that show how to implement the Metadata Centralization, Schema Centralization and Policy Centralization patterns.
  • Service Virtualization – we included a sample implementing this pattern using the WCF routing service (which is part of the .NET Framework) and service metadata centralization capabilities to define the routing service metadata in the service catalog.
  • Termination Notification – we included a sample implementing this pattern using a sample WCF service and the policy centralization capabilities provided by this CTP release.

You will also find many new videos, which will be uploaded to the home page any time soon. Stay tuned for new posts regarding implementation details and advanced customizations for custom policy exporters/importers and monitoring.

UPDATE: Check further details on Dmitri's great blog here: Composite Services CTP2 Released and Applying Inventory Centralization Patterns with Composite Services CTP2 [...]



Charla Hijos Digitales

Tue, 09 Nov 2010 04:37:00 GMT

To download the presentation, click here.

The video of the talk is also available here.




SOA-based Composite Service Guidance CTP1

Fri, 29 Oct 2010 17:18:00 GMT

If you work with services and are familiar with SOA Patterns, you may want to take a look at the new project from patterns & practices for composing services.

As described on the home page this P&P project provides guidance for building enterprise SOA-based composite service applications. It provides design and implementation patterns for service discovery, composition and integration through written guidance, reference implementations and re-usable source code.

There are many features included in this package, and I will try to cover some of them in a series of quick notes showing how you can apply them in your applications. In the upcoming posts you may find areas like WS-Policy, and how to implement your own policies using the native WCF API, or some advanced tracing capabilities provided by the ETW infrastructure.

You can download the bits, the documentation, and also some briefing videos that give an overview of the main features here. I would suggest watching the videos if you want to take a quick glimpse, and then grab the bits and give it a shot.

Stay tuned for more details on this.