Ralf's Sudelbücher


Aspect-Oriented Programming made easy with Event-Based Components

Tue, 10 Aug 2010 08:06:17 GMT

Aspect-Oriented Programming (AOP) can help to clearly separate concerns. But it's kind of cumbersome to do without a tool, at least if you want to do it along the beaten path of object orientation. Things become much easier, though, when you switch your thinking to a different paradigm, at least during high-level design.

In a three-part series on AOP with Event-Based Components (EBC) I describe how this can be done. Easy AOP without a tool is possible. And it even makes your designs more understandable, I believe.

Part I introduces a scenario to apply AOP to: a text file indexing application. In Part II I show how to beef up the application with simple aspects like logging, exception handling, and validation. Then in Part III I use AOP to make the whole application multi-core friendly: some parts become asynchronous, others are run in parallel. And the domain logic stays oblivious to all that.
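For a flavor of what that paradigm shift looks like, here is a tiny, purely illustrative sketch (invented names, not the code from the series): EBC-style components expose input pins (methods) and output pins (delegates), so an aspect like logging can be wired in between without the domain logic knowing anything about it.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical EBC-style sketch: a domain component with an input pin
// (Process) and an output pin (OnTokens). Names are invented; this is
// not the indexing application from the article series.
public class Tokenizer
{
    public Action<string[]> OnTokens = _ => { };                     // output pin
    public void Process(string text) { OnTokens(text.Split(' ')); }  // input pin
}

// A logging aspect is just another component wired in between two pins;
// the Tokenizer stays completely oblivious to it.
public class LoggingAspect<T>
{
    public Action<T> OnMessage = _ => { };
    public List<string> Log = new List<string>();
    public void Process(T msg) { Log.Add("saw: " + msg); OnMessage(msg); }
}
```

Wiring the aspect in is one assignment, `aspect.OnMessage = tokenizer.Process;`, and the rest of the flow binds to `tokenizer.OnTokens` exactly as before.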

Come on over and read about EBC and AOP on The Architect's Napkin.

Event-Based Components – For Easier Software Design Say Goodbye to the Usual Object Orientation

Tue, 27 Jul 2010 07:13:00 GMT

I have come to feel very uneasy with the usual object orientation. It simply does not deliver on its promises. That's of course not the fault of object-oriented languages like C#, Java, or C++. It's the fault of those who use them in a way that leads their code straight into a brownfield. And it's the fault of those who cannot provide easy enough guidance in using these languages.

The situation is very dire. I have seen only very few developers who do not fear a clean whiteboard. When they are asked to draw the design for a piece of software, they hardly know what to do: where to start, what to do next, when and how to begin coding so that in the end an evolvable whole is created.

Class diagrams are definitely not enough as a design tool. Nor are three boxes stacked upon one another labeled “Layered architecture”.

Also Domain Driven Design (DDD) falls short of providing true guidance. It is full of valuable abstractions and concepts – but how to apply them to a software design task at hand?

Maybe this is bullshit to you and you find software design just plain easy and straightforward. You deem UML and OOA/OOD enough. Your software is based on OO principles and is highly evolvable. Then I congratulate you and would like to know more about how you´re accomplishing this.

But maybe, just maybe, my description resonates with you because you too feel uneasy when it comes to software design (as opposed to software implementation, aka coding). Then I'd like to invite you to follow along with a series of articles I'm writing in my architecture blog:

Over the past months I've tried out a different way of designing software. A less object-oriented one. One that's less dependency-focused, that does not need mock frameworks for testing anymore. One that tries to deliver value to the customer right away. One that makes it easier to derive software architecture from requirements. One that makes AOP a first-class citizen of any design. One that prepares designs for multi-core execution and distribution. And finally: one that keeps design and implementation in sync.
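The "no mock frameworks" point can be made concrete with a toy example (hypothetical code, not from the series): because a component publishes its result on an output pin instead of calling into an injected dependency, a unit test just plugs a lambda into that pin.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: instead of depending on an interface that a test
// would have to mock, the component pushes its result out through a
// delegate "pin". Any consumer, including a test, just subscribes.
public class PriceCalculator
{
    public Action<decimal> OnTotal = _ => { };   // output pin

    public void Process(IEnumerable<decimal> itemPrices)   // input pin
    {
        decimal total = 0;
        foreach (var price in itemPrices) total += price;
        OnTotal(total);
    }
}
```

A unit test becomes three lines: assign a lambda to `OnTotal`, call `Process`, and assert on the captured value — no mock framework involved.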

Sounds too good to be true? Well, see for yourself. It doesn't cost you anything. Just hop over to The Architect's Napkin – and suspend your disbelief for a couple of days while I'm writing up my current view of what I so far call Event-Based Components.

I´m looking forward to fruitful discussions with the community.

Doing asynchronous distributed request/response service calls without WCF

Mon, 15 Mar 2010 17:47:37 GMT

In my previous blog post I hopefully was able to demonstrate how low the entry barrier to asynchronous remote communication is. It's as easy as hosting a service like this:

    using(var serverSpace = new CcrSpace().ConfigureAsHost("wcf.port=8000"))
    {
        serverSpace.HostPort(
            serverSpace.CreateChannel<string>(t => Console.WriteLine(t)),
            "MyService");

and connecting to such a service like this:

    using(var clientSpace = new CcrSpace().ConfigureAsHost("wcf.port=0"))
    {
        var s = clientSpace.ConnectToPort<string>("localhost:8000/MyService");

        s.Post("hello, world!");

Under the hood this is net.tcp WCF communication like Microsoft wants you to do it. But on the surface the CCR Space in conjunction with the Xcoordination Application Space provides you with an easy-to-use asynchronous API based on Microsoft's Concurrency Coordination Runtime.

Request/Response

Calling a service and not expecting a response, though, is not what you want to do most of the time. Usually you call a service to have it process some data and return a result to the caller (client). So the question is: how can you do this in an asynchronous communication world? WCF and the other synchronous remote communication APIs make that a no-brainer. That's what you love them for. So in order to motivate you to switch to an async API I need to prove that such service usage won't become too difficult, I guess.

What do you need to do to have the server not only dump the message you sent it to the console, but return it to the caller? How do you write the most simple echo service? Compared to the ping service you saw in my previous posting this is not much of a difference. Here's the service implementation:

    public static string Echo(string text)
    {
        Console.WriteLine("SERVER - Processing Echo Request for '{0}'", text);
        return text;
    }

Yes, that's right.
It's a method with a return value, just like you would have used in a WCF service. Just make sure the request and the response message types are [Serializable]. And this is the code to host the service:

    using(var serverSpace = new CcrSpace().ConfigureAsHost("wcf.port=8000"))
    {
        var chService = serverSpace.CreateChannel<string, string>(Echo);
        serverSpace.HostPort(chService, "Echo");

The only difference is in the two type parameters to CreateChannel(). They specify that this is a request/response channel. The first type parameter is for the request type, the second for the response type.

When does it start to become difficult, you might ask? Async communication is supposed to be difficult. Well, check out the client code. First it needs to connect to the service:

    using(var clientSpace = new CcrSpace().ConfigureAsHost("wcf.port=0"))
    {
        var chServiceProxy = clientSpace.ConnectToPort<string, string>("localhost:8000/Echo");

This isn't difficult, or is it? Like before with the one-way ping service, the client just needs to specify the address of the service. But of course the local channel needs to match the remote channel's signature. So the client passes in two type parameters for the [...]
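The pattern behind such a request/response channel can be sketched without any of the Spaces libraries. Below is a minimal, purely illustrative stand-in (all names are invented, this is not the CCR Space API): the request message carries its own response port, so the service can answer without the caller ever blocking.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Hypothetical sketch, NOT the CCR Space API: a request message carries
// its own response port (a callback), so the echo service can reply
// asynchronously to whoever posted the request.
public class Request
{
    public string Text;
    public Action<string> Respond;   // the caller's response port
}

public static class EchoService
{
    public static readonly BlockingCollection<Request> Port = new BlockingCollection<Request>();

    // Runs on its own thread and drains the port until it is closed.
    public static void Run()
    {
        foreach (var req in Port.GetConsumingEnumerable())
            req.Respond(req.Text);   // echo the request back
    }
}
```

A client posts `new Request { Text = "hello", Respond = s => ... }` to the port; the answer arrives asynchronously on the callback, never on the posting call itself.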

Easy remote communication without WCF

Mon, 15 Mar 2010 09:45:02 GMT

If you've read my previous posts about why I deem WCF more of a problem than a solution and how I think we should switch to asynchronous-only communication in distributed applications, you might be wondering how this could be done in an easy way. Since A truely simple example to get started with WCF still is drawing quite some traffic to this blog, let me pick up on that and show you how to accomplish the same, but much easier, with an async communication API.

For simplicity's sake let me put all the code of client and server in just one file. In the following sections I'll walk you through this file. The service it implements is simple. I start with a ping service: the client sends a text message to the server; the server dumps the text to the console.

The implementation is based on Microsoft's CCR and two layers of abstraction encapsulating the CCR as well as WCF. See, I'm not saying WCF is bad per se; I just don't want to use it in my application code. As a low-level communication infrastructure WCF is just great. The two layers of abstraction on top of the CCR and WCF are the CCR Space and the Xcoordination Application Space (or AppSpace for short). Both are open source. The CCR Space wraps CCR concepts and the AppSpace to make it easier to work with them. And the AppSpace enables you to use CCR ports for remote communication. From now on I'll refer to both as "the Spaces" or "Space based communication". If you want to follow along with my samples, download the binaries from the CCR Space project site and reference CCRSpace.dll as well as microsoft.ccr.core.dll.

Service implementation

With the Spaces it's easy to define a service. No special service contract is needed. You can use any method with just one parameter and no return value. The parameter type needs to be serializable, though. Use the [Serializable] attribute for your own message types.
    using System;
    using System.Threading;

    using CcrSpaces.Core;

    namespace AsyncSimple.Server
    {
        public class Program
        {
            public static void Ping(string name)
            {
                Console.WriteLine("SERVER - Processing Ping('{0}')", name);
            }

Of course there are ways to define services with several methods (or, to say it differently, services which can process different message types). But for now let's keep the service as simple as possible, like I did in the WCF sample.

Service hosting

Next you need to host the service. For that you create a CCR Space (CcrSpace) and configure it as a host using a certain transport layer. I'm using WCF with net.tcp binding as the transport, but I could also have used raw TCP sockets, or WCF with named pipes, or Jabber, or MSMQ. By passing "wcf.port=8000" to the configuration method the transport layer type is selected and the listening port specified.

    public static void Main(object state)
    {
        using(var serverSpace = new CcrSpace().ConfigureAsHost("wcf.port=8000"))
        {
            var chService = serverSpace.CreateChannel<string>(Ping);
            serverSpace.HostPort(chService, "Ping");

            Console.WriteLine("SERVER -[...]

Becoming asynchronous – The first step towards distributed applications

Sun, 21 Feb 2010 10:41:01 GMT

In my previous blog post I argued WCF was not the most usable and most easy to learn way of communication in distributed applications. This is due to its focus on synchronous communication (even though you can do asynchronous communication as well). Distributed applications by their very nature cannot communicate synchronously. Their parts are running on different threads (or even machines). And communication between threads is asynchronous. Always. And if it appears otherwise, then there is some machinery behind the scenes working hard on that.

Now, if distributed functional units are running on different threads, why would anyone even try to hide that fact? Why do Web services, Enterprise Services, .NET Remoting, and also WCF abstract away this inherent parallelism? Well, it's because asynchronous communication and asynchronous systems are harder to reason about.

Great, you could say. That's what I want: easy to reason about systems. Isn't that the purpose of abstraction? Making complex things easy? Well, sure. We love C# as an abstraction for how to command a processor. Nobody would want to go back to low-level assembler coding. But not just any abstraction is a good abstraction. Some abstractions are, well, “unhealthy” or plain wrong. Think of the abstraction “man is like a machine” and all its implications for the health industry. Abstractions usually have a limited scope. Treating humans like a machine when “repairing” a wounded finger might be ok. But when treating cancer, physicians surely should view their patients differently.

So the question is: when to stop using an abstraction? Or whether to use it at all. That's what I'm asking myself regarding WCF. When should I stop using it and start coping with the asynchronous nature of distributed systems myself? My current answer is: I should not even start to do remote communication with any synchronous API. And my reason is simple: the communication paradigm (sync vs async) is very fundamental to any application.
Switching it in mid course is usually very hard. Async communication is the sine qua non for truly scalable systems. Our industry is notoriously bad at foreseeing the course of development of applications. Many start small, as a prototype, just to become mission critical with the need to scale far beyond the initial purpose. (I'm sure you too know at least one of those MS Access applications that have become unmaintainable but need to live on because today maybe hundreds of people are depending on them.)

Taken together this means: systems start small, even when distributed. So they use a sync communication API. Then they grow, need to scale, and thus need to switch to async communication, which by then is probably infeasible or very hard to accomplish.

So when I'm saying, “You should not even start to do remote communication with any synchronous API”, I'm just trying to apply lessons learned. If we know switching from sync to async is hard, and if it's very likely this switch will be necessary some time in the future (because, well, “you never know” ;-)... then, I'd say, it's just prudent not to go down the path that might be easier at the moment but will be much harder in the future.

There's a time and place to do synchronous communication between functional units. And there's a time and place to do asynchronous communication. The latter is always the case when developing multi-processor code or distributed code. The earlier a developer learns to see this boundary, the better.

Enough with theory for now. Let's get our hands dirty with some code.

Switching from sync to async using the CCR

How can you start with asynchronous programming? Let me say that much: don't even try to manage threads yourself! Instead leave that to some framework or runtime. Concentrate on an easy programming model. The programming model I'm going to use is purely message oriented and based on "pipes" th[...]
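To make the "pipe" idea concrete without pulling in the CCR itself, here is a minimal sketch of a message port in plain .NET (an illustration only, not the CCR API): you post messages, a hidden worker drains them, and you never touch a thread yourself.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical sketch of the "port" idea (not the real CCR API): posting
// is fire-and-forget; a single worker drains the queue, so the handler
// never needs its own locking.
public class Port<T>
{
    private readonly BlockingCollection<T> queue = new BlockingCollection<T>();

    public Port(Action<T> handler)
    {
        // One background worker per port; the framework, not the
        // application code, owns the threading.
        Task.Run(() =>
        {
            foreach (var msg in queue.GetConsumingEnumerable())
                handler(msg);
        });
    }

    public void Post(T msg) => queue.Add(msg);
    public void Close() => queue.CompleteAdding();
}
```

Usage then looks like the ping service: `var ping = new Port<string>(s => Console.WriteLine("Ping('{0}')", s)); ping.Post("hello");` — the caller returns immediately, processing happens elsewhere.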

WCF is not the solution but the problem

Sat, 20 Feb 2010 03:43:08 GMT

The title of this post has caught your attention? So let me explain what I mean by it.

I think WCF is great. It's the best communication framework Microsoft has ever come up with. WCF solves a lot of problems of its predecessors, it is tremendously extensible, and it supports the main communication paradigms (sync, async, P2P). WCF certainly is a solution of some kind. But to whom? Who benefits from all these capabilities of WCF? Is it the average application programmer? I doubt it.

Remember when Webservices and .NET Remoting first came out? The Microsoft developer community rejoiced. What a relief compared to sophisticated COM+ and raw TCP socket programming. Developers were so glad remote communication finally became easy, usable. Unfortunately this did not last long. Webservices hit their limits and needed to be amended with tons of WS-* standards. .NET Remoting hit its limits when it turned out that remote stateful objects really were a bad idea in terms of scalability. So Microsoft set out to unify remote communication and to close any conceptual holes. Out came truly message oriented WCF, the one-size-fits-all communication API.

Well, great. I love WCF for this. But I don't like it for two things:

I strongly believe that remote communication should always be asynchronous. Yes, always. Because that's how autonomous systems communicate. And if you deny that fact you run into all sorts of problems. It's a case of LOLA. So WCF is betting on the wrong horse with its focus on synchronous communication. (And I say “focus” because you can do async communication with WCF. But do you really want to?)

WCF tries to be a Jack of all Trades. Its drive to unification is so strong it has lost sight of usability. Webservices and .NET Remoting had so much appeal because they promised to finally make remote communication simple. But then they failed. Not because their APIs did not live up to the promise of simplicity, but because they too bet on the wrong horse: synchronicity.
Let me say it like this: WCF is great in the hands of experts. But Microsoft should not expect developers to become WCF experts en masse.

I feel reminded of the early 1990s. Back then there was the Windows API for doing Windows programming. And you could do cool stuff with it. But, let's be honest, you had to be an expert to do that. And even Charles Petzold had not been able to turn masses of developers into expert Windows GUI programmers. Enter VB 1.0. And the world changed completely. Suddenly just about any developer was able to do Windows GUI programming. Maybe not all outcomes were as beautiful and usable as their creators thought, but anyway: Windows GUI programming had become a mass phenomenon almost overnight.

Today WCF to me seems to be the new Windows API. So much is possible - if you're an expert. But even Don Box and Juval Löwy together won't be able to turn developers into WCF experts en masse. That's why I think WCF is not the solution but the problem. By its very nature it stands in the way of solving the remote communication problems for Joe Developer. It has made it much easier to do certain things for many developers. But still the average developer is baffled by its complexity. Almost three years ago I wrote a small blog article on how to start using WCF: A truely simple example to get started with WCF. And since then it's been one of the most popular articles in this blog. Why's that so? Because WCF is so darn difficult to learn, you really need all the help you can get.

That said, what can we do? Put up with WCF? I don't think so. We need the equivalent of VB for the world of remote communication. We need some layer of abstraction on top of WCF like VB was on top of the Windows API. WCF is here to stay. Great! But as an application developer I certainly don't want to need to use WCF directly. To an application programmer WCF, like TCP sockets, is a problem, not a so[...]

Conscious Incompetence - The need to transcend conventions

Wed, 17 Sep 2008 07:09:50 GMT

In his recent blog posting Seth Godin once again questions the value of competence. Sure, he does not want people to become dumber. He just argues that sole reliance on competence as a compass to navigate the future can - well - be a hindrance. He's written about it already in 1999 and made clear that competence is about accomplishing something on the basis of existing knowledge - and thus is different from finding new ways of doing stuff. Whoever is competent is not necessarily innovative or imaginative. But that's what we need in the face of constant change. If the environment keeps changing you need to constantly adapt. Adaptation means trying out new ways of coping with the environment - hopefully finding better ways to deal with it than in the past. So adaptation needs innovation, not competence.

To understand what Seth Godin means and what I see as necessary for the software industry, let me put the argument about competence into perspective:

In the beginning of any issue there is incompetence. People have a hard time getting things right. They need to build up competence. They need to gather a body of knowledge and rules. Conventions need to be established on how to most effectively reach desired results. This is the pre-phase of any issue. It's pre-conventional.

Then there is a long phase of competence. It's about rules, regulations, canonicality. Conventions rule, so to speak. There is a way to do things right. To become competent you learn to adhere to the rules. Whoever knows and executes the rules best is most competent. If you don't know the rules, you're incompetent - say the competent ones. It's a phase of duality. The good are the competent people; all others are the bad who need to be converted (or just fought). To the competent ones this phase is the pinnacle of development. But, alas, the competence phase is just a phase. Although many can live in it pretty well, in the end it's a dead end. Innovation is hard under a regime of competence driven people.
Enter the next phase: after competence comes... conscious incompetence. Transcending competence is about knowing when it's right to apply the rules - and when not. Whoever "is trans" (and not just competent) knows the rules, but feels free to abide by them or not. He knows about the reasons behind the rules, their history, the conditions that once formed them. So if conditions change he can step over any no longer fitting rules and start anew as a "pre". The cycle of pre -> conventionalism -> trans starts again. And with it begins innovation.

Becoming trans might not be for everyone in the competence phase. But at least the competence people should recognize the importance of stepping up. So they should allow for people to become trans and move on. They should even foster trans-formation.

Not seeing beyond the pre- or conventionalism-phase is falling prey to the pre-trans-fallacy. It's either asserting there is nothing beyond competence. Or it's asserting to already be trans. The latter might be more dangerous, because it mostly mixes up being pre with being trans. "No rules" is true in the pre and the trans phase - but for different reasons. Whoever is pre denies the rules or the necessity of any - just per se. But whoever is trans has gone through learning the rules but sees their limitations - and thus does not feel compelled to abide by them. However, being trans means to empathically admit the (passing) phase of conventionalism.

So if we want to move on in the software industry we need to be conscious of not falling into the pre-trans-fallacy trap! Otherwise we might get stuck with our software projects in the ever changing morass of technologies and requirements.

PS: If you want to read more about the pre-trans-fallacy, try to google it. But never mind the context of spirituality and esoteric thinking. Although the fallacy was first pointed out in those circles, that does not mean it can't be applied to technical issue[...]

What's in a Book?

Fri, 25 Jul 2008 21:58:46 GMT

As I read Kevin Kelly's "Fate of the Book" I come to wonder what this debate he's referring to is all about. Is it about form or content? Is it about texts as opposed to video or audio? Is it about texts of a certain minimum length and/or structure as opposed to text snippets? Or is it about a certain physical container for texts as opposed to digital texts? Or is it about certain types of physical containers?

Until digital word processing it was pretty clear what a book was: a text longer than a couple of pages, bound and put between covers. Text of a minimum length in a certain physical form made a book. Since then, though, because we all write texts using word processing software and don't need to print them out anymore to have others read them, what a book is has changed. Or at least if you talk about books you need to be more specific about what you mean.

Today, I'd say, a book can be at least two different things. It can be the traditional book as described above; go to a bookstore of your choice and you'll find thousands of them there. Or a book can be just a digital text you call a "book". It could be just 10 pages with a single sentence on each page, or it could be 500 pages full of small-print text. If you assume this point of view, it's pretty much up to you what you call a book.

Well, before you call a digital text a book, I think, something more needs to be added. Just text is not sufficient. Otherwise any blog posting like this would be a book. If you want a text to be a book, you need to prep it up a little bit. You need to make it print-ready. It should be typeset on electronic sheets of paper; also it should sport at least a title page. But other than that... pretty much any text can be called a book. Because, if you can print it and bind it, well, it becomes a "traditional" book. So my bottom line is: essentially the book is in the eye of the beholder. Take any text you like, print it out, bind it, voilà, there's your book.
But that's certainly not what the debate is about. What is in question is: what's the fate of texts longer than a couple of pages? And what's the fate of the physical form of the book - regardless of whether it contains 5, 50, or 500 pages?

Physical Books - Quo Vadis?

It's difficult, but I'll try to abstract from my personal taste. I like physical books. But just because I like them they don't need to exist indefinitely. So if I try to subtract my emotional attachment from the picture, what's left?

I think the benefit of having a physical book in one's hands is underestimated. Reading a book is more than "taking in" a text by scanning pages full of letters. It's like following a conversation right in front of you. There are not just words, but real people who send signals on different "channels": how they look, how they move, how their whole body language is. Following a conversation in a conference call is much more difficult. Likewise reading a text online is more difficult than reading it printed out. And reading it on just a couple of loose pages that came out of a desktop laser printer is more difficult than reading it as a book. A book provides a context; it provides input through more than the visual sense. A physical book makes a text tangible - even more than some printouts. It literally manifests the thoughts behind a text.

If we realize this, we realize the age of the physical book is not over. Because there will always be texts which highly benefit from being "taken in" with more than the visual sense. And we should not think ePaper or Amazon's Kindle are a danger to physical books. They simply don't provide the total sensual input of a physical book. Whoever wants to read most effectively and with most pleasure will always want a book in his/her hands.

Digital Books - Quo Vadis?

Digital books, i.e. digital texts of a certain length and form, as opposed to[...]

New blog on software architecture - The Architect's Napkin

Thu, 12 Jun 2008 15:40:33 GMT

Since I'm mostly concerned with software architecture and my clients are asking again and again when I'm going to write a book about the topic, I finally decided to set out and compile the material to go into the book. And I decided to do it publicly, in a new blog.

Not that I haven't done that before, here and in my German blog. But now I'll try to be more comprehensive, put everything in a single place, and add some new stuff I have not written about before. Plus, through a blog all's open for discussion.

So if you like, have a look at The Architect's Napkin. That's the title of my blog, because I think software architecture is not an arcane art to be practiced by just a chosen few in ivory towers, but can and needs to be practiced by almost all developers at some point in time. So it had better be easy - and what could be easier than something that can be done on the back of a napkin?

So the architectural images you'll see in the blog are like this


or like this


None will be more complicated than whatever fits on a napkin in a readable and understandable way. I strongly believe in the power of visualization; and I believe that any minute invested by an architect into a simpler depiction will save his developers hours of head scratching.

Hope to see you over there at the bar. I'll be there sketching some architectures while sipping a cocktail...


Software Architect 2008 Sample Code

Fri, 06 Jun 2008 16:53:00 GMT

Please find the sample code for my presentations at Software Architect 2008 on Aspect-Oriented Programming with PostSharp and Software Transactional Memory with NSTM here for download:

If you have any questions, feel free to contact me by email.


Component orientation explained - Modern software development viewed from a musical perspective

Sun, 02 Mar 2008 21:36:28 GMT

You're fluent in object-oriented programming. But now and again you're wondering what the fuss about component orientation is? There is supposed to be more to it than just using 3rd party controls in your user interfaces. But what, and how? Component orientation is about higher productivity, easier maintenance, better testability, more flexibility, and - if you're fond of it - reusability. But how's that? How does component orientation reach all those lofty goals? The trick is pretty simple: component orientation takes the basic design principle of loose coupling very seriously. But instead of now explaining contract-first design (CfD), IoC containers, binary code units, and component workbenches, let me demonstrate component orientation in a more tangible way with a musical example.

Requirements

The requirements for my musical project are: produce a recording of the simple piece "Bell-ringers" (Source: "Bell-ringers" by Katherine and Hugh Colledge, (c) 1988 by Boosey and Hawkes Music Publishers Ltd.) as depicted below.

The requirements are clear, since a complete requirements document has been provided by the imaginary customer. Additionally the customer has stated he'd prefer the piece played on the violin. However... although I can read and understand the requirements, I can't play the violin. But that's not really much different from software development, is it? Often a solution needs to be developed with a technology you're not familiar with; or you are supposed to adopt a process you've no experience with. So I guess the requirements are pretty realistic, even though it's just an analogy.

Product

Out of didactic considerations let me now already present the product I developed according to the above requirements.
So first just listen to my "Bell-ringers" recording. Maybe it's not exactly what you or the customer expected, but it's on its way to what a true violin expert like Nigel Kennedy would have produced ;-) In that, though, it's also close to reality, isn't it? Who has ever given a customer what she expected for a first release?

Component oriented development

Now that you know the requirements and what I delivered to the customer, let me take you backstage. How did my component oriented development process work?

1. Decomposition into components

First I determined the components the final product should be built from. For you to understand this, let me define component in a somewhat unorthodox way as: a component is a part of the product that can be produced independently of other parts. For a musical composition like "Bell-ringers" these parts or basic building blocks are all the different musical notes. I identified a, h, c'#, e, d', a', g'#, f'# and e' to be needed for a "Bell-ringers" production. To the right you see my original "analysis document".

In order to compose something from such components, though, more is needed: the relationships between the components have to be clear. Components are not just dumb parts but serve a purpose. They provide a service to other components. So here's an addition to the above definition of component: components have a clear specification as to what services they provide and which other components' services they depend on. This specification needs to be separate from any component's realization.

Unfortunately here the analogy somewhat breaks down. The relationships between musical notes are obvious from the musical score and are very, very simple. Their services are self-contained, so to speak. Nevertheless there are relationships between the musical components. Each musical component (note) has a predecessor and a successor. That's at least two relationships. And there can be more, e.g. in a chord with several musical notes played at the same time.

2. Component implementatio[...]

NSimpleDB - Use Amazon´s SimpleDB data model in your applications now - Part 4

Mon, 28 Jan 2008 11:17:58 GMT

As explained in my previous postings, I implemented a local/embeddable version of the Amazon SimpleDB data model and API in C#. You can download the sources from my NSimpleDB Google Code project and build the tuple space engine yourself, or you can download the demo application which includes the engine as a single assembly: NSimpleDB.dll. Using the SimpleDB API then can be as easy as referencing the engine assembly and opening a local tuple space file like this:

using NSimpleDB.Service.Contract;

ISimpleDBService ts;
ts = new NSimpleDB.Service.VistaDb.VistaDbSimpleDBService("hello.ts");
...
ts.Close();

See my previous posting for detailed examples.

Access to Amazon SimpleDB

The API I devised for the SimpleDB purposely was quite "service oriented", although the implementation was just local. I did this so my implementation and Amazon´s eventually could be used interchangeably. Back then, though, I did not have access to SimpleDB due to the limited beta. But that has changed in the meantime. I was able to use SimpleDB online and thus have now implemented access to it through the same ISimpleDBService interface. Just instantiate a different service implementation:

ts = new NSimpleDB.Service.Amazon.AmazonSimpleDBService("", "");

Instead of the placeholders pass in your Amazon access key id and your secret access key and you´re done. From then on all operations through the interface will run on your online SimpleDB space. Here´s a small example of what you could do: create a domain, store some items into that domain, query the items, retrieve the found items, delete the domain.
ts.CreateDomain("mydomain");

ts.PutAttributes("mydomain", "1",
    new SimpleDBAttribute("name", "peter"),
    new SimpleDBAttribute("city", "london"));
ts.PutAttributes("mydomain", "2",
    new SimpleDBAttribute("name", "paul"),
    new SimpleDBAttribute("city", "berlin"));
ts.PutAttributes("mydomain", "3",
    new SimpleDBAttribute("name", "mary"),
    new SimpleDBAttribute("city", "london"));

string nextToken = null;
string[] itemNames;
itemNames = ts.Query("mydomain", "['city'='london']", ref nextToken);

foreach (string itemName in itemNames)
{
    Console.WriteLine("item: {0}", itemName);

    ISimpleDBAttribute[] attributes;
    attributes = ts.GetAttributes("mydomain", itemName);

    foreach (ISimpleDBAttribute attr in attributes)
        Console.WriteLine("  {0}={1}", attr.Name, attr.Value);
}

ts.DeleteDomain("mydomain");

(With regard to my previous API descriptions only a minor change has occurred: you now always need to pass in the nextToken as a ref parameter instead of an out parameter. But that´s a detail, I guess.) This code runs locally as well as against SimpleDB online without change. Under the hood I´m using Amazon´s own C# API for accessing SimpleDB.[...]

NSimpleDB - Use Amazon´s SimpleDB data model in your applications now - Part 3

Sat, 19 Jan 2008 21:05:21 GMT

In my previous postings about Amazon´s SimpleDB data model and API I explained what Amazon´s online database service - or to be more precise: tuple space - has to offer in general. If this sounds interesting to you, then welcome to the desktop. Because it´s the desktop on which you can actually experience what it´s like to use such a tuple space. SimpleDB currently (as of Jan 08) is just in limited beta and you have to line up to get one of the limited test accounts. But you don´t need to wait for Amazon to open up more. I implemented the SimpleDB data model and API in C# for you to integrate in your desktop or web applications. It´s an Open Source project at Google Code called NSimpleDB, short for .NET SimpleDB. I´d say it pretty much offers all the features of SimpleDB - but as an embeddable database engine instead of an online service.

Installing NSimpleDB

There are two ways for you to use NSimpleDB: Either you download the source code from the subversion repository of the NSimpleDB site. Then you can browse the sources and compile them yourself. But please note: you also need to install the VistaDb database engine. NSimpleDB internally is based on VistaDb and needs its libraries. As you can imagine I did not want to develop a whole persistence engine just to implement SimpleDB´s tuple space. But you can download an eval copy of VistaDb and need not fear incurring any costs right away. Also, VistaDb is working on a free community edition, which I will of course use for NSimpleDB once it´s available. Or, if you don´t want to mess around with the NSimpleDB source code, you can download the small demo application from the download area, which comes with a complete NSimpleDB engine as just one assembly: NSimpleDB.dll.

Using NSimpleDB

If you´re using the precompiled version of NSimpleDB just reference NSimpleDB.dll from your .NET project.
If you compiled NSimpleDB yourself, though, reference NSimpleDB.Service.Contract.dll as well as NSimpleDB.Service.VistaDb.dll from the global bin folder of the source code tree. Also be sure to either have VistaDb installed on the same machine or copy VistaDB.NET20.dll into your project´s output folder. In any case you then should "open" the following namespaces in your source code:

using NSimpleDB.Service.Contract;
using NSimpleDB.Service.VistaDb;

Opening a NSimpleDB database

To work with NSimpleDB you need to manage a connection to a database file like with any regular RDBMS product. You can do it with the method pair Open()/Close() like this:

VistaDbSimpleDBService ts = new VistaDbSimpleDBService();
ts.Open("hello.ts");
...
ts.Close();

Just be sure to call Close() at the end. Pass in any name you like to give to your database file; NSimpleDB does not require a certain filename extension. Or you can rely on the compiler to generate the code to automatically close the connection, since VistaDbSimpleDBService implements IDisposable:

using (VistaDbSimpleDBService ts = new VistaDbSimpleDBService("hello.ts"))
{
    ...
}

Working with domains

In order to store any data in your NSimpleDB tuple space you need to create a domain first. It´s as simple as this: ts.CreateDomain("contacts"); No return value, nothing. This operation is idempotent: creating a domain that already exists has no effect. From then on you can use the domain name in other operations. To delete a domain, call the opposite operation: ts.DeleteDomain("contacts"); This operation also is idempotent, don´t worry. And it´s asynchronous: although the domain will become inaccessible right away for future operations, current operations are not interrupted and the actual deletion will take place at a later time. Reflecting on domains then can be as easy as this: string[] domainNames; domainNames = ts.ListDoma[...]
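The idempotent create/delete semantics can be pictured with a plain in-memory set. This is a hypothetical sketch of the bookkeeping only, not NSimpleDB´s actual implementation (which persists domains via VistaDb):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: domain bookkeeping as a set of names.
var domains = new HashSet<string>();

void CreateDomain(string name) => domains.Add(name);    // no error if it already exists
void DeleteDomain(string name) => domains.Remove(name); // no error if it´s already gone

CreateDomain("contacts");
CreateDomain("contacts");         // second call is a no-op
Console.WriteLine(domains.Count); // prints 1

DeleteDomain("contacts");
DeleteDomain("contacts");         // deleting twice is harmless, too
Console.WriteLine(domains.Count); // prints 0
```

Calling either operation any number of times leaves the "data space" in the same state as calling it once - that´s all idempotence means here.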

NSimpleDB - Use Amazon´s SimpleDB data model in your applications now - Part 2

Sat, 19 Jan 2008 08:14:07 GMT

Amazon´s SimpleDB is an exciting new player in the database world. It´s free, it´s online, it´s not relational. SimpleDB is a dynamic database implementing a tuple space. Currently (as of Jan 08) SimpleDB is in beta - but not everyone can get their hands on it. You have to apply and line up for one of the limited test accounts. Nevertheless it´s worthwhile to take a closer look at SimpleDB. It´s a brave step forward by Amazon to offer an online database (accessible via a web service) that´s deviating from the mainstream data model of RDBMS. In part 1 of this series of postings I described this data model: You store tuples (aka items) consisting of name-value pairs (aka attributes) in a SimpleDB "data space" without the need for any configuration. No schema design necessary. No tuple needs to look like any other. Just so-called domains are a structuring concept to group tuples. But it´s nowhere written that you have to use more than one domain. Even different kinds of items don´t force you to distribute them across domains. Domains that way are more of a concern regarding scalability and the quantitative constraints Amazon puts on them.

A simple SimpleDB API

The data model of SimpleDB is simple, and so is its API. It´s not based on a query language (although it provides set selection, see below), but rather follows the tuple space concept in that it defines just a small number of methods to read items from and write items to the "data space". In the following I´ll use pseudo code to describe the API. I think it will be pretty much self-explanatory. In reality Amazon offers a web service to work with SimpleDB, so you´ll use some kind of proxy class in your code. Amazon even published a .NET binding - which hasn´t gotten rave reviews so far, though. There is much room for improvement.

Attributes as smallest data units

The smallest piece of data with SimpleDB is an attribute. An attribute is a name-value pair like "Name"("Peter") or "Amount due"("000000300.00") or "DOB"("2000-05-12") or "Marked for deletion"("1").
As you can see, values are just strings. It´s like with XML. Attribute names are also strings - and they can contain white space. This makes them easier to read and use as labels in frontends. In addition - and in stark deviation from the relational data model - attributes can have multiple values, e.g. "Phone numbers"("05195-7234", "040-413 823 090", "0170-233 4439"). Amazon suggests you don´t try to store large pieces of data in attributes, e.g. a multi-MB image. Rather you should put such byte-blobs into some other store - e.g. a file on an FTP server or Amazon´s S3 - and use the attribute value as a reference.

Items as containers for attributes

Attributes belong to items. In principle items can contain any number of attributes, but Amazon puts some limitations on them. Currently only 256 attributes are allowed in each item. Items can be written as tuples and are identified by an explicit id you have to provide, e.g. "123"["Name"("Peter"), "City"("Berlin")]. The id is called "item name" and again is a string. As you can see, attributes are tuples with unnamed elements, but items are tuples whose elements are named.

Domains as containers for items

Items are stored in domains. Like items, domains have an id, the domain name. No schema needs to be defined for them. Just pour items of any structure into them as you like, e.g. "contacts"{"123"["Name"("Peter"), "Addresses"("a", "b")], "a"["City"("London"), "Country"("GB")], "b"["City"("Hamburg"), "Country"("Germany")]}. As you can see, domains are tuples, too. Their elements are named tuples, the items.

Writing data

Roughly you can say: domains are like tables, items are like records in a table, attributes are table columns. So storing data with Simple[...]

NSimpleDB - Use Amazon´s SimpleDB data model in your applications now - Part 1

Fri, 18 Jan 2008 19:37:52 GMT

Have you heard about Amazon´s online "database service" SimpleDB? They describe it like this: "Amazon SimpleDB is a web service for running queries on structured data in real time." So it´s not a RDBMS, because Amazon does not call the data "relational", but just "structured". And you use a web service based API to access the data, not good old ADO.NET. Currently SimpleDB is in beta. You can get a test account to play around with it - if you´re patient. As of this writing (Jan 08) evaluation is limited; you need to apply and queue up to be assigned a test account. I applied about 2 weeks ago, but haven´t heard from Amazon since then. But why should you care? Well, SimpleDB would allow you to store data in a database without any setup costs. You don´t have to care about backups or moving to another ISP. Your data, lots of data, can just stay with Amazon. Just add a web service proxy to your web (or desktop) application and off you go. This certainly makes some (or a lot, or at least a growing number of) applications easier to implement. Another reason to care about Amazon´s SimpleDB is its simplicity. In an age where dynamic programming becomes ever more popular and static whatever (e.g. typing, binding) loses value, making persistence more dynamic sure should look attractive. But exactly this is what Amazon´s SimpleDB is about: highly dynamic persistence of structured data.

SimpleDB data model

With SimpleDB you don´t define a database schema anymore. Your "data space" with Amazon is structured in a very simple way: it´s divided into sub-spaces called "domains" which each contain so-called "items" which each contain so-called "attributes". That´s it. And you can change the structure of this "data space" at any time. There is no distinction between meta data and data. Creating a domain (which resembles a table in a relational database) is a web service operation just like storing an item in a domain.
To make it very clear: You divide your "data space" into domains at your leisure. (Amazon currently just artificially limits the number of domains to 100.) And you stuff items of any structure into these domains. You never define a schema for a domain. The items stored in a domain don´t have to look the same. They can contain any number of attributes; all can differ in their number of attributes. Attributes are name-value pairs. So items are tuples of arbitrary arity. That means SimpleDB is not a relational database, but a tuple space. Just throw items/tuples into your SimpleDB instance at your leisure. That´s all there is to SimpleDB persistence. If you like, separate tuples into different domains - but whether you do or not does not make a big difference. For distinguishing between, say, customers and invoices that´s not necessary. It might even be counterproductive, since querying items is limited to one domain at a time. There is no such thing as a SQL join.

The use of domains

So why are there domains at all? Probably they help Amazon to make replication of items between servers easier. And it might speed up queries if you distribute your data across domains. So think of domains as easy-to-set-up data partitions in case you have to deal with huge amounts of data.

Multi-valued attributes

Not only do you not have to define a schema for a domain while all items/tuples can have a different structure, there´s another deviation from relational thinking: Attributes can have multiple values! So items don´t even comply with the relational first normal form. See the "Phone" attribute in the following item: It´s not just several phone numbers separated by commas. No! The "Phone" attribute is really structured. You can retrieve (and qu[...]
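The shape of such an item can be sketched with plain .NET collections. This illustrates only the data model - the attribute names and values are made up, and SimpleDB of course stores items remotely, not in a Dictionary:

```csharp
using System;
using System.Collections.Generic;

// An item is a named tuple of attributes; each attribute maps a
// name to one *or more* string values - no schema, no 1st normal form.
var item = new Dictionary<string, List<string>>
{
    ["Name"]  = new List<string> { "Peter" },
    ["Phone"] = new List<string> { "05195-7234", "040-413 823 090", "0170-233 4439" }
};

// The multiple phone numbers stay individually addressable,
// unlike a single comma-separated string:
foreach (string phone in item["Phone"])
    Console.WriteLine(phone);
```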

Code instrumentation with TraceSource - My personal vade mecum

Wed, 31 Oct 2007 19:42:32 GMT

When writing more complex code that you cannot really step through during debugging, it´s helpful to stud it with statements tracing the execution flow. The .NET Framework provides the System.Diagnostics namespace for this purpose. But whenever I just quickly wanted to use it, it turned out to be a hassle to get the tracing running properly. That´s why I wrote down the following, to make it easier next time.

How to instrument the code?

In the code set up different System.Diagnostics.TraceSource objects. For each area to watch define a trace source name, e.g. "BusinessLogic" or "Validation" or "TextFileAdapter". Each such trace source later can be switched on/off separately.

using System.Diagnostics;
...
TraceSource ts;
ts = new TraceSource("HelloWorld", SourceLevels.All);

A TraceSource object can be instantiated for just one method or can be kept around for a long time as a global (or even static) reference. To trace the execution use one of the TraceXYZ() methods of TraceSource. Most of the time one of the following will do. TraceInformation() writes an informational message, i.e. with TraceEventType Information: ts.TraceInformation("# of records processed: {0}", n); That´s the same as this: ts.TraceEvent(TraceEventType.Information, 10, "# of records processed: {0}", n); But TraceEvent() can do more. With it you can issue messages on different levels, e.g. just informational messages, warnings, error messages. The id (10 above) will show up in the event log in its own column ("Event"). Since tracing can sometimes produce an overwhelming amount of messages, you can filter them. For example you can restrict the output of messages to errors only. But you should not do so in your code. That´s why you should pass SourceLevels.All to the TraceSource ctor. (Without setting a source level explicitly, the default is Off, so you´d see no messages at all!)
However, if you want to limit the tracing imperatively to certain levels of messages, you can do so by passing a combination of levels to the ctor, e.g. ts = new TraceSource("HelloWorld", SourceLevels.Critical | SourceLevels.ActivityTracing);

How to attach tracing to different message sinks?

Tracing messages are written to any sink attached to the tracing infrastructure. A sink can be a text file or the console or an event log. Each sink is represented by a TraceListener object. A trace source can have any number of listeners listening for messages and writing them to their sinks. Listeners can be attached to trace sources imperatively in your code - or using the App.Config. I prefer the App.Config, because it allows you to add and remove listeners without touching your code. But as long as you´re satisfied with tracing messages being sent to the debug output window of VS.NET, you don´t need an App.Config at all. There is a default listener attached to each trace source. If you want to direct the messages to other sinks, though, or filter them, then you need to tweak the App.Config. The simplest trace source defined in the App.Config still has the debug output as its sink. The source´s name needs to match the name passed to a TraceSource object in your code. Now, you can add listeners to this source element to have the source´s tracing messages sent to different sinks:
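A minimal App.Config of this shape could look like the following (the listener names and the log file name are placeholders; the source name must match the one used in code):

```xml
<configuration>
  <system.diagnostics>
    <sources>
      <!-- name must match the name passed to the TraceSource ctor -->
      <source name="HelloWorld" switchValue="All">
        <listeners>
          <add name="console"
               type="System.Diagnostics.ConsoleTraceListener" />
          <add name="logfile"
               type="System.Diagnostics.TextWriterTraceListener"
               initializeData="HelloWorld.log" />
        </listeners>
      </source>
    </sources>
  </system.diagnostics>
</configuration>
```

With this in place the same ts.TraceInformation() calls show up on the console and in HelloWorld.log without any code change.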

Software Transactional Memory VII - Automatic retry of failed transactions

Sun, 05 Aug 2007 17:21:00 GMT

I concluded my previous posting on Software Transactional Memory (STM) with the remark that NSTM was not finished. How true! Here is the next release of NSTM with a couple of improvements. You can download it from Google´s project hosting site. Here´s what´s new:

Validation matrix

As mentioned in an earlier posting I was not quite satisfied with the validation strategy of NSTM. Even to me it was not entirely clear when a transactional object (txo) would be validated. I improved on this situation in the latest release of NSTM by implementing the following validation matrix (condition: isolation level + clone mode; what: read mode of the txos validated):

- serializable + cloneOnWrite: validate on read and on commit (ReadOnly on read; ReadOnly+ReadWrite on commit)
- serializable + cloneOnRead: validate on commit only (ReadOnly+ReadWrite)
- readCommitted + cloneOnRead: validate on commit (ReadOnly)
- readCommitted + cloneOnWrite: no validation

Now you can tweak a transaction´s independence of others very clearly: You can make it "subordinate" and fail as early as possible, i.e. as soon as it detects another transaction has changed a value it has read. Or you can make it very "dominant" by not caring for changes by other transactions and even rigorously overwriting them.

Automatic retry of failed transactions

With databases collisions of transactions are pretty rare. It´s unlikely that two transactions change data in the intersection of their working sets. There´s usually so much data that the intersection is very small or even non-existent. When you commit a database transaction you can be quite sure it will succeed. That´s why optimistic locking has become the only "locking strategy" left in ADO.NET. In-memory transactions, though, are different. The total amount of data on which concurrent transactions work is much smaller than with databases. In addition, in-memory data structures like a queue or stack or list hinge on just a few data items (e.g.
references to the first/last element) which are under heavy pressure if several transactions concurrently add/remove elements. If for example two transactions concurrently add elements to a queue, thereby updating the same data item (the reference to the head of the queue), it´s likely that one of them fails due to a validation error. However, had this failed transaction been run just a couple of milliseconds later, it would have succeeded, since it would have read its values after the other transaction´s commit. Since collisions/invalid transactions seem to be more likely with NSTM than with databases, and because the remedy is easy - just repeat the failed transaction -, a remedy should be developed. This remedy is automatically running transactions again on failure due to invalidity, i.e. automatically retrying them. That´s why I added ExecuteAtomically() to NstmMemory. In its simplest form it just executes a delegate synchronously within a transaction (the delegate does not take any parameters and is of type System.Threading.ThreadStart):

NstmMemory.ExecuteAtomically(
    delegate
    {
        ...
    }
[...]
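Conceptually, this automatic retry boils down to a loop around the transaction body. Here is a self-contained sketch with a simulated validation failure - none of NSTM´s real types are used, and NSTM´s actual internals differ:

```csharp
using System;

// Simulate a transaction body whose commit "collides" on the first
// attempt (another transaction changed a value it read) and then succeeds.
int attempts = 0;
bool TryCommit()
{
    attempts++;
    return attempts > 1; // first commit fails validation, second succeeds
}

// The retry loop: run the body, try to commit, repeat on validation failure.
int retries = 0;
while (true)
{
    // ... run the transaction body against a fresh transaction log ...
    if (TryCommit()) break; // commit succeeded - done
    retries++;              // invalid transaction - just run it again
}

Console.WriteLine($"committed after {retries} retry(ies)");
```

Run a couple of milliseconds later, the retried body reads the values committed by the other transaction and goes through - exactly the scenario described above.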

Software Transactional Memory VI - Becoming Aspect Oriented with PostSharp

Thu, 12 Jul 2007 11:26:00 GMT

The API for my .NET Software Transactional Memory (NSTM) I´ve described so far is straightforward, I´d say. It´s close to what you´re used to from transactional database access and it´s even integrated with System.Transactions: open a transaction, do some stuff with transactional objects, commit the transaction. All very explicit. Despite its familiarity this explicitness kind of stands in the way of what a piece of object oriented code is supposed to accomplish. It´s something you´ve to concern yourself with although it does not add to the raw functionality you want to achieve. And by being imperative it´s prone to being done incorrectly. A more declarative way to do in-memory transactions and less explicitness thus would be nice now that objects have become transactional and automatically threadsafe in general. I thought about that from the beginning when I started to work on NSTM. Microsoft´s STM implementation SXM shows how transactionality could be made more declarative and "invisible": With SXM you just adorn a class with the attribute [Atomic] to make its instances transactional. That´s cool - but so far I did not consider it for NSTM, since it seemed to require fiddling around with proxies. That´s why with SXM you need to instantiate transactional objects through a factory which you´ve to create for each transactional class:

[Atomic]
public class Node
{
    private int value;
    private Node next;
    ...
}
...
XObjectFactory factory = new XObjectFactory(typeof(Node));
Node node = (Node)factory.Create(value);

The benefits of using an attribute to make classes transactional seem to me to be lost through this cumbersome instantiation process. That´s why I did not go down this road until now.

PostSharp to the rescue

But then one day - lo and behold! - a very, very cool tool was revealed to me by InfoQ: PostSharp from Gael Fraiteur is a godsend!
It´s one of those technologies you´ve waited for all along without really knowing. It´s a technology that will change how you approach software development. At least that´s what it has done for me since I first read about it. What PostSharp does is not new, though. It´s a kind of Aspect Oriented Programming (AOP) weaver. And it´s a tool to alter the MSIL code in assemblies. But there are already several AOP frameworks out there - also for .NET -, so why another one? There are already tools to alter assemblies like Phoenix from Microsoft and Cecil from the Mono project. What makes PostSharp so remarkable to me, though, is the way in which it combines AOP with MSIL modification. It makes it so dead easy! You´re not forced to learn MSIL. The weaving is done automagically for you by an invisible post build step. No factories are needed to instantiate classes annotated with aspects, because no proxies are used. For many common types of aspects (e.g. pre/post processing of method calls, field access interception) there are templates that help you get your own aspects up and running in just a few minutes. Congratulations Gael! And thanks for being so responsive to questions and bug reports!

Aspect oriented transactions

The first thing I tried with PostSharp was to make NSTM transactions declarative. This seemed to b[...]

Software Transactional Memory V - Integration with System.Transactions

Tue, 10 Jul 2007 20:14:47 GMT

So far I´ve described my own .NET Software Transactional Memory´s (NSTM) API for managing transactions. It´s close to what you are used to from relational databases, I´d say. But still, it´s my own API and it stands beside what .NET already provides in terms of transactions. With System.Transactions there is already a general way to work transactionally across different resources like databases and message queues, so it would be nice if NSTM were another such resource. Juval Löwy [1] has described how this can be accomplished for in-memory data structures. However, making a data structure transactional like he did with the .NET collections does not make it threadsafe. .NET transactions - although attached to a thread - are not designed to help make multithreading easier. In addition they cannot be truly nested. Also the data duplication in Juval´s collections is very coarse grained, so it will become pretty slow pretty quickly once they grow larger. Nevertheless the article is worth reading and provides very helpful insights into how .NET transactions work. It helped me a great deal in making NSTM compatible with System.Transactions. From the point of view of System.Transactions a NSTM transaction is a resource. Its state - the transaction log (txlog) - is changed during a .NET transaction and either discarded at the end if the .NET transaction is rolled back, or committed, which means the transaction log is applied to the transactional objects (txo). Recognition of .NET transactions is not switched on automatically, though. Checking if a .NET transaction is running and whether a NSTM transaction is already enlisted with it is somewhat costly; NSTM needs to walk up the stack of active transactions for this. But you can switch on System.Transactions integration easily with the flag NstmMemory.SystemTransactionMode: EnlistOnAccess: NSTM checks on each access to a txo whether a .NET transaction is running.
If there is no NSTM transaction enlisted with the .NET transaction it begins a new NSTM transaction, enlists it, and will commit it/roll it back when the .NET transaction ends. EnlistOnBeginTransaction: NSTM checks for a .NET transaction only when a NSTM transaction is explicitly started using NstmMemory.BeginTransaction(). If a .NET transaction is running, the new NSTM transaction is enlisted with it. (To be more precise, NSTM even creates two nested transactions: the outer transaction is enlisted with the .NET transaction and the inner transaction is passed back to the application. That way the application can commit/rollback the inner transaction as usual and still get the final vote on the changes automatically from the outer transaction coupled to the .NET transaction.) Ignore: NSTM does not care if a .NET transaction is already running. All NSTM transactions are independent of System.Transactions. Here is some sample code showing how to use NSTM with System.Transactions:

NstmMemory.SystemTransactionMode = NstmSystemTransactionMode.EnlistOnAccess;

INstmObject o = NstmMemory.CreateObject(1);
using (TransactionScope tx = new TransactionScope())
{
    o.Write(2);
    tx.Complete();
}
Console.WriteLine(o.Read());

NstmMemory[...]

Software Transactional Memory IV - Thread-Bound Transactions

Fri, 06 Jul 2007 19:51:20 GMT

I´ve explained in my previous posting how a single transaction weaves its magic of isolating changes to transactional objects (txo) and atomically making them visible on commit. But what´s the "reach" or "scope" of a NSTM transaction? How many transactions can be open at the same time?

Transactions are kept in TLS

To answer these questions it´s vital to understand where transactions are stored: in thread local storage (TLS). NSTM transactions are bound to a single thread, the thread they were created on. This is different from System.Transactions, which are thread-independent (see [1] for details). But binding NSTM transactions to a thread is on purpose. NSTM is supposed to make multithreaded programming easier by isolating the work done in parallel on shared in-memory data structures. Thus transactions need to be thread-local to keep changes made to txos in local transaction logs (txlog). Application code in each thread works with its own transactions, which of course can work on the same txos: A txo is just briefly locked during access to avoid inconsistencies during single reads/writes. That´s necessary since a txo is not just a single value but contains at least a value and a version number which must not get out of step. But this kind of locking is hidden from you. Don´t worry about it. Nothing is permanently locked for the duration of a transaction. No deadlocks can occur. Rather NSTM bets on optimistic locking as already explained: txos are validated during commit (at the latest) to check if they were changed by some other transaction, and if so, the transaction fails. It´s the same as with ADO.NET when saving changes made to a DataSet. When you call Read() or Write() on an INstmObject the object retrieves the current NSTM transaction from TLS and delegates further processing of your request to the transaction.

The usual transaction scopes

TLS does not just point to a single transaction, though.
Rather it contains a stack of transactions (txstack) because you can nest them. Check out this code:

    1 using (INstmTransaction txOuter = NstmMemory.BeginTransaction())
    2 {
    3     ...
    4     using (INstmTransaction txInner = NstmMemory.BeginTransaction(NstmTransactionScopeOption.RequiresNew, ...))
    5     {
    6         ...
    7     }
    8     ...
    9 }

In line 3 there is one transaction on the TLS txstack, in line 6 there are two on the txstack with txInner being the topmost. In line 8 it´s again just one transaction: txOuter. Each transaction of course keeps its own transaction log. By default they are independent. However, they need to be ended in reverse order despite their independence. A transaction opened while another was active needs to be committed or rolled back first. That´s why you should nest transactions like above with using statements. They ensure the right order. But there are several transaction scope options. Above, the inner transaction is opened with RequiresNew. This ensures a new transaction is opened although another one is already active. The default however is just Required:

   10 using (INstmTransaction txOuter = NstmMemory.BeginTransaction())
   11 {
   12     ...[...]
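The optimistic validation described above can be sketched with nothing but a version counter per object. This is an illustration only - NSTM´s real txo and txlog bookkeeping is more involved:

```csharp
using System;

// A transactional object carries its value plus a version number.
int sharedValue = 1, sharedVersion = 7;

// A transaction remembers the version it read...
int readVersion = sharedVersion;
int localCopy = sharedValue;
localCopy = 42; // ...and works on its private copy only.

// Meanwhile another transaction commits a change to the same txo:
sharedValue = 99;
sharedVersion++;

// On commit, validation compares versions - here it must fail,
// so the first transaction´s changes are never made visible.
bool valid = (sharedVersion == readVersion);
Console.WriteLine(valid ? "commit" : "validation failed - retry");
```

No txo is locked while the transaction runs; the version comparison at commit time is what detects the conflict.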