Greg Young


Copyright: Greg Young

Blog moving

Fri, 09 Jun 2006 20:30:00 GMT

Originally posted on:

I have enjoyed my time here at geekswithblogs (even the green monster) but I will be moving my blog. Topics the same, URL different.

This blog will be moving to 



C# FP Math Leaky Abstraction

Mon, 05 Jun 2006 10:41:00 GMT

Originally posted on: you have probably seen a post from last Tuesday entitled Floating Point Fun. If you have not read it I would recommend going back and reading it before continuing. In that post I discuss some of the interesting things that can happen when dealing with floating point math in C#; it is important to note that these items did not happen in version 1.x of the framework.

The root of these problems is that a floating point value held in a register is treated with a different precision than one held in memory. As such you can run into cases where you are comparing a Float32 or a Float64 against an 80-bit register-based float. These equality comparisons (or conversions to other types such as an integer) can obviously fail due to the difference in precision.

After tracing through the generated assembly, I found a great reference on the subject at David Notario's blog. David correctly points out that this is not a CLR/JIT issue; in fact changes like this were alluded to in the CLR spec (there is a quote from the ECMA spec on his blog). There was also some documentation on this breaking change in 2.0. Here is the listing from the breaking changes documentation:

    In the CLR model, we assert that arguments, locals, and return values (things which you can't tell their size) can be expanded at will. When you say float, it means anything greater than or equal to a float. So we can sometimes give you what you asked for 'or better'. When we do this, we can spill 'extra' data, almost like a 'you only asked for 15 precision points, but congratulations! We gave you 18!'. If someone expected the floating point precision to always remain exactly the same, they could be affected. In order to facilitate performance improvements and better scenarios, the CLR may rewrite (as in this case) parts of the register. For example, things that used to truncate because of spilling no longer do. We make these kinds of changes all the time. We believe this is an appropriate change, and it is even called out specifically in the CLI specification as something which can, and will, occur with different iterations.

What makes these changes particularly nasty is that you are forced to second-guess how the JIT works in order to get consistent results. In my previous post I used this example:

    float f = 97.09f;
    f = (f * 100f);
    int tmp = (int)f;
    Console.WriteLine(tmp);

This code will work in either debug or release mode when a debugger is attached, because having the debugger attached disables the JIT optimizations that cause the problem. As I describe in the previous post, it fails when run without the debugger. If we want it to work all of the time we need to write it in this form:

    float f = 97.09f;
    f = (float)(f * 100f);
    int tmp = (int)f;
    Console.WriteLine(tmp);

The explicit cast forces the value to be narrowed back to a float32; without the narrowing it may actually live in a register as an 80-bit float. With the cast we end up with the predictable behavior of always producing the correct result of 9709.

The problem I have with this behavior is that it is a leaky abstraction. In order to have our code work properly (and be efficient) we need to know exactly how the compiler and the JIT intend to optimize our code. This introduces a logical problem, though, as by its very definition we do not know how the JIT will optimize our code. The JIT could place this value in a register at some times and not at others, or the JIT on a different platform could offer different behavior than the JIT we tested with. This becomes especially nasty when dealing with constants; consider the following code:

    float f = 97.09f;
    f = (f * 100f);
    bool test = f == 97.09f * 100f;
    Console.WriteLine(test);

What is the value of test? The abstraction leaks for both the compiler and the JIT. To start with, are the floats actuall[...]
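The defensive pattern described above can be wrapped up as a tiny helper. This is a sketch of my own, not code from the post: the explicit (float) cast forces narrowing back to float32 before the integer conversion, so the result cannot depend on an 80-bit register value.

```csharp
using System;

public class NarrowDemo
{
    // Force float32 semantics at each step so the result does not
    // depend on whether the JIT keeps the product in a wide register.
    public static int ScaledToInt(float f, float scale)
    {
        float product = (float)(f * scale); // explicit narrowing to float32
        return (int)product;
    }

    static void Main()
    {
        Console.WriteLine(ScaledToInt(97.09f, 100f)); // 9709, per the post
    }
}
```

The cast looks redundant (the expression is already float-typed), but it is exactly the narrowing the C# compiler emits as a store/load pair in IL, which is what makes the behavior deterministic.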


Sun, 28 May 2006 09:24:00 GMT

Originally posted on:

I have been searching for an answer to this one and I am perplexed. The call returns an array of the generic arguments .. what I can't figure out is whether they will ever be out of order. The arguments themselves have a position on them; is it possible that I get them back out of order, where I would need to re-order them?

basically what I am doing is something similar to the following on a generic type definition


    string[] Params = new string[typeArguments.Length];
    for (int i = 0; i < typeArguments.Length; i++)
    {
        Params[i] = typeArguments[i].Name;
    }
    GenericTypeParameterBuilder[] typeParams = outputType.DefineGenericParameters(Params);
    Debug.Assert(typeParams.Length == typeArguments.Length);
    for (int i = 0; i < typeParams.Length; i++)
    {
        GenericTypeParameterBuilder builder = typeParams[i];
        Type OriginalType = typeArguments[i];
        // ...
    }


I have to say as well that I am rather unimpressed with the bridge I am forced to put up here .. it would be a lot nicer to just pass through the generic arguments I already have as opposed to creating a string[] and then iterating through .. maybe I am missing something with the API?

and for those who know me well .. you probably know what I am working on .. (hint: the topic this is posted in)

Update: it seems the documentation has been updated to reflect that the return is sorted.
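For context, here is a fuller, self-contained sketch of the bridge in question. This is my reconstruction, not the post's code, and on modern runtimes AssemblyBuilder.DefineDynamicAssembly replaces the 2.0-era AppDomain.CurrentDomain.DefineDynamicAssembly call:

```csharp
using System;
using System.Reflection;
using System.Reflection.Emit;

public class GenericParamsDemo
{
    // Copy the generic parameters of an existing generic type definition
    // onto a new TypeBuilder, returning the names that came back.
    public static string[] CopyGenericParameters(Type source)
    {
        Type[] typeArguments = source.GetGenericArguments();

        AssemblyBuilder asm = AssemblyBuilder.DefineDynamicAssembly(
            new AssemblyName("ProxyAsm"), AssemblyBuilderAccess.Run);
        ModuleBuilder module = asm.DefineDynamicModule("ProxyMod");
        TypeBuilder outputType = module.DefineType("Proxy_" + source.Name);

        // DefineGenericParameters takes names, not Types .. hence the
        // string[] bridge complained about above.
        string[] names = new string[typeArguments.Length];
        for (int i = 0; i < typeArguments.Length; i++)
            names[i] = typeArguments[i].Name;

        GenericTypeParameterBuilder[] typeParams =
            outputType.DefineGenericParameters(names);

        // The parameters come back in declaration order.
        string[] result = new string[typeParams.Length];
        for (int i = 0; i < typeParams.Length; i++)
            result[i] = typeParams[i].Name;
        return result;
    }

    static void Main()
    {
        foreach (string s in CopyGenericParameters(
                 typeof(System.Collections.Generic.Dictionary<,>)))
            Console.WriteLine(s); // TKey then TValue
    }
}
```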


Fireworks to XAML converter

Sat, 03 Jun 2006 04:32:00 GMT

Originally posted on:


This is looking useful!


Private Object Namespaces and Condition Variables

Thu, 01 Jun 2006 23:04:00 GMT

Originally posted on: knew there was a reason I kept Junfeng Zhang's blog on my list (even during the slow months). I hadn't checked the blog in a few weeks, but reading it now just made my day. There are two new items listed on the blog.

The first is that someone fixed a huge security hole; I have actually run into this particular security hole. Junfeng calls it kernel object name squatting. I have never heard it called by this name, but it is a pretty simple problem: objects shared between processes are, well, shared .. the question is who protects them from prying eyes? Let's propose you have the following code (note this is a trivial example and likely has bugs in it, but it should illustrate the point):

    static void Main(string[] args) {
        bool Created;
        Mutex m = new Mutex(false, "MutexWeWillSteal2", out Created);
        if (Created) {
            Console.WriteLine("MutexCreated");
        }
        for (int i = 0; i < 100; i++) {
            Thread.Sleep(200);
            bool havelock = false;
            try {
                havelock = m.WaitOne(5000, false);
                if (havelock) {
                    Console.WriteLine("acquired lock");
                    Thread.Sleep(500);
                } else {
                    Console.WriteLine("Unable to acquire lock");
                }
            }
            finally {
                if (havelock) {
                    m.ReleaseMutex();
                }
            }
        }
    }

As we can see, this application simply starts up, creates a mutex if it does not already exist, and then repeatedly obtains and releases the lock. You can quite easily bring up two of these applications to notice that they are synchronizing with each other. Using a named mutex like this is extremely common for synchronizing two processes. The problem with a mutex is not as great as with some objects, as I can apply an ACL to prevent people not at a certain level from accessing it. Unfortunately I still suffer from a denial of service attack from applications at my own level.

One can quite easily use the debugger (or other tools) to find out the names of the objects I am using (!handle in windbg will bring this right up for me). Once I have that name I can write a bit of code such as the following:

    static void Main(string[] args) {
        bool Created;
        Mutex m = new Mutex(true, "MutexWeWillSteal2", out Created);
        m.WaitOne(-1, false);
        Thread.Sleep(int.MaxValue);
    }

Provided this code acquires the mutex before our other processes, the other processes will just fail. We have effectively made the other application unable to do anything (the basis of a denial of service attack).

What is being introduced in LH is the ability for me to make my two processes share a namespace; as such, their namespace can be protected. The malicious program can start, but one can prevent it from having access to the mutex. Junfeng's post explains exactly how it works, but basically the processes that want to share the data define a boundary (and requirements to get into the boundary) that they both share:

    This function requires that you specify a boundary that defines how the objects in the namespace are to be isolated. Boundaries can include security identifiers (SID), session numbers, or any type of information defined by the application. The caller must be within the specified boundary for the create operation to succeed.

The second post I found really interestin[...]
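A side note of my own, not from the post: the squatting attack relies on the mutex having a guessable name. When the synchronization is actually within one process, an unnamed mutex has no name to squat on at all; a minimal sketch:

```csharp
using System;
using System.Threading;

public class UnnamedMutexDemo
{
    // Unnamed: visible only through this handle, so no other process
    // (or same-machine attacker) can open it by name.
    private static readonly Mutex m = new Mutex();
    public static int Counter;

    public static void Work()
    {
        m.WaitOne();
        try { Counter++; }
        finally { m.ReleaseMutex(); }
    }

    static void Main()
    {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(delegate()
            {
                for (int j = 0; j < 1000; j++) Work();
            });
            threads[i].Start();
        }
        foreach (Thread t in threads) t.Join();
        Console.WriteLine(Counter); // 4000
    }
}
```

For cross-process cases you are back to named objects, which is exactly why the private namespaces described above matter.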

Unit Test???

Thu, 01 Jun 2006 01:23:00 GMT

Originally posted on: was poking through Jason Haley's blog today when I came across one of his interesting links (which are usually pretty interesting, btw). It pointed to a post by Phillip Haack titled A Testing Mail Server for Unit Testing Email Functionality. I generally enjoy reading about others' unit testing experiences, as I often gain quite a bit of perspective (and get to see a lot of problems I may not otherwise get to see).

Basically what he has done is taken an open source SMTP server and used it to allow his unit tests to check whether an email sent through the .NET email libraries actually arrived at its destination. Let me say that he has come up with a very inventive way of automating this integration test. I personally think that his code is extremely useful for automated integration testing. I have to admit, I have done automated integration tests for emailing and I used a far less elegant solution.

As I was reading through the post I came across this comment:

    As for the semantic arguments around whether this really constitutes an Integration Test as opposed to a Unit Test, please don't bore me with your hang-ups. Either way, it deserves a test and what better way to test it than using something like MbUnit or NUnit.

Well, let me start by saying it is an integration test at best (I might even classify this as a user acceptance type test) and has no place being a unit test.

He did however hit the appropriate unit test on the head earlier in his post when he suggested an EmailProvider (I say EmailService but that truly is semantics) which could then be mocked for testing purposes. I would personally either create my own abstraction of an "Email" class or use the one from System.Net.Mail as a parameter instead of passing parameters, but the reasons for this will have to wait for another post.

That said, let's look at the test case he created to test that email actually got delivered:

    DotNetOpenMailProvider provider = new DotNetOpenMailProvider();
    NameValueCollection configValue = new NameValueCollection();
    configValue["smtpServer"] = "";
    configValue["port"] = "8081";
    provider.Initialize("providerTest", configValue);
    TestSmtpServer receivingServer = new TestSmtpServer();
    try {
        receivingServer.Start("", 8081);
        provider.Send(,
            ,
            "Subject to nothing",
            "Mr. Watson. Come here. I need you.");
    }
    finally {
        receivingServer.Stop();
    }
    // So Did It Work?
    Assert.AreEqual(1, receivingServer.Inbox.Count);
    ReceivedEmailMessage received = receivingServer.Inbox[0];
    Assert.AreEqual("", received.ToAddress.Email);

I can identify a few places that could cause this test to fail that would not cause our mock to fail otherwise:

1) We did not properly set up our configValue["smtpServer"] configuration
2) We did not properly set up our configValue["port"] configuration
3) Someone is already listening on our configured port
4) We did not properly start our testing server, or it is failing in some way (i.e. configuration)
5) We have a firewall that prevents the communication from working between the client and our email server
6) There is no accessible network between the client and the server
7) The Ethernet elves ate the address that the mail was sent to
8) [...]
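The mocked-provider unit test alluded to above might look like the following sketch. All the names here (IEmailProvider, FakeEmailProvider, PasswordResetService) are my own illustrations, not code from either post: the real provider would talk SMTP, while the fake just records the call, so the unit test never touches the network.

```csharp
using System;

public interface IEmailProvider
{
    void Send(string from, string to, string subject, string body);
}

// Test double: records what was sent instead of sending it.
public class FakeEmailProvider : IEmailProvider
{
    public int SentCount;
    public string LastTo;

    public void Send(string from, string to, string subject, string body)
    {
        SentCount++;
        LastTo = to;
    }
}

// Code under test: depends on the abstraction, not on SMTP.
public class PasswordResetService
{
    private readonly IEmailProvider _email;
    public PasswordResetService(IEmailProvider email) { _email = email; }

    public void Reset(string user)
    {
        _email.Send("noreply@example.com", user, "Password reset", "...");
    }
}

public class EmailUnitTest
{
    static void Main()
    {
        FakeEmailProvider fake = new FakeEmailProvider();
        new PasswordResetService(fake).Reset("watson@example.com");
        Console.WriteLine(fake.SentCount); // 1
        Console.WriteLine(fake.LastTo);    // watson@example.com
    }
}
```

None of the eight failure modes listed above can make this test fail; it fails only when the code under test stops sending the email.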

String to Int []

Tue, 30 May 2006 01:56:00 GMT

Originally posted on: I apologize for the code not being indented; I am using Word 2007 and no matter what I do it does not seem to want to indent properly (I have tried copy/pasting to Notepad, you name it). If you know how to get this working please contact me. See picture of Word version.

Update: I manually indented all of it, but I am still wanting to know how to avoid this.

Everyone seems to enjoy performance posts, so ... I saw this question in the advanced .NET group and found it fairly interesting: I have a comma separated list of integer values; what's the quickest way to turn this into an integer array?

Being the geek that I am, I set off immediately to find out just how fast I could make an algorithm fly. That said, let's start out with the most obvious answer: we will split the string and call int.Parse() on the string elements. We can use this code to benchmark our other results against.

    static int[] Split(string _Numbers) {
        string[] pieces = _Numbers.Split(',');
        int[] ret = new int[pieces.Length];
        for (int i = 0; i < pieces.Length; i++) {
            ret[i] = int.Parse(pieces[i]);
        }
        return ret;
    }

Next we need some data to feed into the routine. I chose "1,2,3,4,5,6,7,8,9,10,121,1000,10000" as my testing data. Running through this data 1,000,000 times takes a total of 4.39 seconds on my system (in release mode). I think we can do better than this!!

Using an old trick, I figured that I would just make the string a char[] and iterate through it, building my current number by subtracting each digit's char code from '0' (it just so happens that '9' - '0' = 9, how convenient!). Applying this methodology leaves us with the following code:

    static int[] Iterate(string _Numbers, ref int _Count) {
        int[] buffer = new int[4];
        char[] chars = _Numbers.ToCharArray();
        _Count = 0;
        int holder = 0;
        for (int i = 0; i < chars.Length; i++) {
            if (chars[i] == ',') {
                buffer[_Count] = holder;
                holder = 0;
                _Count++;
                if (_Count == buffer.Length) {
                    int[] tmp = buffer;
                    buffer = new int[tmp.Length * 2];
                    Buffer.BlockCopy(tmp, 0, buffer, 0, tmp.Length * 4);
                }
            } else {
                holder = holder * 10 + chars[i] - '0';
            }
        }
        buffer[_Count] = holder; // store the final number after the last comma
        _Count++;
        return buffer;
    }

This code also has some oddities since it does not initially know the size of the int[] that it needs to pass back. In order to support this, it grows its int[] as it needs to (by doubling). This can be an expensive operation, so avoiding it is best. Also, since it is doubling its array, it has a new parameter _Count which it uses to return the total number of elements in the array it returns (it may return a 32-element array that only uses 18 elements). As for performance, the code above will handle the same data as our first test in about .5 seconds on my machine with a buffer size of 4. Not bad: 10% of our first try! To show the importance of the buffer, though: if we make the initial size 16, the code finishes in .4 seconds! There are still some areas we can optimize. Keep in mind that this code is creating 1,000,000 char[] instances. This is a pretty expensive operation; by using unsafe code we can avoid doing it. Here is the code:

    static unsafe int[] Unsaf[...]
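The digit trick the post relies on can be isolated into a minimal sketch of my own: an integer is built up by accumulating (char - '0') as we scan, which is exactly what the inner else branch above does.

```csharp
using System;

public class DigitTrickDemo
{
    // Parse a non-negative decimal string without int.Parse: each
    // character's code minus '0' is its digit value, because the digit
    // characters '0'..'9' are contiguous.
    public static int ParseFast(string s)
    {
        int holder = 0;
        for (int i = 0; i < s.Length; i++)
            holder = holder * 10 + (s[i] - '0');
        return holder;
    }

    static void Main()
    {
        Console.WriteLine('9' - '0');           // 9
        Console.WriteLine(ParseFast("10000"));  // 10000
    }
}
```

Note this sketch does no validation (signs, whitespace, overflow), which is a large part of why it beats int.Parse.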

Floating Point Fun

Tue, 30 May 2006 16:35:00 GMT

Originally posted on: am sure by now that most know how floating point approximations work on a computer. They can be quite interesting. This has to be the weirdest experience I have ever had with them, though.

Open a new console application in .NET 2.0 (set to build in release mode; /debug:pdbonly should be the default). It is important for me to note that all of this code runs fine in 1.x. Paste the following code into your main function:

    float f = 97.09f;
    int tmp = (int)(f * 100.0f);
    Console.WriteLine(tmp);

Output: 9708

Interesting, eh? It gets more interesting!

    float f = 97.09f;
    float tmp = f * 100.0f;
    Console.WriteLine(tmp);

Output: 9709

This is very interesting when taken in context with the operation above. Let's stop for a minute and think about what we said should happen. We told it to take f and multiply it by 100.0, storing the intermediate result as a floating point, and to then take that floating point and convert it to an integer. When we run the second example, we can see that if we do the operation as a floating point, it comes out correctly. So where is the disconnect? Let's try to explicitly tell the compiler what we want to do:

    float f = 97.09f;
    f = (f * 100f);
    int tmp = (int)f;
    Console.WriteLine(tmp);

Output: 9709 with a debugger attached, 9708 without in release mode!! (/debug:pdbonly, even with no debug information through advanced settings)

Wow, this has become REALLY interesting. What on earth happened here? Let's look at some IL to get a better idea of what's going on.

    .locals init (
        [0] float32 single1,
        [1] float32 single2)
    L_0000: ldc.r4 97.09
    L_0005: stloc.0
    L_0006: ldloc.0
    L_0007: ldc.r4 100
    L_000c: mul
    L_000d: stloc.1
    L_000e: ldloc.1
    L_000f: call void [mscorlib]System.Console::WriteLine(float32)
    L_0014: ret

This is our floating point example that prints the correct value (as a float).

    .locals init (
        [0] float32 single1,
        [1] int32 num1)
    L_0000: ldc.r4 97.09
    L_0005: stloc.0
    L_0006: ldloc.0
    L_0007: ldc.r4 100
    L_000c: mul
    L_000d: conv.i4
    L_000e: stloc.1
    L_000f: ldloc.1
    L_0010: call void [mscorlib]System.Console::WriteLine(int32)
    L_0015: ret

This is our floating point example that came out wrong above.

    .locals init (
        [0] float32 single1,
        [1] int32 num1)
    L_0000: ldc.r4 97.09
    L_0005: stloc.0
    L_0006: ldloc.0
    L_0007: ldc.r4 100
    L_000c: mul
    L_000d: stloc.0
    L_000e: ldloc.0
    L_000f: conv.i4
    L_0010: stloc.1
    L_0011: ldloc.1
    L_0012: call void [mscorlib]System.Console::WriteLine(int32)
    L_0017: ret

This is our floating point example that gets it right when a debugger is attached, but not without.

Interesting: the only significant difference between the one that never works and the one that works only when a debugger is attached is that the working one stores and then loads our value back onto the stack before issuing the conv.i4 on the value.

    L_000c: mul
    L_000d: stloc.0
    L_000e: ldloc.0
    L_000f: conv.i4

Basically these instructions tell it to take the result of the multiplication (pop it off of the stack) and store it back into location 0, which is our floating point variable. It then says to take that floating point variable and push it onto the stack so it can be used for the cast operation. This is probably something that should be handled for us (by the C# compiler) in the case of our first example so that it works as well as the third example. The "debugger/no debugger" problem is still our big problem, though. The fact that JIT optimizations change the behavior of identical IL is frankly kind of scary.

My initial thought upon seeing the changes we just identified was that the operation was being optimized away by the JIT (storing and loading the same value on the stack s[...]

AOP with Generics Thoughts

Mon, 29 May 2006 17:02:00 GMT

Originally posted on:

As some of you may have realized, I am in the process of re-implementing my AOP framework to fully support generics right now (figured I might as well as I am white boarding it for open source deployment anyways). I have come across numerous issues in dealing with generics. Today I sent an email to the castle project group (who are going through a similar task in supporting generics in Dynamic Proxy). I figured I would post that email here as well in case others have thoughts on some of the issues I present.

I am going to dump out some of my experiences here, I have already shared some of these with hammett off list but they may help in the design of the next version of dynamic proxy (and definitely bring up discussion points).
The goal of truly supporting generics is the ability to reuse the generic proxy. This goal is not easily realized; I am beginning to question whether or not it is even worthwhile to create generic proxies.
In order to have a functional generic dynamic proxy system I need to support generic aware point cuts.
Foo<T>.SomeMethod .. applies to all proxy instances
Foo<SomeClass>.SomeMethod .. only applies to proxy instances where T=SomeClass
Foo<T>.SomeMethod where T is ISomething .. dynamic application where T implements ISomething
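The third, constraint-style pointcut can be sketched as a reflection check. The names here (Foo, ISomething, Matches) are my own illustration of the matching rule, not the framework's code:

```csharp
using System;

public interface ISomething {}
public class Yes : ISomething {}
public class No {}
public class Foo<T> {}

public class PointcutDemo
{
    // Does a closed proxy type Foo<X> satisfy "T is ISomething"?
    public static bool Matches(Type closedType)
    {
        if (!closedType.IsGenericType ||
            closedType.GetGenericTypeDefinition() != typeof(Foo<>))
            return false;
        Type t = closedType.GetGenericArguments()[0];
        return typeof(ISomething).IsAssignableFrom(t);
    }

    static void Main()
    {
        Console.WriteLine(Matches(typeof(Foo<Yes>))); // True
        Console.WriteLine(Matches(typeof(Foo<No>)));  // False
    }
}
```

The check itself is cheap; the hard part, as described above, is deciding where it runs: baked into a shared generic proxy, or used to decide which closed-type proxy to emit.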
This makes generation of generic proxies nasty at best. If we want to build a reusable proxy we have to build a superset of all defined behavior for any derived versions then conditionally not do anything at those interception points. We can move this behavior out to our interceptor cache ( i.e. simply pass a better context and return null representing no action to be taken) but this is still placing code into our proxies that we know for a fact will never be used.
An after interceptor defined only on Foo<SomeClass> must be placed on the shared Foo<T> proxy and guarded with an if for all other closed types .. this would allow us to reuse the generic proxy, but it trades away performance for the other proxies.
This problem becomes compounded when dealing with mixins, as someone could define a mixin that applies only to Foo<SomeClass> and not to Foo<T> in general. As such, our previous solution of managing this in an interceptor cache becomes invalid, so we are forced to create separate proxies for closed types in many cases. This alleviates the problem of having garbage interceptor code, but now we are also losing the ability to reuse our generic proxy.
Just some food for thought :)


ICollection and ICollection<>

Wed, 10 May 2006 02:19:00 GMT

Originally posted on:

Earlier I posted (and deleted) about the Queue<T> class not implementing ICollection<T>; from some research, this is by design.


"Some collections that limit access to their elements, like the Queue class and the Stack class, directly implement the ICollection interface."

If you look, the interfaces are also very different from each other in what they include. The generic one includes methods such as ... Add, Remove, and Clear which do not exist on the non-generic ICollection.

So, conclusion .. Queue<T> is correct, but I have to say that having generic and non-generic ICollection interfaces that represent completely different things is a bit confusing at best :-/
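The situation can be checked in a couple of lines (a sketch of my own, not from the post): the generic queue implements the non-generic ICollection, but not ICollection<T>.

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

public class QueueInterfacesDemo
{
    public static bool ImplementsNonGeneric()
    {
        return new Queue<int>() is ICollection;
    }

    public static bool ImplementsGeneric()
    {
        return typeof(ICollection<int>).IsAssignableFrom(typeof(Queue<int>));
    }

    static void Main()
    {
        Console.WriteLine(ImplementsNonGeneric()); // True
        Console.WriteLine(ImplementsGeneric());    // False
    }
}
```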


Sorting Performance .NET 2.0

Sun, 28 May 2006 04:17:00 GMT

Originally posted on:

Be very careful when using Array.Sort in 2.0. I had posted a bug report about this a while ago

The behavior was originally observed in ArrayList (which uses Array.Sort internally), so keep in mind that this applies to it as well.

Array.Sort(Items, 0, Items.Length, Comparer.Default); //takes 1 minute
Array.Sort(Items2, 0, Items.Length, null); //takes 250 ms

Items and Items2 are both clones of the same object []

What is tricky about this code is that the second call does not actually call the non-generic Array.Sort .. it calls the generic Array.Sort<T>.

First call:

IL_0083: ldsfld class [mscorlib]System.Collections.Comparer [mscorlib]System.Collections.Comparer::Default
IL_0088: call void [mscorlib]System.Array::Sort(class [mscorlib]System.Array, int32, int32, class [mscorlib]System.Collections.IComparer)

Second Call:

IL_00c9: ldnull
IL_00ca: call void [mscorlib]System.Array::Sort (!!0[], int32, int32, class [mscorlib]System.Collections.Generic.IComparer`1)

Ah .. :)

Basically the issue is that there are 2 distinct sorting implementations .. one behind the non-generic Array.Sort and one behind the generic Array.Sort<T> .. and they do not use the same algorithm. From what I understand, some changes were made in how pivots are chosen.
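Overload resolution is what picks between the two paths, so you can steer it explicitly. A sketch of my own (not the post's benchmark): passing an IComparer<T> such as Comparer<T>.Default binds to the generic Array.Sort<T> overload, whereas the non-generic Comparer.Default can only bind to the non-generic Array.Sort(Array, ...).

```csharp
using System;
using System.Collections.Generic;

public class SortOverloadDemo
{
    static void Main()
    {
        string[] items = { "pear", "apple", "orange" };

        // Binds to Array.Sort<T>(T[], int, int, IComparer<T>) because
        // Comparer<string>.Default implements IComparer<string>.
        Array.Sort(items, 0, items.Length, Comparer<string>.Default);

        Console.WriteLine(string.Join(",", items)); // apple,orange,pear
    }
}
```

If you want the generic path but have non-generic calling code, swapping Comparer.Default for Comparer<T>.Default is usually the smallest change that gets you there.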


Blogging from Word2007

Mon, 29 May 2006 02:17:00 GMT

Originally posted on:

Hoping that this works…


Dynamic Proxy Quandries

Tue, 24 Jan 2006 17:25:00 GMT

Originally posted on: the creation of my dynamic proxy I ran into some "interesting" fringe conditions... I started off, as all do, needing a very simple interceptor generator; this is btw very easily done if anyone is thinking about attempting it. I later decided to add mixin support, which was a bit more interesting. That said, let's get into some background information to help explain the issues.

Mixins, for those who are not aware, involve the dynamic aggregation (either at compile, link, or run time) of multiple objects. When I first did mixins I only supported interface/implementor pairs (I'll explain why after the example). Here's a basic hand-done example of what occurs when we are implementing an interface/implementor pair:

    public class BasicClass {
        public virtual int Method() {
            Console.WriteLine("BasicClass:Method");
            return -1;
        }
        public BasicClass() {}
    }

    public interface IBar {
        void Go();
    }

Our implementer:

    public class BarImplementer : IBar {
        public void Go() {
            Console.WriteLine("BarImplementer::Go");
        }
    }

And finally the aggregation class that would be generated to show the "mixin" behavior:

    // inherit from our subject and add the interface
    public class BasicClassWithIBarProxy : BasicClass, IBar {
        private BasicClass m_Subject;         // encapsulate our subject
        private BarImplementer m_Implementer; // encapsulate our implementer

        // override the method and broker the call to our subject
        public override int Method() {
            Console.WriteLine("BasicClassWithIBarProxy:Method");
            return m_Subject.Method();
        }

        public void Go() { // needed for IBar
            m_Implementer.Go();
        }

        // allow our subject and implementer to be given to us upon construction
        public BasicClassWithIBarProxy(BasicClass _Subject, BarImplementer _BarImplementer) {
            m_Subject = _Subject;
            m_Implementer = _BarImplementer;
        }
    }

To make this generic, one could say that my dynamic proxy generation went through the following steps:

    Iterate through interface/implementor pairs
    Add the interface to the class declaration
    Add a parameter to the constructor
    Encapsulate the class
    Iterate through the methods of the interface, adding method redirection to the encapsulated implementor
    Iterate through each event, creating a quick handler that bubbles the event

As one can see, the generated class is simply a proxy of BasicClass that also aggregates IBar, passing the method calls off to an implementor object that knows how to handle the calls received for IBar. You will notice that in this case I am not passing context; I did this for simplicity of the example, and the real server does pass context information to the implementor.

This is a wonderful way of performing my aggregations, but as I knew would happen, I ran into a case where the code that I needed to aggregate was not mine, nor did it support an interface. At this point I decided to support multiple base class aggregation.

I am quite sure the astute reader just thought "wait, this is a single inheritance based environment". Well, actually the very astute reader probably knows that there are numerous simulated multiple inheritance patterns available :) I chose to use David Esparza-Guerrero's pattern listed here to do my simulated multiple inheritance. To make a long read short (though I recommend reading it)... it uses implicit operators to allow for the simulation.

Note that this aggregation based method only supports public contract MI, not true MI within the object itself (although with a bit of hacking with reflection to avoid scope issues ... actually just no ... don't do that :))

Using such a meth[...]
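The implicit-operator simulation mentioned above can be sketched roughly like this. This is my own minimal illustration of the general pattern, not David Esparza-Guerrero's actual code: the combined class aggregates both "bases" and defines an implicit conversion to each, so instances can be passed wherever either base is expected.

```csharp
using System;

public class BaseA
{
    public virtual string A() { return "A"; }
}

public class BaseB
{
    public virtual string B() { return "B"; }
}

// Simulated MI: Combined is usable as a BaseA or a BaseB via implicit
// conversion, even though C# allows only one real base class.
public class Combined
{
    private readonly BaseA a = new BaseA();
    private readonly BaseB b = new BaseB();

    public static implicit operator BaseA(Combined c) { return c.a; }
    public static implicit operator BaseB(Combined c) { return c.b; }
}

public class MixinDemo
{
    static void Main()
    {
        Combined c = new Combined();
        BaseA asA = c; // implicit conversion kicks in
        BaseB asB = c;
        Console.WriteLine(asA.A()); // A
        Console.WriteLine(asB.B()); // B
    }
}
```

This is exactly "public contract MI": the conversions hand out the aggregated parts, so virtual dispatch and identity still belong to the parts, not to Combined itself.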

Dynamic Proxy and Attributes

Tue, 24 Jan 2006 18:01:00 GMT

Originally posted on:

Sorry for spamming, but I have had a few posts stuck in my head for a few weeks, and I am sitting at my desk in a position I have resigned from, waiting out my time. Obviously I will not be receiving any new projects, and all old projects are waiting on other resources = Bored Greg.


Continuing with the dynamic proxy I created a DSL for runtime based aspect assignment.

One thing that I did which was a bit different than anything I had seen done before was that I allowed attributes to map aspects.


In the assignment language you can say

Assign to Attribute


Assign to Method Level Attribute


Assign to Class Level Attribute

The first statement will assign an aspect to any method that has the attribute defined either at its method level or at the class level that contains it.

The second statement will assign an aspect to any method that specifically declares the attribute at its method level.

The last statement will assign an aspect to any method that has the attribute declared at the class level.

One can also add an optional Having clause where you can access the public properties of the attribute


i.e. Having RoleType=“Admin“ and UseSecurity=true

This simply allows you to filter the assignment.

This addition was invaluable to my dynamic proxy and IMHO really helps bridge the gap between aspect based code and attribute based code. This gives me the big pro of having a simple, code-based declarative method of defining a behavior, and at the same time it keeps me from having to pay the penalty of going through reflection (and a special handling object) in order to process the attributes.

There are some pitfalls to this methodology, one of the largest is that since the metadata is defined as an attribute a re-compile is required to change it. I am at this point leaving it to the developer to decide when it is good to use the attribute and when it is bad :)

What is really nice about this is it allows me to define “Types“ of join points via the use of attributes. In other words this allows me to group method types which are similar points to simplify my aspect assignment (1 line instead of hundreds or thousands). It also allows me to go and refactor those places that were using attribute based programming to use aspects instead quite easily :D
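What the "Assign to Attribute ... Having" statements resolve to can be sketched with plain reflection. The attribute and helper names below are mine, purely for illustration of the matching rules, not the DSL's implementation:

```csharp
using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public class SecureAttribute : Attribute
{
    public string RoleType;
    public bool UseSecurity;
}

[Secure(RoleType = "Admin", UseSecurity = true)]
public class Account
{
    public void Close() {}

    [Secure(RoleType = "User")]
    public void View() {}
}

public class Assigner
{
    // "Assign to Attribute": method-level declaration wins, otherwise
    // fall back to the class-level one.
    public static SecureAttribute Find(MethodInfo m)
    {
        object[] own = m.GetCustomAttributes(typeof(SecureAttribute), false);
        if (own.Length > 0) return (SecureAttribute)own[0];
        object[] cls = m.DeclaringType.GetCustomAttributes(typeof(SecureAttribute), false);
        return cls.Length > 0 ? (SecureAttribute)cls[0] : null;
    }

    // The optional Having clause: filter on the attribute's public members.
    public static bool AdminHaving(MethodInfo m)
    {
        SecureAttribute a = Find(m);
        return a != null && a.RoleType == "Admin" && a.UseSecurity;
    }

    static void Main()
    {
        Console.WriteLine(AdminHaving(typeof(Account).GetMethod("Close"))); // True
        Console.WriteLine(AdminHaving(typeof(Account).GetMethod("View")));  // False
    }
}
```

Close inherits the class-level Secure(RoleType="Admin", UseSecurity=true) and matches; View declares its own method-level attribute with RoleType="User", so the Having filter rejects it.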


Anyone else have any thoughts on this?




Decoroxy - Dynamic Decorator

Tue, 31 Jan 2006 17:20:00 GMT

Originally posted on: am leaning towards re-implementing my dynamic proxy as open source (BSD license, of course). As with any open source project, my biggest problem was not a shortage of ideas but finding a really cool name that I could also have the domain for :) In this process I have come across something odd: in AOP I do not actually use dynamic proxies! I use dynamic decorators.

On the Proxy versus the Decorator, from the reference I was reading:

    The Proxy Pattern is similar in structure to the Decorator Pattern. Both describe how to provide a level of indirection to an object. Both keep a reference to another object to which requests are forwarded. The difference again is the intent. Like the Decorator, the Proxy composes an object and provides an identical interface to clients. Unlike the Decorator, the Proxy does not attach and detach properties dynamically and is not designed for recursive composition. The intent of the Proxy is to stand in for a subject when it is inconvenient or undesirable to access the subject directly, i.e. the subject may reside on a remote machine or have restricted access. In the Proxy pattern, the subject defines the key functionality and the proxy provides or refuses access to it. In the Decorator, the component provides a subset of the functionality and the decorators provide the rest. The Decorator addresses the situation where the object's total functionality may need to be determined at run-time. The Proxy focuses on the relationship between the proxy and its subject, which can be expressed statically. The differences between the Proxy and Decorator Pattern are significant, but that does not mean they cannot be combined. A Proxy-decorator might add functionality to a proxy; a Decorator-proxy might embellish a remote object.

It sure seems to me that what we are actually using is a dynamic decorator, not a dynamic proxy, in the first place. I mean, we are explicitly adding functionality to the object (aspects) at run time.

I think the key item in the list is "The Decorator addresses the situation where the object's total functionality may need to be determined at run-time." This is absolutely the case when we are dealing with aspects. One major problem I see with using true decorators is the subject setter; I currently don't use it, and I cannot in my mind come up with any way that I could possibly use it. In order for me to be able to use an instance setter I would need to cast the proxy to another type in order to be able to access it. This in my mind removes the transparency of the proxy itself. I should never have code that does something along the lines of:

    Foo foo = new Foo();
    IProxy fooproxy = ProxyFactory.CreateProxy(typeof(Foo));
    fooproxy.SetInstance(foo);

because I am now readily aware that I am not dealing with the original type. I preferred instead to make my objects immutable, as I never really saw them being reused anyways.

That being said, because I am not using an instance setter, am I actually using a decorator pattern?! One of the key points of the decorator pattern is the instance setter that allows for multiple subjects throughout the lifespan of the instance. Am I actually generating some weird form of hybrid between a proxy and a decorator (a decoroxy, perhaps)?

Anyways, I was able to register the domain and .net, so I am happy :)

I am considering some other changes to the ways I was doing things as I move things to OS. One thought that I have been mulling over for quite some time now is that the dynamic aggregation part used in mi[...]