
Frans Bouma's blog



The blog of Frans Bouma, creator and lead developer of LLBLGen Pro and ORM Profiler.



 



LLBLGen Pro v5.1 EAP 2 released!

Wed, 28 Sep 2016 12:50:13 GMT

Today we released the second ‘Early Access Program’ (EAP) build of LLBLGen Pro v5.1! Please see the previous post on the first EAP build and what’s included in that version. This version builds further on that foundation and includes new features (as well as all fixes made to EAP 1 and v5.0). Our EAP builds are ‘ready to release’ builds, with up-to-date documentation, and are fully tested. Updated packages (marked ‘Alpha-20160928’) have been pushed to NuGet as well for the folks who prefer referencing NuGet packages.

What’s included in EAP 2?

EAP 2 has the following new features, which are again all part of the LLBLGen Pro runtime framework. I’ll link to the documentation for details on the topics discussed below.

Plain SQL API

Using an ORM in general doesn’t confront you with plain SQL queries; often the APIs in ORMs don’t even allow you to specify a plain SQL query. This is OK for most situations, but there are occasions where you might need to run a hand-optimized plain SQL query or have to execute a SQL query which can’t be generated by the ORM query API. Since the arrival of micro-ORMs, which often have a plain SQL interface as their core query API, users of full ORM frameworks sometimes also use a micro-ORM ‘on the side’ to perform these plain SQL queries when needed. While this might work, it can also be a bit of a problem, as one can’t leverage aspects offered by the full ORM, like an active transaction or easy paging query generation. As we were already working on an addition to our POCO projection pipeline, we thought: why not open up the runtime some more to be able to project plain SQL resultsets to POCOs as well (and of course be able to execute SQL statements which don’t return a resultset)? This is the new Plain SQL API. All methods have async/await variants of course.

Executing a SQL statement

Executing a SQL statement is meant to execute a non-resultset SQL query, e.g. to update or delete some rows. This is easy to do with the ExecuteSQL method.

    Guid newId = Guid.Empty;
    using(var adapter = new DataAccessAdapter())
    {
        var q = @"SELECT @p0=NEWID();INSERT INTO MyTable (ID, Name) VALUES (@p0, @p1);";
        var idVar = new ParameterValue(ParameterDirection.InputOutput, dbType: DbType.Guid);
        var result = adapter.ExecuteSQL(q, idVar, "NameValue");
        newId = (Guid)idVar.Value;
    }

The example above inserts a row into MyTable and returns the generated NEWID() Guid through an output parameter. It passes the value ‘NameValue’ as a parameter value to the query as well. There’s a flexible way to specify parameters with just the values as arguments of ExecuteSQL, and if needed you can define a ParameterValue instance with parameter specifics, like direction, DbType, length, precision, scale etc. This is also the mechanism used to obtain the output parameter value after the query has completed.

Fetching a resultset

Of course you can also fetch a resultset and project it to POCO classes using the Plain SQL API. This is done using the FetchQuery method. See the following example:

    List<Customer> result = null;
    using(var adapter = new DataAccessAdapter())
    {
        result = adapter.FetchQuery<Customer>(
            "SELECT * FROM Customers WHERE Country IN (@p0, @p1) ORDER BY CustomerID DESC",
            "USA", "Germany");
    }

In the example above a query on the Customers table with a WHERE clause using two parameter values is projected onto the POCO class Customer. This is all nice and works great, but as the API is part of a full ORM, there’s more: we can leverage systems in the full framework.
Here’s the same example again, but this time it utilizes the resultset caching system in LLBLGen Pro and it also offers paging query creation for a specific offset / limit combination:

    List<Customer> result = null;
    using(var adapter = new DataAccessAdapter())
    {
        // fetch first 5 rows, cache resultset for 10 seconds.
        result = adapter.FetchQuery<[...]



LLBLGen Pro v5.1 EAP1 released!

Tue, 30 Aug 2016 13:51:24 GMT

Today we released our first ‘Early Access Program’ (EAP) build for LLBLGen Pro v5.1! When we moved to subscriptions (with perpetual licenses) with the release of v5.0, the one thing I wanted to get rid of was the long delay between versions: no more 1.5-2 years of development towards a massive release, but smaller releases which reach users quickly. So here is the first release of that. Additionally, we did a lot of work to make it release-ready. This means that the EAP build is a release like any other final release, including up-to-date documentation, and fully tested. This is another big step for us, as it means we can switch an EAP build to ‘RTM’ at any time. In the coming months we’ll release more builds until we reach RTM.

What’s included?

This first EAP release contains the following features. We’re focusing on our state-of-the-art ORM framework this time around for most of the features planned for v5.1 RTM.

Temporal (history) table support (SQL Server 2016 / DB2 10)

For select / fetch queries, the LLBLGen Pro runtime framework now supports temporal tables. A temporal table is a table with a coupled history table which is managed by the RDBMS: an update or delete of a row will copy the original row to the history table with two date markers to signal the period in which this row was valid. For more information about temporal tables in e.g. SQL Server, see this article. Temporal tables offer a transparent way to work with history data. This means you can now use temporal table predicates directly in Linq and QuerySpec (our fluent query API) queries to query the current data but also the history data. See this example:

    var q = from e in metaData.Employee
                        .ForSystemTime("BETWEEN {0} AND {1}",
                            fromDate, toDate)
            where e.EmployeeId == 1
            select e;

Here a Linq query is defined which will query for the employee data of the employee with id ‘1’, and all rows valid between fromDate and toDate are included. This means that if the employee data of this particular employee was updated between these two dates, the original data which was updated will be included in the resultset as well. On IBM DB2, LLBLGen Pro also supports Business Time temporal table predicates, something which isn’t supported by SQL Server 2016.

Table / View hints (SQL Server) and Index hints (MySQL)

Specifying a hint for the RDBMS query optimizer has been a requested feature for a long time, but I never found a proper way to make it easy to specify. With the temporal table support in place, the same mechanism can be used for specifying hints for tables / views, in the case of SQL Server, and indexes, in the case of MySQL. All other databases which support hints (Oracle and DB2 come to mind) aren’t supported here, as they require the hints to be present as comments in the projection of the SELECT statement, while this hint system works with hints specified on elements in the FROM clause. This isn’t that bad however, as hints on Oracle and DB2 are heavily discouraged by the vendors of these databases, so it’s unlikely we’ll add support for these particular hints later.
To specify a table / view hint in a Linq query (or QuerySpec query), you simply call an extension method with the hint as argument, as shown in the following example (SQL Server):

    var q = from c in metaData.Customer
                        .WithHint("NOLOCK")
                        .WithHint("FORCESEEK")
            join o in metaData.Order on c.CustomerId equals o.CustomerId
            where o.EmployeeId > 4
            select c;

Here the target mapped by the entity ‘Customer’ will receive the two hints specified in the SQL query, namely ‘NOLOCK’ and ‘FORCESEEK’. The target mapped by the entity ‘Order’ doesn’t receive these hints. Hints are a last resort to optimize a query and in general they should be left alone, but in cases where it can help a great deal, it’s good that an ORM Fram[...]



The .NET support black hole

Thu, 23 Jun 2016 12:49:56 GMT

Today I ran into a bit of an issue. A work item for LLBLGen Pro v5.1 is to support all the new features of SQL Server 2016. One of the features of SQL Server 2016 is ‘Always Encrypted’. You can enable this feature through the connection string, and after that all data-access is encrypted, no further coding needed. As this is a connection string setting, it’s enabled out of the box in every ORM out there, also in ours. That’s of course not the problem. The problem is giving the developer writing code which targets SQL Server 2016 more control over this feature.

Starting in .NET 4.6, the SqlClient API offers a way to specify when and when not to encrypt using SQL Server 2016. This is done through a setting in SqlCommand’s constructor: you can specify a SqlCommandColumnEncryptionSetting value which gives you control over when to encrypt and when not to, which could greatly enhance performance if you only partly encrypt your catalog. There’s something odd though: although SqlCommand has a property for this, ColumnEncryptionSetting, it’s read-only. Its backing variable is set through the constructor.

Now, why is this a problem, you ask? Well, unless your code creates SqlCommand instances directly, you can’t set the setting for a given SqlCommand instance: if you use DbProviderFactory, and most ORMs do, or if you use CreateCommand() on the connection object, you can’t set it. You can only set it if you directly use the constructor. Any database-generic code out there uses either DbProviderFactory or the CreateCommand() method on the connection object, and thus can’t use this feature.

The problem

Looking at this somewhat terribly designed SqlCommand API, I wondered: “OK, there’s a serious problem with this API, where can I give feedback so they can fix it?”. I couldn’t answer that. With a product you purchase from a vendor, you can go to that vendor’s support channel, ask them what they think should be done to get this resolved, and you eventually reach a conclusion. But here, I have literally no idea. With .NET Core, most development regarding .NET within Microsoft is focused on that, and a lot of airtime is given to it on GitHub, blog posts etc. But .NET full, e.g. v4.6.2, where do you go with an issue like this? Connect? Mail someone within Microsoft, hoping they’ll route it to some person who won’t delete it right away and will look at it?

About Connect I’ll be short: no way in hell am I going to spend another second of my time on this planet in that crappy system. Not only does the .NET team not reply to any issues there, I still have some open issues there which are years old and no one bothers to answer them. It’s like writing the problem into some text file and never looking at it again; same result. About emailing someone within Microsoft: that might work, but it also might not. I happen to know some people within Microsoft and I’m sure they’ll at least read the email, but it’s a silly way to give feedback: here we have a mega-corporation which makes billions of dollars each month and says it’s a developer-focused company, and you have to email your question to some employee and hope to get answers? How fucked up is that!

Now, my technical issue with SqlCommand is not something everyone will run into. That’s also not the point. The point is: if there’s an issue with the .NET BCL/API, there should be a clear path to the teams working on this codebase to give them feedback, report issues and get things fixed. Today there’s none.
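To make the SqlCommand asymmetry above concrete, here’s a minimal sketch of mine (not from the original post): the constructor overload introduced in .NET 4.6 can pass the encryption setting, while the database-generic DbProviderFactory path has no way to do so, because ColumnEncryptionSetting on the created command is read-only.

    // Sketch only: illustrates the API asymmetry described above.
    using System.Data.Common;
    using System.Data.SqlClient;

    class EncryptionSettingDemo
    {
        static SqlCommand CreateDirectly(SqlConnection connection)
        {
            // Only this constructor path can specify the encryption setting (.NET 4.6+).
            return new SqlCommand("SELECT * FROM MyTable", connection, null,
                                  SqlCommandColumnEncryptionSetting.Enabled);
        }

        static DbCommand CreateGenerically(DbProviderFactory factory)
        {
            // The database-generic path most ORMs use: there is no parameter to pass the
            // setting, and ColumnEncryptionSetting on the created command is read-only.
            DbCommand command = factory.CreateCommand();
            command.CommandText = "SELECT * FROM MyTable";
            return command;
        }
    }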
Oh of course, if things related to .NET Core pop up, we can open an issue on GitHub and start the conversation there, but this isn’t related to .NET Core: SqlClient on .NET core doesn’t contain any encryption related code (as far as I can tell, .NET Core’s SqlCommand lacks code related to encryption which is in the SqlCommand class in the reference source), so this issue is out of scope for .NET Core. (As an aside, the SqlClient team isn’t really responsive on .NET C[...]



LLBLGen Pro v5.0 RTM has been released!

Tue, 26 Apr 2016 13:32:38 GMT

After 1235 commits and 20 months of full time development, the work is done and we’ve released LLBLGen Pro v5.0! To see what’s new, please go to the official ‘New features’ page on our website, or read about them on this blog in the Beta announcement post.

I’m very happy with this release and how it turned out. I’m confident you’ll be too. (image) We have much more planned in the upcoming releases, so stay tuned!

To all the beta-testers, support team and others who have helped us to make it the best release we’ve ever done: Thank you!




Introducing DocNet, a static documentation site generator

Wed, 10 Feb 2016 14:27:48 GMT

Update: Since release, I’ve updated the repository with a GitHub Pages site containing DocNet’s own documentation: http://fransbouma.github.io/DocNet/. It can be used as a showcase of how things look in the output and it is now also the main documentation site for DocNet. I’ve also added Tabs support as a Markdown extension and fixes to the theme!

Original post: I wrote another generator! This time it’s for generating a static site for user documentation from markdown files. It’s called DocNet and it’s Open Source: https://github.com/FransBouma/DocNet. It took me a good 7 days to build it. We at Solutions Design use it to create our user documentation for the upcoming LLBLGen Pro v5.0 release. The documentation for the designer is currently being worked on and gives a good overview of how the output of DocNet looks: LLBLGen Pro v5.0 designer documentation generated by DocNet (work in progress!). The GitHub repo contains a readme with all the options and how to use it, so I won’t repeat that here. It’s not complete yet, but it can already produce output we can live with. If you want to add features and work on this too, please feel free to provide a PR.

“Why?”

LLBLGen Pro has existed for 13 years now. In those years the documentation has been shipped in .chm files and in recent years also as a static website generated from the same html files used to create the .chm file. If you’re not familiar with .chm files, they’re compacted archives, built with a compiler from Microsoft which hasn’t been updated in … 10 years?, and they use IE to render the pages (IE6’s render engine, that is). In short, the format is completely outdated. Nowadays, documentation is shipped as HTML5 content, often online only, as a normal website. Additionally, documentation is more and more written in Markdown. We wanted that too for our (massive) documentation for LLBLGen Pro. We had one extra requirement: the site should be the same locally as online, so people who have no internet connection at a given moment (airplane, train, dead wifi) can still browse the help locally. So in short:

  • Documentation has to be written in Markdown
  • The documentation site is browsable locally and online, with the same content
  • Existing documentation files have to be convertible to Markdown
  • It has to use standard HTML5 / CSS / JS output
  • It has to look modern.

For our class library reference documentation I use Sandcastle Help File Builder, which is a brilliant piece of software. It works, the output is known, no surprises, so that’s covered. The user documentation, however, is another story. There are many, many different tools on the market to produce user documentation for you, in a lot of different formats. There are two categories: 1) a big IDE which manages content / topics / structure and lets you edit the content per topic in some form of editor, and 2) a tool which consumes a set of files and produces a bunch of html files which form your user documentation. The tools in category 1) all do their job fairly well: some produce uglier output than others, but in general they are all capable of producing user documentation that’s readable and usable. As we already have a massive amount of documentation in html files (which are hand-written in the vs.net html editor), the tools in this category didn’t really fit. This isn’t their fault, it’s simply our situation. The tools in category 2) are most of the time written in python, javascript or … php! Yes, I was surprised too. The quality of these tools is often ‘ok’, but they also need some work on your part.
The tool which actually produced decent output was MkDocs, and it’s the one I had chosen at first, but its local site doesn’t work at all, and its ReadTheDocs theme (the one worth using) has ‘always expanded ToC entries’, which isn’t usable if you have many topics like we do. So in the end, I thought: “what can I build in a couple of days?” After all, it’s markdown in, htm[...]



LLBLGen Pro v5.0 Beta has been released!

Mon, 01 Feb 2016 12:26:25 GMT

Since the first commit into the v5.0 fork back in the Fall 2014, we’ve been hard at work to make LLBLGen Pro v5.0 a worthy successor of the highly successful v4.x version. Today, we’ve released our hard work in beta, feature complete form: LLBLGen Pro v5.0 beta is now available.

For the features which are new, I’d like to refer to the two posts I did on the two CTPs we released: CTP1 features and CTP2 features, with one exception: the Home tab we added in CTP1 has been reworked into a different Home tab, as shown below.

(image)

The full change log since CTP2 can be found here.

Besides the ‘headliner features’ like the central relational model data synchronization and the derived model support for DTO and Document Databases, what I’m particularly pleased with is the tremendous amount of small things we managed to add: from simple things like defining a default database type for a .NET type when you do model-first development (so you can easily control if e.g. a .NET DateTime typed entity field maps to a Date DB type, Time DB type, or other) to things you’d expect like wiring up the references of multiple generated VS.NET projects automatically, or the 30%-40% faster linq/queryspec projection fetches in our runtime framework, making it faster than a lot of the well-known microORMs.

If you’re a v4.x licensee of LLBLGen Pro, the beta is available to you and can be downloaded from the ‘My Account’ page on our website.




Raw .NET Data Access / ORM Fetch benchmarks of 16-dec-2015

Wed, 16 Dec 2015 11:03:00 GMT

It’s been a while and I said before I wouldn’t post anything again regarding data-access benchmarks, but people have convinced me to continue with this as it has value, and to ignore the haters. So! Here we are. I expect you to read / know the disclaimer and understand what this benchmark is solely about (and thus also what it’s not about) in the post above. The RawDataAccessBencher code has been updated a couple of times since I posted the last time, and it’s more refined now, with better reporting, more ORMs and more features, like eager loading. The latest results can be found here. A couple of things of note, in random order:

  • Entity Framework 7 RC1 (which we used here) is slow, but later builds are faster. It’s still not going to top any chart, but it’s currently faster than EF6, according to tests with a local build. We’ll update the benchmark with results from RC2 when it’s released.
  • LLBLGen Pro v5.0, which is close to beta, has made a step forward with respect to performance, compared to the current version, v4.2. I’ve optimized in particular the non-change-tracking projections, as there was some room for improvement without cutting corners with respect to features. The results shown are achieved without generating any IL manually. The performance is better than I’d ever hoped to achieve, so I’m very pleased with the result.
  • The NHibernate eager load results are likely sub-optimal, looking at the queries, however I couldn’t find a way to define a more optimal query in their (non-existent) docs. If someone has a way to create a more optimal query, please post a PR on GitHub.
  • The DNX build of the benchmark currently doesn’t seem to work; at least I can’t get it to start. This is likely due to the fact it was written for Beta8, while the current bits are on RC1 and the tooling changed a lot. As their tooling will change again before RTM, I’ll leave it at this for now and will look at it when DNX RTMs.
  • The eager loading uses a 3-node graph: SalesOrderHeader (parent) and two related elements: Customer (m:1, so each SalesOrderHeader has one related Customer) and SalesOrderDetail (1:n). The graph has two edges, which means frameworks using joins run at a bit of a disadvantage, as the shortcoming of that approach is brought to light. The eager load benchmark fetches 1000 parents.
  • The eager loading only benches change tracking fetches and only on full ORMs. I am aware that e.g. Dapper has a feature to materialize related elements using a joined set, however it would require pre-defining the query on the related elements, which is actually a job the ORM should do, hence I decided not to do this for now. Perhaps in the future.
  • The new speed king seems to be Linq to DB; it’s very close to the hand-written materializer, which is a big achievement. I have no idea how it stacks up against the other micros in terms of features, however.
  • (Update) I almost forgot to show an interesting graph, taken with the dotMemory profiler from JetBrains during a separate run of the benchmarks (so not the run producing the results, as profiling slows things down). It clearly shows Entity Framework 7 RC1 has a serious memory leak:
  • (Update) As some people can’t view pastebin links, I’ve included all results (also from the past) as local files in the github repository. [...]



What’s new in LLBLGen Pro v5.0 CTP 2

Fri, 30 Oct 2015 11:01:55 GMT

We’ve released the second CTP for LLBLGen Pro v5.0! Since the first CTP, which was released back in March, we’ve been hard at work to implement the features we wanted for v5.0. It’s taken a bit longer than expected as the main feature, Derived Models (more on that below), turned out to be a bigger feature than we initially thought and it affected more aspects of the designer than anticipated at first. Nevertheless, I’m very happy with the result, as it turned out even better than I imagined. This CTP is the last one before beta and RTM. It’s open for all v4 licensees and comes with its own temporary license which expires on December 31st, 2015. Beta and RTM are expected in Q1 2016. To download, log into our website and go to ‘My Account->Downloads->4.2->Betas’.

So what’s new since CTP 1? I’ve described the highlights below. Of course the usual pack of bug fixes and improvements across the board is also included in this CTP.

Derived Models

Ever since v4 I wanted to do more with the designer, to do more with the entity model a user defines using it: we have all this model information contained in a single place and there should be more ways to leverage that than just as a mapping model to use with an ORM. The idea started when I looked into usage patterns where ORM models are used. Nowadays most people define a different model in their application which is used to transport data from the ORM to the UI, e.g. across service boundaries or inside an MVC model class. This model is often hand-written, and tied to the entity classes using hand-written queries or AutoMapper mapping files. In other words: a hand-written chain of dependencies stored in multiple places which can break at any moment without anyone noticing.

Another use case of models defined on top of an entity model is with document databases. Web applications nowadays aren’t just monolithic stacks with one database and one runtime, they’re built as a group of technologies working together in harmony: a UI on the client in JS, a backend targeting multiple databases using multiple paradigms, etc. It’s more and more common to store non-volatile data in de-normalized form inside a document database for faster querying, and to update that data on a regular basis, at runtime. This data is e.g. retrieved from the relational database which is used with the ORM and entity classes.

For these two scenarios I’ve developed the Derived Models feature: Derived Models are models of hierarchical elements defined on top of the entity model. They define an optionally de-normalized form of the underlying abstract entity model. This all might sound complicated, but it’s actually very straightforward. Let’s look at a picture of the editor to get started (click the picture below for a bigger version). For the picture, I’ve picked a complicated scenario as it illustrates most of what you can do with the editor. On the far left of the designer you see the Project Explorer, which is the overview of the elements in your project. The project loaded has an entity model with entities in various inheritance hierarchies, and a Derived Model called Docs. The Derived Model uses the target framework Document Database. We’ll get to that in a minute. The editor shown in the screenshot shows the derived element BoardMember, which is derived from the entity BoardMember. On the left of the editor you see the entity hierarchy in tree form: you can navigate from the root entity BoardMember to related entities infinitely.
On the right you’ll see the shape of the derived element BoardMember. When you check a checkbox on the left, it’s included in the shape on the right. That’s of course not all. You can also de-normalize fields. In the example above, you’ll see in the shape of the derived element a de-normalized field, namely BoardMember.ManagesDepartmentName. This field is [...]



LLBLGen Pro v5.0 CTP 1 released!

Tue, 17 Mar 2015 15:20:11 GMT

We’ve released LLBLGen Pro v5.0 CTP 1! It’s a Community Technical Preview (CTP) of the upcoming v5.0, which has been in development since fall 2014. The CTP is open for all v4.x customers (it’s in the customer area, in the v4.2, betas section) and comes with a time-limited license which expires on June 1st, 2015. As this isn’t a full beta, (many) more features will be added before beta hits. Below I’d like to show some of the new features in action. Click the screenshots for a bigger version.

New, skinnable UI

A new UI was long overdue. The current (v4.2) version still uses VS.NET 2008-like toolbars/controls and it looks just… dated. So we ported the complete UI to DevExpress controls (still winforms though), as some of our controls were already based on those. The screenshots below show the default VS.NET 2013 white theme.

New Relational Model Data Sync System

In short: the Sync system. Sync replaces both database-first related actions like refresh catalog and model-first related actions like auto-map and adjust relational model data. It allows a sync source to be set for a schema, which controls the source from where table-related relational model data is obtained: the database or the entity model. Stored procedures / views / TVFs are always obtained from the database. Everything is managed from a single tab, the Sync Relational Model Data tab, which is opened by clicking the sync button on the toolbar or the menu item in the project menu. A big benefit of the new system is that it will function even when the project contains errors: it’s no longer necessary to correct project elements before a refresh. It also doesn’t adjust relational model data on database-synced schemas, so it’s no longer required to export DDL SQL before code generation because the validation adjusted some fields based on a change.

Revamped Home tab

The Home tab now shows active tiles the user can click to navigate through one of the scenarios (database first / model first and new / existing project). Which tiles are shown depends on what the user did last and the state of the designer. It’s an easy way to get started with the designer and replaces the webpage-based home tab, which was static. It acts like a wizard in disguise: you can do the basic tasks right from the home tab, by clicking tiles. Here I’ve opened a database-first project and the state of the designer now shows different tiles with different actions.

Search in Project Explorer / Catalog Explorer

Directly available in the Project Explorer and Catalog Explorer are the search boxes: type in any string and the nodes matching the string are kept, all other nodes are filtered out. This makes it easy to find elements. Removing the search restores the tree as it was before the search. In the screenshot below I’ve searched for ‘Sales’ in both the project explorer and the catalog explorer. It shows all nodes matching the specified string, including parent nodes (to give context), and hides the other nodes. The more advanced, LINQ query based search is still available in the designer, but this dedicated search on the two explorer panes is easier to use and requires no extra forms to navigate.

Real Time Validation

The designer got a real-time system to schedule and run tasks at will through several dedicated dispatch queues. This greatly helps offload work from the UI thread without having to mess with multi-threading, as it utilizes the .NET 4.5 Task Parallel Library.
Configuring work is as easy as defining an event, a handler and the dispatch queue which should run the handler call; the system takes care of the rest, including overflow protection (so only a limited number of calls is allowed per interval). The real-time validation in action: So these are the highlights of this CTP (there are many tiny improvements under the hood too). We’ve much[...]
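As an aside from me (this is not the designer’s actual code): a minimal sketch of what such a dispatch queue with overflow protection could look like on top of the TPL, assuming a simple ‘at most N handler calls per interval’ policy.

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    // Illustrative sketch only: a tiny dispatch queue in the spirit described above.
    // Work items are queued and executed on a background task; a per-interval cap
    // provides simple overflow protection.
    public class ThrottledDispatchQueue
    {
        private readonly ConcurrentQueue<Action> _work = new ConcurrentQueue<Action>();
        private readonly int _maxCallsPerInterval;
        private readonly TimeSpan _interval;

        public ThrottledDispatchQueue(int maxCallsPerInterval, TimeSpan interval)
        {
            _maxCallsPerInterval = maxCallsPerInterval;
            _interval = interval;
            Task.Run(() => ProcessLoopAsync());
        }

        // Called from the UI thread, e.g. when a project element changes and needs re-validation.
        public void Enqueue(Action handler)
        {
            _work.Enqueue(handler);
        }

        private async Task ProcessLoopAsync()
        {
            while(true)
            {
                // Run at most _maxCallsPerInterval handlers, then wait out the interval.
                int executed = 0;
                Action handler;
                while(executed < _maxCallsPerInterval && _work.TryDequeue(out handler))
                {
                    handler();
                    executed++;
                }
                await Task.Delay(_interval);
            }
        }
    }

    // usage (hypothetical names): var queue = new ThrottledDispatchQueue(25, TimeSpan.FromMilliseconds(200));
    //                             queue.Enqueue(() => ValidateChangedElement());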



LLBLGen Pro Runtime Libraries and ORM Profiler interceptors are now available on nuget

Tue, 10 Feb 2015 16:33:58 GMT

I caved. For years I’ve denied requests from customers to publish the LLBLGen Pro runtime framework assemblies on NuGet, for the reason that if we had to introduce an emergency fix in the runtimes which also required template changes, people with dependencies on the NuGet packages would have a problem. While this might be true in theory, in practice it’s so uncommon that this will happen that it more and more turned into an excuse. Add to that the fact that customers started publishing the runtimes themselves on NuGet, and it was time to bite the bullet and publish the runtimes ourselves, officially. So we did. At the same time we published the interceptor assemblies of ORM Profiler on NuGet.

The URLs

For LLBLGen Pro:

  • https://www.nuget.org/packages/SD.LLBLGen.Pro.ORMSupportClasses/
  • https://www.nuget.org/packages/SD.LLBLGen.Pro.ORMSupportClasses.Web/
  • https://www.nuget.org/packages/SD.LLBLGen.Pro.DQE.Access/
  • https://www.nuget.org/packages/SD.LLBLGen.Pro.DQE.DB2/
  • https://www.nuget.org/packages/SD.LLBLGen.Pro.DQE.Firebird/
  • https://www.nuget.org/packages/SD.LLBLGen.Pro.DQE.MySql/
  • https://www.nuget.org/packages/SD.LLBLGen.Pro.DQE.OracleMS/
  • https://www.nuget.org/packages/SD.LLBLGen.Pro.DQE.OracleODPNET/
  • https://www.nuget.org/packages/SD.LLBLGen.Pro.DQE.PostgreSql/
  • https://www.nuget.org/packages/SD.LLBLGen.Pro.DQE.SqlServer/
  • https://www.nuget.org/packages/SD.LLBLGen.Pro.DQE.SybaseAsa/
  • https://www.nuget.org/packages/SD.LLBLGen.Pro.DQE.SybaseAse/

For ORM Profiler:

  • https://www.nuget.org/packages/SD.Tools.OrmProfiler.Interceptor.EFv6/
  • https://www.nuget.org/packages/SD.Tools.OrmProfiler.Interceptor.NET45/
  • https://www.nuget.org/packages/SD.Tools.OrmProfiler.Interceptor/

Who are these assemblies for?

The assemblies are for customers who want to stay up to date with the latest runtimes. Every time we publish a new build, the runtimes and interceptor dlls are automatically updated with the latest build. We never introduce breaking changes in released assemblies, so they’re safe to use in code and to update to the latest version.

How are they versioned?

The LLBLGen Pro Runtime Framework assemblies are versioned as 4.2.yyyymmdd, where yyyymmdd is the build date. The ORM Profiler interceptors are versioned as 1.5.yyyymmdd.

What’s in the packages?

The DQE packages come with a single DLL, the DQE dll, and have a dependency on the ORMSupportClasses package. The ORMSupportClasses package contains both the .NET 3.5 build and the .NET 4.5 build with async support: if your project targets .NET 4.5 you will automatically reference the .NET 4.5 build with async support. The Interceptor packages contain the interceptor dll and support dlls which don’t need their own separate package. The Entity Framework interceptor has a dependency on Entity Framework 6.

Do the DQE packages depend on ADO.NET provider packages?

No: all DQEs work with the DbProviderFactory system and the using project doesn’t need to reference an ADO.NET provider assembly. The ADO.NET provider has to be present on the system, but as the provider assembly doesn’t need to be referenced by the VS.NET project, the DQE package doesn’t need a direct dependency on the related ADO.NET provider package; such a dependency would mean the ADO.NET provider dll would be directly referenced after the DQE package has been installed.

Hope this helps the customers out who have asked us for so long for this feature [...]
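The DbProviderFactory mechanism mentioned above is standard ADO.NET; here’s a minimal sketch of mine (not LLBLGen Pro code; ‘MyDb’ and ‘MyTable’ are placeholders) of why the provider only has to be registered and present at runtime, not referenced at compile time:

    using System;
    using System.Data.Common;

    class ProviderFactoryDemo
    {
        static void Main()
        {
            // The provider is looked up by its invariant name at runtime (registered in
            // machine.config / app.config), so no compile-time reference to the provider
            // assembly is needed. This is why the DQE packages have no provider dependency.
            DbProviderFactory factory = DbProviderFactories.GetFactory("System.Data.SqlClient");
            using(DbConnection connection = factory.CreateConnection())
            {
                connection.ConnectionString = "data source=.;initial catalog=MyDb;integrated security=SSPI";
                DbCommand command = factory.CreateCommand();
                command.Connection = connection;
                command.CommandText = "SELECT COUNT(*) FROM MyTable";
                connection.Open();
                Console.WriteLine(command.ExecuteScalar());
            }
        }
    }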



“.NET Core is the future”, but whose future is that?

Tue, 09 Dec 2014 10:53:00 GMT

It’s likely you’ve heard about Microsoft’s release of the .NET Core source code, their announcement of ASP.NET vNext and the accompanying PR talk. I’d like to point to two great articles first which analyze these bits without being under the influence of some sort of Kool-Aid: “.NET Core: Hype vs. Reality” by Chris Nahr and “.NET Core The Details - Is It Enough?” by Mike James. I don’t have a problem with the fact that the ASP.NET team wants to do something about the performance of ASP.NET today and the big pile of APIs they created during the past 12-13 years. However, I do have a problem with the following:

“We think of .NET Core as not being specific to either .NET Native nor ASP.NET 5 – the BCL and the runtimes are general purpose and designed to be modular. As such, it forms the foundation for all future .NET verticals.”

The quote above is from Immo Landwerth’s post I linked above. The premise is very simple, yet has far-reaching consequences: .NET Core is the future of .NET. Search for ‘future’ in the article and you’ll see more references to this remark besides the aforementioned quote. Please pay extra attention to the last sentence: “As such, it forms the foundation for all future .NET verticals”. The article is written by a PM, a person who’s paid to write articles like this, so I can only assume what’s written there has been eyeballed by more than one person and can be assumed to be true.

The simple question that popped up in my mind when I read about ‘.NET Core is the future’ is: “if .NET Core is the future of all .NET stacks, what is going to happen with .NET full and the APIs in .NET full?” A simple question, with a set of simple answers. Either .NET Core + the new framework libs will get enough body and it will simply be called ‘.NET’, and what’s left is sent off to bit heaven, so stuff that’s not ported to .NET Core nor the new framework libs is simply ‘legacy’ and effectively dead. Or .NET Core + the new framework libs will form a separate stack besides .NET full and they will co-exist, like there’s a stack for Store apps, for Phone etc. Of course there’s also the possibility that .NET Core will follow the fate of Dynamic Data, Webforms, WCF RIA Services and WCF Data Services, to name a few of the many dead and burned frameworks and features originating from the ASP.NET team, but let’s ignore that for a second.

For 3rd party developers like myself who provide class libraries and frameworks to be used in .NET apps, it’s crucial to know which one of the above answers will become reality: if .NET Core + the new framework libs is the future, sooner or later all 3rd party library developers will have to port their code over, and the rule of thumb is: the sooner you do that, the better. If .NET Core + the new framework libs will form a separate stack, it’s an optional choice and therefore might not be a profitable one. After all, the amount of people, time and money we can spend on porting code to ‘yet another platform/framework’ is rather limited if we compare it to a large corporation like Microsoft.

Porting a large framework to .NET Core, how high is the price to pay?

For my company, I develop an entity modeling system and O/R mapper for .NET: LLBLGen Pro. It’s a commercial toolkit that’s been on the market for over 12 years now, and I’ve seen my fair share of frameworks and systems come out of Microsoft which were positioned as essential for the .NET developer at that moment and crucial for the future.
.NET Core is the base for ASP.NET vNext and positioned to be the future of .NET and applications on .NET Core / ASP.NET vNext wil[...]



Greener grass

Tue, 23 Sep 2014 11:20:09 GMT

This morning I read the blog post 'Life with a .NET' by Jon Wear. It's about leaving .NET / the Microsoft platform for the great unknown 'outside the Microsoft world' universe, and it's a great read. It made me reflect on my (rather secret) journey in that same universe outside everything Microsoft this summer.

After I finished LLBLGen Pro v4.2 this summer, I fell into the usual 'post-project' dip, where everything feels 'meh' and uninteresting. Needless to say I was completely empty, and after 12-13 years of doing nothing but .NET / C# / ORM development, I didn't see myself continuing on this path. However, I also didn't see myself leaving it, for the obvious reason that it's the place where my life's work lives. Rock, meet Hard Place. This summer I took a journey to rediscover the love I once had for writing code, starting with Go and Linux, after that Objective-C, Mac OS X / Cocoa, and coming back full circle to .NET with the Task Parallel Library (TPL). It's an unusual trip, but I didn't really know what I was looking for, what it was that I needed to be happy to write some code again, so I was open to anything.

In my career I've learned (and luckily also forgotten) a lot of different programming languages and platforms. After 20 years of professionally using all those languages and platforms for shorter or longer periods of time I can conclude: they are all just a tool to get to your goal, they're not the actual goal themselves. I already knew this of course when I went into this journey, so learning Go was, in hindsight, more of a 'let's do this, see where it leads me' kind of thing than a real move to Go. After learning the language and working with the tools available, I realized it wasn't the world I wanted to be in. The main reason was that I develop and sell tools for a living, I'm not a contractor, and Go's commercial ecosystem is simply not really there. After my Go adventure I had learned a new language but nothing of what I needed to get past my problem.

To learn a language and platform, it's best to use it in a real project. Some time ago I had an idea for an app for musicians (I'm an amateur guitarist) on OS X. This was the perfect opportunity to learn a new language and platform, so I made the radical move to learn Objective-C with Xcode, targeting OS X. I have to say, this was a true struggle. Xcode was 'OK', but Objective-C was something I hated from the start. Yeah, I know, Xamarin etc., but I didn't want to use that, it would still be C# and I did that already all day long. I know Apple has released a new language (Swift), but at the time I sank my teeth into Objective-C it was still in beta, and I thought it would be a good idea to learn Objective-C to understand the platform better anyway. Besides the Objective-C syntax (oh boy, who cooked that up), I was also faced with a rather unfamiliar framework: Cocoa. After some reading, Cocoa looked similar to the frameworks we have on Windows and Linux/X, but one thing stood out: its dispatch queues and Grand Central Dispatch. For my app idea I needed a lot of parallel processing, and the queues made this easy, as in: you could think about parallel work in a natural way: this is a piece of work I want to run in parallel with what you're running already, take care of the rest for me, including work stealing, scheduling, the works. It matches what modern 3D engines do on multicore CPUs/GPUs: chop up the work into small chunks and schedule those on the available cores.
Suddenly I got new ideas on how to do things in parallel in my designer and, more importantly, how to do things in real time. To do that, I needed dispatch queues on .NET. I realized that .NET has this already in the form of the Task Parallel Library (TPL), since .NET 4.0. Strange how things like that are comp[...]
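A small aside from me (not from the original post): the 'chop the work into small chunks and let the scheduler spread them over the available cores' idea maps directly onto the TPL. A minimal sketch:

    using System;
    using System.Threading.Tasks;

    class ChunkedWorkDemo
    {
        static void Main()
        {
            var results = new double[1000000];
            // Partition the work into chunks and schedule them on the thread pool;
            // the TPL takes care of work stealing and using the available cores.
            Parallel.For(0, results.Length, i =>
            {
                results[i] = Math.Sqrt(i); // placeholder for real per-item work
            });
            Console.WriteLine(results[results.Length - 1]);
        }
    }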



Reply to "What ORMs have taught me: just learn SQL"

Tue, 05 Aug 2014 10:53:54 GMT

This is a reply to "What ORMs have taught me: just learn SQL" by Geoff Wozniak. I've spent the last 12 years of my life writing ORMs and entity modeling systems full time, so I think I know a thing or two about this topic. I'll briefly address some of the things mentioned in the article. Reading the article I got the feeling Geoff didn't truly understand the material, what ORMs are meant for and what they're not meant for. It's not the first time I've seen an article like this and I'm convinced it's not the last. That's fine; you'll find a lot of these kinds of articles on many frameworks/paradigms/languages etc. in our field. I'd like to add that I don't know Geoff and therefore have to base my conclusions on the article alone.

Re: intro

The reference to the Neward article made me chuckle: sorry to say it, but bringing that up always gives me the notion one has little knowledge of what an ORM does and what it doesn't do. An ORM is just a tool to translate between two projections of the same abstract entity model (class and table, which result in instances: object and table row); it doesn't magically make your crappy DB look like one designed by CELKO himself, nor does it magically make your 12-level deep, 10K-object wide graph persist to tables in a millisecond as if there was just 1 table. Neither will SQL for that matter, but Geoff (and Neward before him) silently ignores that. An ORM consists of two parts: a low-level system which translates between class instances and table rows to transport the entity instances (== the data) back and forth, and a series of sub-systems on top of that to provide entity services (validation, graph persistence, unit of work, lazy / eager loading etc.). It is not some sort of 'magic connector' which eats object graphs and takes care of transforming those to tabular data of some sort which you don't want to know anything about. It also isn't a 'magic connector' which reads your insanely crappy relational model into a dense object graph as if you read the objects from memory.

Re: Attribute Creep

He mentions attribute creep (more and more attributes (== columns) per relation (== table)) and FKs in the same section, however I don't think one is related to the other. Having wide tables is a problem, but it's a problem regardless of what you're using as a query system. Writing projections on top of an entity model is easy, if your ORM allows you to, but even if it doesn't, the wide tables are a problem of the way the database is set up: they'll be a problem in SQL as well as in an ORM. What struck me as odd was that he has wide tables and also a problem with a lot of joins, which sounds like he either has a highly normalized model, which should have resulted in narrow tables, or uses deep inheritance hierarchies. Nevertheless, if a projection requires 14 joins, it requires 14 joins: the data itself isn't obtainable in any other way, otherwise it would be doable through the ORM as well (as any major ORM allows you to write a custom projection with joins etc. to obtain the data, materialized in instances of the class type you provide).
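As an illustration of the kind of custom projection just mentioned, here's a generic LINQ sketch of mine (not tied to any particular ORM's API; in a real ORM the sources would be IQueryable<T> sequences mapped to tables, and the query shape would be translated to one SQL statement):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class Order { public int OrderId; public int CustomerId; public decimal TotalDue; }
    public class Customer { public int CustomerId; public string CompanyName; }
    // hand-defined result class for the projection
    public class OrderSummary { public int OrderId; public string CustomerName; public decimal Total; }

    class ProjectionDemo
    {
        static void Main()
        {
            // In-memory lists keep the sketch self-contained; the join + projection shape
            // is what matters here, not the data source.
            var orders = new List<Order> { new Order { OrderId = 1, CustomerId = 10, TotalDue = 99.50m } };
            var customers = new List<Customer> { new Customer { CustomerId = 10, CompanyName = "Acme" } };

            var summaries = (from o in orders
                             join c in customers on o.CustomerId equals c.CustomerId
                             select new OrderSummary { OrderId = o.OrderId, CustomerName = c.CompanyName, Total = o.TotalDue })
                            .ToList();

            foreach(var s in summaries)
            {
                Console.WriteLine("{0}: {1} - {2}", s.OrderId, s.CustomerName, s.Total);
            }
        }
    }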
It's hard to ignore the fact that the author might have overlooked easy-to-use features (which Hibernate provides) to overcome the problems he ran into, and at the same time it's a bit odd that a highly normalized model is the problem of the ORM but won't be a problem when using SQL (which has to work with the same normalized tables). He says: "Attribute creep and excessive use of foreign keys shows me is that in order to use ORMs effectively, you still need to know SQL. My contention with ORMs is that, if you need to know SQL, just use SQL since it prevents the need to know how non-SQL gets translated to SQL." I agree with the fact that you still need to kn[...]



The gift that keeps on giving: Windows Store Accounts

Tue, 29 Jul 2014 09:03:39 GMT

In 2012, I thought it might be a good idea to register for a Windows Store Account, oh sorry, 'Windows Developer Services account'. As you might recall, signing up was a bit of a pain. After a year, I decided to get rid of it as I didn't do anything with it, nor did I expect to do anything with it in the future, and as it costs money, I wanted to close the account. That too was a bit of a pain.

To sign up for a Windows Store Account / Windows Developer Services account, Microsoft outsources the verification process to Symantec. The verification process is to make sure that the person who signed up (me) really works at company X (I even own it), and Symantec is seen by Microsoft as up to the task to do that. As you can read in my sign-up blog post, the process includes Symantec contacting a person other than the person who registered for a company, who also has to be entitled to confirm that I am who I am. Is Symantec, a totally different company than Microsoft, really up to the task? Well, let's see, shall we?

As you can read above, I closed my Windows Store Account almost a year ago. One would think that by now Microsoft would have sent Symantec a memo in which they state that the individual 'Frans Bouma' is no longer a Windows Store developer card-carrier. In case they have (which I can't verify, pun intended), Symantec has a lousy way of keeping track, as last week my company received a lovely request from Symantec to verify with them whether 'Frans Bouma' was indeed working for my company and whether I was who I said I was. You know, for the Windows Developer Services account.

Now the following might read like I stepped into the oldest phishing trap in the book, but everything checked out properly: we use plain-text email only, copied URLs over, and the URLs were simple and legit. We first thought it was spam/phishing, so we ignored it. But this morning a new email arrived as a reminder. So we painstakingly went over every byte in the email and headers. The headers checked out (all routed through Verisign, now part of Symantec, and Symantec itself), the URLs in the email checked out (we only look at plain-text emails). The email was sent to the same person who verified me 2 years ago, and we concluded it must be legit. We had a good laugh about it, but what the heck, let's verify again. How would that work exactly, that verification process?

So we copied the URL from the plain-text version of the email (which was a simple URL into Symantec) to a browser, it arrived at Symantec, listed info about my account, and all that's there to be done is click the verify button. It's laughably simple: just click a button! I do recall the first time it was a phone call, but instead of getting rid of this whole Symantec bullshit, Microsoft apparently decided that clicking a button instead is equal to 'making things simpler'. After a couple of minutes, I received an email in my inbox that cheered 'congratulations!': I was re-verified, my Microsoft Developer Services account was renewed and I could keep developing apps for the Windows Store. But… I ended my account almost a year ago? Or did I? To verify whether I really got rid of this crap or not, I went to the sites I visited before to register and end the account, but they only showed me Xbox Live stuff, no developer account info.
Headers of the reply email:

    Received: from spooler by sd.nl (**********************); 29 Jul 2014 10:03:43 +0200
    X-Envelope-To: frans********************
    Received: from authmail1.verisign.com (69.58.183.55) by ********************** (***********************) with Microsoft SMTP Server id 14.3.174.1; Tue, 29 Jul 2014 10:06:08 +0200
    Received: from smtp5fo-d1-inf.sso-fo.ilg1.vrsn.com (smtp5fo-d1-inf.sso-fo.ilg1.vrsn.com [[...]



LLBLGen Pro v4.2 RTM has been released!

Wed, 02 Jul 2014 11:33:08 GMT

We've released LLBLGen Pro v4.2 RTM! v4.2 is a free upgrade for all v4.x licensees and if you're on v3.x, you can upgrade with a discount.

For what's new, I'd like to refer to the what's new page on the LLBLGen Pro website. (image)




LLBLGen Pro v4.2 BETA has been released

Mon, 02 Jun 2014 11:53:47 GMT

This morning we've released LLBLGen Pro v4.2 BETA! The beta is available to all v4 customers and can be downloaded from the customer area -> v4.2 section. Below is the extensive list of new / changed features. Enjoy!

LLBLGen Pro v4.2 beta, what's new / changed.

Main new features / changes

General

  • Allowed Action Combinations: specify which actions are allowed on an entity instance: any combination of Create/Read/Update/Delete. Supported on: LLBLGen Pro Runtime Framework (all combinations, R mandatory), NHibernate (CRUD and R). Action Combinations make it easy to define e.g. entities which can only be created or read but never updated nor deleted. The action combinations are defined at the mapping level, are checked inside the runtime and are additional to the authorization framework.

Designer

  • Copy / Paste support for all model elements (entity, value type, typed list, typed view, table valued function call, stored procedure call): paste full (with mappings and target tables) or just model elements, across instances (stand-alone designer only) or within the project (VS.NET integration and stand-alone designer).
  • Automatic re-apply of changed settings on an existing project: e.g. changing a pattern for a name will reapply the setting on the existing model, making sure the names comply with the setting value.
  • New name patterns for auto-created FK/UC/PK constraints (model first). This makes it possible to define a naming pattern for e.g. FK constraints other than the default FK_{guid}. You can use macros to make sure the FK name reflects e.g. the fields and the tables it is referencing.
  • It's now possible to save search queries in the project file.
  • Ability to define default constraints for types, per type - DB combination (model first). This makes it possible to, for example, define a custom type, e.g. EmailAddress, based on the .NET string type, with length 150 and a default of "undefined@example.com" for SQL Server, and then define a field in an entity with type 'EmailAddress'. Creating the database tables from this model in the designer will then result in a default constraint on the table field the email address field is mapped on, with value "undefined@example.com".
  • General editors per project element type: one editor which is kept open and will show the selected element in the project explorer, making it very easy to check / edit configurations on multiple elements. This makes it possible to e.g. edit or look at mapping data for several entities quickly by opening the general entity editor and opening the field mappings tab while selecting the entities to check / edit in the project explorer: the field mappings tab is kept visible, so the data of the selected entity is shown each time.
  • Intellisense helpers in QuickModel for types, names and relationship types: it's now possible to open helper lists of names in scope, available types and the list of relationship types, to help you write quick model expressions more easily.
  • Hide / Filter warnings: it's now possible to hide / filter out warnings in the error/warning pane based on warning ID. The hidden/filtered-out warnings are viewable again using a toggle, and which IDs are filtered out is stored in the project.
  • Element selection rules on tasks (code generator): it's now possible to define selection rules on tasks in a run queue for the code generator which select which elements participate in the task, based on setting values. This makes it easy to define a setting for a user which is then taken into account in the code generator to execute different tasks based on the value of the setting.
  • New refactoring: replace selected fields with existing value type. This makes i[...]



Jetbrains' InspectCode result file viewer

Mon, 03 Mar 2014 11:51:24 GMT

Yesterday I was looking for some C# analysis tools, but they were either very expensive or came as add-ins, like Resharper. Nothing against these add-ins, except that I'm not very fond of having loads of extensions in my IDE, as it feels like they slow down the IDE too much at times. That might be me, or the solutions I work with; it doesn't really matter, I simply can't stand the slowness. There's a solution for that, however: JetBrains has been so kind as to release their Resharper analysis engine as a free command-line tool. This tool does all the analysis solution-wide, like Resharper, but only when I want it to, which is excellent. The downside is… it produces an XML file which isn't that useful without some tool.

As writing code is fun (at least, it should be), and after I put aside the usual worry of an old-age developer who sees all kinds of problems on the horizon before even one line of code has been written, I decided to create a simple viewer for these files. And here it is, written in one afternoon, so I hope I didn't make too many mistakes. I dubbed it 'InspectCodeResultViewer'; you know how good I am at naming things, so I thought I'd go with the bland and obvious. (image)

Source: Github repository. Binary: v1.0 release

Screenshot:

(image)

Features

  • Groups issues per project, file, category
  • Shows wiki links on issues when available
  • Opens file in vs.net and navigates to the right line.

The code contains, among other things, helpful and very fast XML file reader code and a ready-to-use class to navigate to a file in a VS.NET instance.
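For a rough idea of what reading such a result file involves, here's a minimal streaming sketch of mine with XmlReader; the element and attribute names ('Issue', 'File', 'Line', 'Message') are assumptions about the InspectCode output format, not taken from the viewer's actual code:

    using System;
    using System.Xml;

    class InspectCodeResultReaderSketch
    {
        static void Main(string[] args)
        {
            // Streaming read keeps memory usage flat, even for large result files.
            using(var reader = XmlReader.Create(args[0]))
            {
                while(reader.Read())
                {
                    // 'Issue' and its attributes are assumed names for the InspectCode output.
                    if(reader.NodeType == XmlNodeType.Element && reader.Name == "Issue")
                    {
                        string file = reader.GetAttribute("File");
                        string line = reader.GetAttribute("Line");
                        string message = reader.GetAttribute("Message");
                        Console.WriteLine("{0}({1}): {2}", file, line, message);
                    }
                }
            }
        }
    }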

Hopefully it's helpful to someone!




re: Create benchmarks and results that have value

Thu, 13 Feb 2014 11:16:00 GMT

Kelly Sommers wrote a blog post called 'Create benchmarks and results that have value' in which she refers to my last ORM benchmark post and basically calls it a very bad benchmark because it runs very few iterations (10) and because it only mentions averages (it doesn't: the raw results are available and referred to in the post I made). Now, I'd like to rectify some of that, because it now looks like what I posted is a large pile of drivel.

The core intent of the fetch performance test is to see which code is faster in materializing objects, which framework offers a fast fetch pipeline and which one doesn't, without any restrictions. So there's no restriction on memory, number of cores used, whether or not it utilizes magic unicorns or whether or not a given feature has to be there; everything is allowed, as it just runs a fetch query like a developer would too. If a framework were to spawn multiple threads to materialize objects, or use a lot of static tables to be able to fetch the same set faster on a second attempt (so memory would spike), that would go unnoticed. This is OK, for this test. It's not a scientific benchmark set up in a lab with dedicated hardware and run for weeks, it's a test run on my developer workstation and my server. I never intended it to be anything other than an indication.

The thing is that what I posted, which uses publicly available code, is an indication, not a scientific benchmark for a paper. The sole idea behind it is that if code run 10 times is terribly slow, it will be slow when run a 100 times too. It's reasonable to assume that, as none of the frameworks has self-tuning code (there's no .NET ORM which does), nor does the CLR have self-tuning code (unlike the JVM, for example). The point wasn't to measure how much slower framework X is compared to framework Y, but which ones are slower than others: are the ones you think are fast really living up to that expectation? That's all, nothing more. That Entity Framework and NHibernate are slower than Linq to SQL and LLBLGen Pro in set fetches is clear. By how much exactly is not determinable from the test results I posted, as it depends on the setup. In my situation, they're more than 5-10 times slower; on another system with a different network setup the differences might be less extreme, but the differences will still be there, likewise the difference between the handwritten materializer and full ORMs.

The core reason for that is that the code used in the fetch pipelines is less optimal in some frameworks than in others, or that they perform more features with every materialization step than others. For example, for each row read from the DB, LLBLGen Pro goes through its Dependency Injection framework, checks for authorizers, auditors, validators etc. to see whether the fetch of that particular row is allowed, and converts values to different types if necessary: all kinds of features which can't be skipped because the developer relies on them, but they all add a tiny bit of overhead to each row. Fair or unfair? Due to this, my ORM will never be faster than Linq to SQL, because Linq to SQL doesn't do all that and has a highly optimized fetch pipeline. If I pull all the tricks I know (and I already do all that), my pipeline will be highly optimal, but I still have the overhead of the features I support. Looking solely at the numbers, even if they come from a cleanroom test with many iterations, doesn't give you a full picture.
This is also the reason one distinction is made between micro-ORMs and full ORMs, and another between change-tracked fetches and readonly fetches, and for example there's no[...]
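To make the per-row overhead argument above a bit more concrete, here is a minimal C# sketch. It is not LLBLGen Pro's actual fetch pipeline: the hook delegates (authorizer, auditor, type converter) are hypothetical stand-ins for the kind of per-row features a full ORM runs. The point is simply that each hook costs very little per row, but every row pays for it, while a handwritten materializer pays for none of it.

// Minimal sketch contrasting a bare handwritten materializer with one that runs
// per-row feature hooks. The hook delegates below are hypothetical, not LLBLGen Pro's
// actual API; they only illustrate where the per-row time goes.
using System;
using System.Collections.Generic;
using System.Data;

public class Customer
{
    public string CustomerId { get; set; }
    public string Country { get; set; }
}

public static class MaterializerSketch
{
    // Handwritten materializer: reads values and assigns them, nothing else.
    public static List<Customer> FetchBare(IDataReader reader)
    {
        var result = new List<Customer>();
        while(reader.Read())
        {
            result.Add(new Customer { CustomerId = reader.GetString(0), Country = reader.GetString(1) });
        }
        return result;
    }

    // Full-ORM style materializer: per row it consults an authorizer, an auditor and a
    // type converter. Each call is cheap, but the cost is paid for every single row.
    public static List<Customer> FetchWithFeatures(IDataReader reader,
        Func<object[], bool> isFetchAllowed,      // hypothetical authorizer hook
        Action<object[]> auditRowFetched,         // hypothetical auditor hook
        Func<object, Type, object> convertValue)  // hypothetical type converter hook
    {
        var result = new List<Customer>();
        var values = new object[2];
        while(reader.Read())
        {
            reader.GetValues(values);
            if(!isFetchAllowed(values))
            {
                continue;   // row filtered out by the authorizer
            }
            auditRowFetched(values);
            result.Add(new Customer
                       {
                           CustomerId = (string)convertValue(values[0], typeof(string)),
                           Country = (string)convertValue(values[1], typeof(string))
                       });
        }
        return result;
    }
}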



Fetch performance of various .NET ORM / Data-access frameworks, part 2

Tue, 11 Feb 2014 17:07:00 GMT

This is the second post about fetch performance of various .NET ORM / data-access frameworks. The first post, which has lots of background information, can be found here. In this second post I'll post new results, including results from frameworks which were added after the previous post. The code used is available on GitHub. I'd like to thank Jonny Bekkum for adding benchmark code for many of the frameworks which were added after the previous post.

Entity Framework

The previous benchmark run clearly showed that Entity Framework 6 and NHibernate were, well… slow. I filed a bug with the Entity Framework team and they created a workitem for it for v6.1. They were able to make Entity Framework perform 20-30% better, but only when foreign key fields are present in the model, which I discovered wasn't the case in the previous setup. The performance tests shown today are with foreign key fields present, as other frameworks fetch these too and it's only fair that they're present to get a full picture. If they're not included, performance of v6.1 is only slightly faster than 6.0.2 (~3000ms in 6.0.2 to ~2830ms in 6.1).

Telerik Data Access

New in this benchmark is, among others, Telerik Data Access. Jonny added two versions: one using normal domain mappings and one using the code-first Fluent Mappings. These mappings were generated using the Telerik wizard. However, this wizard creates dreadful code: the generated mappings class contains a fatal flaw: without altering the code it recreates the meta-data model every time, which obviously is tremendously slow. See the class for details (line 34-41).

The setup

Since the previous run I've replaced my old server with a new one and have installed SQL Server 2012 on the OS itself instead of in a VM, so the results are a little better overall than before because of this.

Server: Windows Server 2012 64bit, SQL Server 2012 on i5-4670 @ 3.4GHz with 8GB RAM, RAID 1 HDDs.
Client: Windows 8.0 32bit, .NET 4.5.1 on a Core2Quad @ 2.4GHz with 4GB RAM.
Network: 100BASE-T.
Database: AdventureWorks 2008.
Entity fetched: SalesOrderHeader.

For a discussion about this setup and why it is done this way, please see the first post.

The code

The code has been drastically improved since the first post. It's now a properly designed application with a runner and easy-to-implement benchmark classes, so it's easy to add another framework to the system. Please have a look at the code in the GitHub repository; a rough sketch of such a runner follows at the end of this post.

The results

The raw results can be found here. They include the full output as well as the aggregated results in a nice list. In addition to the previous benchmark run, individual fetch tests have been added, as well as tests of how fast enumerating the fetched collection is. This is done to discover which frameworks defer work to the moment the program actually consumes the result-set, which can hide the real performance of the set fetch if it isn't taken into account. The Entity Framework v6.1 results are from a separate batch run before the results seen below, and are solely meant to illustrate the progress made in v6.1.

With change tracking (10 runs, averaged, fastest/slowest ignored)

'With change tracking' is, as the name implies, a fetch of elements which can be used in read/write scenarios: the changes made to the data are tracked and can easily be persisted. In general this takes some extra work (not much though, if you're clever).

Framework, with change-tracking | Fetch avg. (ms) | Enumeration avg. (ms)
DataTable, using DbDat[...]
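To give an idea of what such a benchmark runner can look like, here is a rough C# sketch. It is not the actual code from the GitHub repository (the class and method names below are made up); it only illustrates the structure described above: a per-framework benchmark class, separate timing of the set fetch and of the enumeration of the fetched set, and averaging over a number of runs with the fastest and slowest run ignored.

// Rough sketch of a benchmark runner; assumed structure, not the repository's real code.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

public class BenchResult
{
    public double FetchMs;
    public double EnumMs;
}

public abstract class FetchBenchmark<T>
{
    public abstract string Name { get; }
    public abstract List<T> FetchSet();   // materialize the full set with the framework under test

    // Times one fetch and one enumeration pass over the fetched set.
    public BenchResult RunOnce()
    {
        var result = new BenchResult();
        var sw = Stopwatch.StartNew();
        var set = FetchSet();
        sw.Stop();
        result.FetchMs = sw.Elapsed.TotalMilliseconds;

        sw.Restart();
        foreach(var element in set)
        {
            // Touching every element forces any work a framework defers to enumeration time.
        }
        sw.Stop();
        result.EnumMs = sw.Elapsed.TotalMilliseconds;
        return result;
    }
}

public static class BenchRunner
{
    // Runs a benchmark 'runs' times, then reports averages with the fastest and slowest run ignored.
    public static void Run<T>(FetchBenchmark<T> bench, int runs = 10)
    {
        var results = new List<BenchResult>();
        for(int i = 0; i < runs; i++)
        {
            results.Add(bench.RunOnce());
        }
        Console.WriteLine("{0}: fetch avg {1:F2}ms, enumeration avg {2:F2}ms",
            bench.Name,
            TrimmedAverage(results.Select(r => r.FetchMs)),
            TrimmedAverage(results.Select(r => r.EnumMs)));
    }

    // Drops the fastest and the slowest measurement, averages the rest.
    private static double TrimmedAverage(IEnumerable<double> times)
    {
        var ordered = times.OrderBy(t => t).ToList();
        return ordered.Skip(1).Take(ordered.Count - 2).Average();
    }
}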

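One more note on the Telerik Data Access wizard issue mentioned earlier. The sketch below is a generic illustration of that pitfall, not Telerik's actual API (the MetadataModel and mapping source classes are made up): rebuilding a metadata model on every instantiation versus building it once and caching it in a static field.

using System;

public class MetadataModel
{
    public MetadataModel()
    {
        // Imagine expensive reflection / mapping construction work here.
    }
}

// Flawed variant: every call pays the full model construction cost again.
public class SlowMappingSource
{
    public MetadataModel GetModel()
    {
        return new MetadataModel();
    }
}

// Fixed variant: the model is built once and reused for the lifetime of the AppDomain.
public class CachedMappingSource
{
    private static readonly Lazy<MetadataModel> _model =
        new Lazy<MetadataModel>(() => new MetadataModel());

    public MetadataModel GetModel()
    {
        return _model.Value;
    }
}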


Microsoft and developer trust (or lack thereof)

Thu, 23 Jan 2014 12:55:51 GMT

There has been some talk around several internet outlets about the (seemingly) eroding trust developers have in Microsoft and its technologies (see David Sobeski's piece here, Tim Anderson's piece here and e.g. the Reddit Programming thread here). Trust is the keyword here, and in my opinion it's essential to understand what that means in the context of a software developer in order to understand the problem at hand, or even to acknowledge that there is or isn't a problem. Below I try to explain what I think trust means in this context.

Every day some individual or team comes up with a new tool, a new framework, a new version of an existing tool or framework, a new OS or even a new way to bake those nice chocolate cookies. We as humanity tend to call this progress, even though on many occasions the (small) step taken by said individual or team is actually a (small) step sideways or even backwards. To understand why these individuals and teams still come out in the open to demonstrate their results, which sometimes amount to a total lack of real progress, we first have to understand how product development works and why people even do it.

For every successful product out there (be it a commercial tool/framework/system, an open source system/framework/tool or a cookie recipe, you get the idea), there are many, many more failures, left for dead in repositories on dying harddrives, local servers or cloud storage. The core reason this happens is that there's no recipe for a successful product: to be successful you first have to create something, ship something. No shipping, no success, and as there's no way to know whether success will come after shipping, all we can do is ship as much as possible so the chances are higher that something is picked up and success eventually comes. The rest of the products are ignored, kicked to the curb and eventually moved to /dev/null.

The motivation to keep doing this is of course the vague 'success' term used in the previous paragraph. 'Success' can mean a lot of different things: become famous, earn lots of money, learn a lot, get a respected title, become respected among peers; the list is endless. All these different things motivate people to create new things, to ship their work in the hope it will give them the success they strive for.

Microsoft isn't different in that: as a product producer they have to keep shipping new, relevant products in the hope these products become successful and with that make the company money so it lives on. Microsoft has released a tremendous amount of products in the past decades: from frameworks to languages to operating systems to APIs to word processors. Some were successful, others keeled over right after their shiny boxes appeared on store shelves. The successful ones are continued, the less successful ones are abandoned and replaced with a new attempt to gain success. Over time even successful ones were abandoned or replaced because of a different phenomenon: the interest in the product faded, as illustrated by the technology adoption lifecycle.

So in short, Microsoft behaved like any other product-producing individual or team: they produced as many products as possible in the hope that as many of them as possible would become successful. Combine that with the technology adoption lifecycle and it's clear that to stay successful one has to keep producing products. That is, from the point of view of the product producer.
But what about the product consumer, the person who bought into these products, learned how they worked, behaved, what their limits were, in short to[...]