Subscribe: Sean McGrath
http://seanmcgrath.blogspot.com/rss/seanmcgrath.xml

Sean McGrath



Sean McGrath's Weblog.



Last Build Date: Thu, 20 Jul 2017 12:33:07 +0000

 



What is Law? - part 15

Wed, 19 Jul 2017 11:25:00 +0000

Previously: What is Law? - part 14.

In part one of this series, a conceptual model of legal reasoning was outlined based on a “black box” that can be asked legal-type questions and gives back legal-type answers/opinions. I mentioned an analogy with the “Chinese Room” used in John Searle's famous Chinese Room thought experiment[1] related to Artificial Intelligence. Simply put, Searle imagines a closed room into which symbols (Chinese language ideographs) written on cards can be inserted via a slot. Similar symbols can also emerge from the room. To a Chinese-speaking person outside the room inserting cards and receiving cards back, whatever is inside the room appears to understand Chinese. However, inside the room is simply a mechanism that matches input symbols to output symbols, with no actual understanding of Chinese at all. Searle's argument is that such a room can manifest “intelligence” to a degree, but that it does not understand what it is doing in the way a Chinese speaker would.

For our purposes here, we imagine the symbols entering/leaving the room as being legal questions. We can write a legal question on a card, submit it into the room and get an opinion back. At one end of the automation spectrum, the room could be the legal research department shared by partners in a law firm. Inside the room could be lots of librarians, lawyers, paralegals etc. taking cards, doing the research, and writing the answer/opinion cards to send back out. At the other end of the spectrum, the room could be a fully virtual room that partners interact with via web browsers or chat-bots or interactive voice assistants. Regardless of where we are on that spectrum, the law firm partners will judge the quality of such a room by its outputs. If the results meet expectations, then isn't it a moot point whether or not the innards of the room in some sense “understand” the law?
Now let us imagine that we are seeing good results come from the room and we wish to probe a little, to get to a level of comfort about the good results we are seeing. What would we do? Well, most likely, we would ask the room to explain its results. In other words, we would do exactly what we would do with any person in the same position. If the room can explain its reasoning to our satisfaction, all is good, right?

Now this is where things get interesting. Imagine that each legal question submitted to the room generates two outputs rather than one. The first is the answer/opinion in a nutshell (“the parking fine is invalid: 90% confident”). The second is the explanation (“The reasoning as to why the parking fine is invalid is as follows....”). If the explanation we get is logical, i.e. it proceeds from facts through inferences to conclusions, weighing up the pros and cons of each possible line of reasoning, we feel good about the answer/opinion. But how can we know that the explanation given is actually the reasoning that was used in arriving at the answer/opinion? Maybe the innards of the room just picked a conclusion based on its own biases/preferences and then proceeded to back-fill a plausible line of reasoning to defend the answer/opinion it had already arrived at?

Now this is where things may get a little uncomfortable. How can we know for sure that a human presenting us with a legal opinion, and an explanation to back it up, is not doing exactly the same thing? This is an old, old nugget in jurisprudence, re-cast into today's world of legal tech and Artificial Intelligence. Legal scholars refer to it as the conflict between so-called rationalist and realist models of legal reasoning. It is a very tricky problem because recent advances in cognitive science have shone a somewhat uncomfortable light on what actually goes on in our mental decision-making processes.
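The two-output idea described above can be sketched as a minimal interface. This is purely illustrative: all names (`LegalOpinion`, `ask_the_room`) are hypothetical, and the point is precisely that nothing in the interface guarantees the explanation is the reasoning actually used.

```python
from dataclasses import dataclass

@dataclass
class LegalOpinion:
    answer: str        # the opinion in a nutshell
    confidence: float  # e.g. 0.9 for "90% confident"
    explanation: str   # the *claimed* chain of reasoning

def ask_the_room(question: str) -> LegalOpinion:
    """Toy stand-in for the 'room': every question yields an
    answer/confidence pair plus a separate explanation. The
    explanation could equally be post-hoc back-fill."""
    return LegalOpinion(
        answer="the parking fine is invalid",
        confidence=0.9,
        explanation="The reasoning as to why the parking fine "
                    "is invalid is as follows....",
    )

opinion = ask_the_room("Is this parking fine valid?")
print(opinion.answer, opinion.confidence)
```

The interface makes the rationalist/realist tension concrete: the `explanation` field is structurally independent of however `answer` was produced.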
Very briefly, we are not necessarily the bastions of cold hard logic that we might think we are. This is not just true in the world of legal reasoning, by the way. The same is true for all form[...]






Blockchain and Byzantium

Tue, 27 Jun 2017 09:34:00 +0000

Establishing authenticity - arriving at "single sources of truth" - is a really important concept, both in the real world and in the world of computing. From title deeds, to contracts, to laws and currencies, we have evolved ways of establishing single sources of truth over many centuries of trial and error.

Knowingly or not, many of the ways of solving the problem rely on the properties of physical objects: clay tablets (the Code of Hammurabi), bronze plates (the Twelve Tables of Rome), goat skin (the Celtic Brehon laws). Typically, this physicality is mixed in with a bit of trust. Trust in institutions. Trust in tamper evidence. Trust in probabilities.

Taken together: the physical scheme aspect, plus the trust aspect, allows the establishment of consensus. It is consensus, at the end of the day, that makes all this stuff work in the world of human affairs. Simply put, if enough of us behave as though X is the authentic deed/deposition/derogation/dollar then X is, ipso facto, for all practical purposes, the real deal.

In the world of digital data, consensus is really tricky because trust becomes really tricky. Take away the physicality of objects and establishing trust in the truth/authenticity of digital objects is hard.

Some folk say that blockchain is slow and inefficient and they are right - if you are comparing it to today's consensus as to what a "database" is.

Blockchain is the way it is because it is trying to solve the trust problem. A big part of that is what is called Byzantine consensus: basically, how to establish consensus when all sorts of things can go wrong, ranging from honest errors to sabotage attempts.
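One ingredient of that trust machinery - tamper evidence without physicality - can be illustrated with a toy hash chain. This is a deliberate simplification: real blockchains layer proof-of-work or BFT-style voting on top of this linking, and the block fields here are invented for the sketch.

```python
import hashlib
import json

def make_block(data, prev_hash):
    """Each block commits to its predecessor via a hash, so altering
    any earlier block invalidates every later link."""
    body = {"data": data, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {"data": data, "prev": prev_hash, "hash": digest}

def chain_is_valid(chain):
    """Recompute every hash and check every back-link."""
    for i, block in enumerate(chain):
        body = {"data": block["data"], "prev": block["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block("deed #1", prev_hash=None)
chain = [genesis, make_block("deed #2", genesis["hash"])]
print(chain_is_valid(chain))        # True
chain[0]["data"] = "forged deed"    # tamper with history
print(chain_is_valid(chain))        # False
```

Note what the sketch does not solve: it makes tampering detectable, but deciding *which* copy of the chain is authoritative when participants disagree is exactly the Byzantine consensus problem.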

The problem is hard, and also very interesting and important in my opinion. Unfortunately, today many folks see the word "database" associated with blockchain and all they notice is the incredible inefficiency and cost per "transaction" compared to, say, a relational database with ACID properties.

Yes, blockchain is a truly dreadful "database" - if your metric for evaluation is the same as the use cases for relational databases.

Blockchain is not designed to be one of those. Blockchain is the way it is because Byzantine consensus is hard. Is it perfect? Of course not, but a proper evaluation of it requires looking at the problems it is trying to solve. Doing so requires getting past the common associations most people carry around in their heads about what a "database" is and how it should behave/perform.

Given the unfortunate fact that the word "database" has become somewhat synonymous with the term "relational database", I find it amusing that blockchain has itself become a Byzantine consensus problem: namely, establishing consensus about what words like "database" and "transaction" and "trust" really mean.





What is Law? - part 14

Wed, 14 Jun 2017 12:36:00 +0000

Previously: What is Law? - part 12a.

Mention has been made earlier in this series of the presence of ambiguity in the corpus of law and the profound implications that this ambiguity has, in my opinion, for how we need to conceptualize computational law. In this post, I would like to expand a little on the sources of ambiguity in law, starting with the linguistic aspects but then moving on to law as a process and an activity that plays out over time, as opposed to being a static knowledge object.

In my opinion, ambiguity is intrinsic to any linguistic formalism that is expressive enough to model the complexity of the real world. Since law is attempting to model the complexity of the real world, the ambiguity present in the model is necessary and intrinsic. The linguistic nature of law is not something that can be pre-processed away with NLP tools to yield a mathematically-based corpus of facts and associated inference rules.

An illustrative example of this can be found in the simple-sounding concept of legal definitions. In language, definitions often form hermeneutic circles[1], which arise whenever we define a word/phrase in terms of other words/phrases, which are themselves defined in terms of yet more words/phrases, in a way that creates definitional loops. For example, imagine a word A that is defined in terms of words B and C. We then proceed to define both B and C to try to bottom out the definition of A. However, umpteen levels of further definition later, we create a definition which itself depends on A - the very thing we are trying to define - thus creating a definitional loop.

Traditional computer science computational methods hate hermeneutic circles. A large part of computing consists of creating a model of data that "bottoms out" into simple data types. I.e. we take the concept of a customer and boil it down into a set of strings, dates and numbers.
We do not define a customer in terms of some other high-level concept, such as Person, which might in turn be defined as a type of customer. To make a model that classical computer science can work on, we need a model that "bottoms out" and is not self-referential in the way hermeneutic circles are.

Another way to think about the definition problem is in terms of Saussure's linguistics[2], in which words (or, more generically, "signs") get their meaning from how they differ from other signs - not because they "bottom out" into simpler concepts. Yet another way to think about the definition problem is in terms of what is known as the descriptivist theory of names[3], in which nouns can be thought of as just arbitrary short codes for potentially open-ended sets of things which are defined by their descriptions. I.e. a "customer" could be defined as the set of all objects that (a) buy products from us, (b) have addresses we can send invoices to, (c) have given us their VAT number. The same hermeneutic circle/Saussurean issue arises here, however, as we try to take the elements of this description and bottom out the nouns they depend on (e.g., in the above example, "products", "addresses", "invoices" etc.).

For extra fun, we can construct a definition that is inherently paradoxical and sit back as our brains melt out of our ears trying to complete a workable definition. Here is a famous example: the 'barber' in town X is defined as the person in town X who cuts the hair of everyone in town who does not choose to cut their own hair. This sounds like a reasonable starting point for a definition of a 'barber', right? Everything is fine until we think about who cuts the barber's hair[4].

The hard facts of the matter are that the real world is full of things we want to make legal statements about but that we cannot formally define, even though we have strong intuitions about what they are. What is [...]
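The definitional loop (A defined via B and C, with a later definition circling back to A) maps directly onto cycle detection in a directed graph. A minimal sketch, with an invented dictionary of definitions:

```python
def find_definitional_loop(definitions, start):
    """Follow 'defined in terms of' edges depth-first; return the
    first loop reachable from `start`, or None if the definitions
    bottom out. `definitions` maps each term to the terms it uses."""
    def dfs(term, path):
        for used in definitions.get(term, []):
            if used in path:                       # loop found
                return path[path.index(used):] + [used]
            cycle = dfs(used, path + [used])
            if cycle:
                return cycle
        return None                                # bottoms out
    return dfs(start, [start])

# A is defined via B and C; two levels later, D refers back to A.
defs = {"A": ["B", "C"], "B": ["D"], "C": ["number"], "D": ["A"]}
print(find_definitional_loop(defs, "A"))  # ['A', 'B', 'D', 'A']
print(find_definitional_loop(defs, "C"))  # None
```

A corpus whose definition graph is acyclic is the kind classical data modelling expects; hermeneutic circles are exactly the cycles this check reports.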



What is law - part 12a

Wed, 07 Jun 2017 10:06:00 +0000

Previously: What is Law? - part 12.

Perhaps the biggest form of push-back I get from fellow IT people with respect to the world of law relates to the appealing-but-incorrect notion that in the text of the law there lies a data model and a set of procedural rules for operating on that data model, hidden inside the language. The only thing stopping us computerizing the law, according to this line of reasoning, is that we just need to get past all the historical baggage of foggy language and extract the procedural rules (if-this-then-that) and the data model (the definition of a motor-controlled vehicle, the definition of 'theft', etc.). All we need to do is leverage all our computer science knowledge with respect to programming languages and data modelling, and combine it with some NLP (natural language processing), so that we can map the legacy linguistic form of law into our shiny new digital model of law. In previous parts of this series I have presented a variety of technical arguments as to why, in my opinion, this is not correct. Here I would like to add some more, but this time from a more sociological perspective.

The whole point of law, at the end of the day, is to allow society to regulate its own behavior, for the greater good of that society. Humans are not made from diamonds cut at right angles. Neither are the societal structures we make for ourselves, the cities we build, the political systems we create etc. The world and the societal structures we have created on top of it are messy, complex and ineffable. Should we be surprised that the world of law, which attempts to model this, is itself messy, complex and ineffable? We could all live in cities where all the houses are the same and all the roads are the same and everything is at right angles and fully logical. We could speak perfectly structured languages where all sentences obey a simple set of structural rules. We could all eat the same stuff. Wear the same clothes. Believe in the same stuff... but we do not.
We choose not to. We like messy and complex. It suits us. It reflects us. In any form of digital model, we are seeking the ability to model the important stuff. We need to simplify - that is the purpose of a model, after all - but we need to preserve the essence of the thing modeled. In my opinion, a lot of the messy stuff in law is there because law tries to model a messy world. Without the messy stuff, I don't see how a digital model of law can preserve the essence of what law actually is. The only outcome I can imagine from such an endeavor (in the classic formulation of data model + human readable rules) is a model that fails to model the real world.

In my opinion, this is exactly what happened in the Eighties when people got excited about how Expert Systems[1] could be applied to law. In a nutshell, it was discovered that the modelling activity lost so much of the essence of law that the resultant digital systems were quite limited in practice. Today, as interest in Artificial Intelligence grows again, I see evidence that the lessons learned back in the Eighties are not being taken into account. Today we have XML and Cloud Computing and better NLP algorithms and these, so the story goes, will fix the problems we had in the Eighties. I do not believe this is the case. What we do have today, that did not exist in the Eighties, is much, much better algorithms for training machines - not programming them to act intelligently, but training them to act intelligently. When I studied AI in the Eighties, we spent about a week on Neural Networks and the rest of the year on expert systems, i.e. rules-based approaches. Today's AI courses are the other way around! Rightly so, in my opinion, because there has not been any great breakthrough in the expert systems/business rules space since the Eighties. We tried all the rules-based approaches in the Eighties. A lot of great comp[...]



The Great Inversion in Computing

Wed, 31 May 2017 10:32:00 +0000

Methinks we may be witnessing a complete inversion in the computing paradigm that has dominated the world since the Sixties.

In 1968, with Algol68[1] we started treating algorithms as forms of language. Chomsky's famous hierarchy of languages[2] found a huge new audience outside of pure linguistics.

In 1970, relational algebra came along[3] and we started treating data structures as mathematical objects with formal properties and theorems and proofs etc. Set theory/operator theory found a huge new audience outside of pure mathematics.

In 1976, Niklaus Wirth published "Algorithms + Data Structures = Programs"[4], crisply asserting that programming is a combination of algorithms and data structures.

The most dominant paradigm since the Sixties maps algorithms to linguistics (Python, Java etc.) and data structures to relational algebra (relational  databases, third normal form etc.).

Today's Deep Learning/AI etc. seems to me to be inverting this mapping. Algorithms are becoming mathematics and data is becoming linguistic, e.g. "unstructured" text/documents/images/video etc.

Perhaps we are seeing a move towards "Algorithms (mathematics) + Data Structures (language) = Programs" and away from "Algorithms (language) + Data Structures (mathematics) = Programs".

[1] https://en.wikipedia.org/wiki/ALGOL_68
[2] https://en.wikipedia.org/wiki/Chomsky_hierarchy
[3] https://en.wikipedia.org/wiki/Relational_algebra
[4] https://en.wikipedia.org/wiki/Algorithms_%2B_Data_Structures_%3D_Programs



What is law? - part 12

Tue, 16 May 2017 10:14:00 +0000

Previously: What is Law? - part 11.

There are a few odds and ends that I would like to bundle up before proceeding. These are items that have occurred to me since I wrote the first What is Law? post back in March - items I would have written about earlier in this series if they had occurred to me. Since I am writing this series as I go, this sort of thing is inevitable, I guess. Perhaps if I revisit the material to turn it into an essay at some point, I will fold this new material in at the appropriate places.

Firstly, in the discussion about the complexity of the amendatory cycle in legislation, I neglected to mention that it is also possible for a new item of primary legislation to contain amendments to itself. In other words, it may be that as soon as a bill becomes an act and is in force, it is immediately necessary to modify it using modifications spelled out in the act itself. Looking at it another way, a single act can be both a container for new law and a container for amendatory instructions, all in one legal artifact. Why does this happen? Legislation can be crafted over long periods of time and consensus building may proceed piece by piece. In a large piece of legislation, rather than continually amending the whole thing - perhaps thousands of pages - amendments are sometimes treated as additional material tacked on at the end, so as to avoid re-opening debate - and editorial work - on material already processed through the legislative process. It is a bit of a mind bender. Basically, if an act becomes law at time T then it may instantaneously need to be codified into itself before we can proceed to codify it into the broader corpus.

Secondly, I mentioned that there is no central authority that controls the production of law. This complicates matters for sure, but it also has some significant benefits that I would like to touch on briefly.
Perhaps the biggest benefit of the de-centralized nature of law making is that it does not have a single point of failure. In this respect, it is reminiscent of the distributed packet routing protocols used on the internet. Various parts of the whole system are autonomous, resulting in an overall system that is very resilient: there is no easy way to interrupt the entire process. This distribution-based resilience also extends into the semantic realm, where it combines with the textual nature of law to yield a system that is resilient to the presence of errors. Mistakes happen. For example, a law might be passed that requires train passengers to be packaged in wooden crates. (Yes, this happened.) Two laws might be passed in parallel that contradict each other. (Yes, this has happened many times.) When this sort of thing happens, the law has a way of rectifying itself, leveraging the "common sense" you get with human decision making. Humans can make logical errors, but they have a wonderful ability to process contradictory information in order to fix up inconsistent logic. Also, humans possess an inherent, individual interpretation of equity/fairness/justice, and the system of law incorporates that, allowing all participants to evaluate the same material in different ways.

Thirdly, I would like to return briefly to the main distinction I see between legal deductive logic and the deductive logic computer science people are more familiar with. When deductive logic is used in law (remembering always that it is just one form of legal reasoning and rarely used on its own), the classic "if this then that" form can be identified, as well as classical syllogistic logic. However, legal reasoning involves weighing up the various applicable deductive statements using the same sort of dialectic/debate-centric reasoning mentioned earlier. Put another way, deductive logic in law very rarely proceeds from facts to conclusion in some nice tidy decision tree. Given the set o[...]



What is law? - part 11

Thu, 04 May 2017 16:39:00 +0000

Previously: What is Law? - part 10.

Gliding gracefully over all the challenges alluded to earlier with respect to extracting the text-level meaning out of the corpus of law at time T, we now turn to thinking about how it is actually interpreted and utilized by practitioners. To do that, we will continue with our useful invention of an infinitely patient person who has somehow found all of the primary corpus, read it all from the master sources, internalized it, and can now answer our questions about it and feed it back to us on demand.

The first order of business is: where to start reading? There are two immediate issues here. Firstly, the corpus is not chronologically accretive. That is, there is no "start date" to the corpus we can work from, even if, in terms of historical events, a foundation date for a state can be identified. The reasons for this have already been discussed. Laws get modified. Laws get repealed. Caselaw gets added. Caselaw gets overturned. New laws get added. I think of it like a vast stormy ocean, constantly ebbing and flowing, constantly adding new content (rainfall, rivers) and constantly losing content (evaporation) - in an endless cycle. It has no "start point" per se.

In the absence of an obvious start point, some of you may be thinking "the index", which brings us to the second issue. There is no index! There is no master taxonomy that classifies everything into a nice tidy hierarchy. There are some excellent indexes/taxonomies in the secondary corpus produced by legal publishers, but not in the primary corpus. Why so? Well, if you remember back to the Unbounded Opinion Requirement mentioned previously, creating an index/taxonomy is, necessarily, the creation of an opinion on the "about-ness" of a text in the corpus. This is something the corpus of law stays really quite vague about - on purpose - in order to leave room for interpretation of the circumstances and facts of any individual legal question.
Just because a law was originally passed to do with electricity usage in phone lines does not mean it is not applicable to computer hacking. Just because a law was passed relating to manufacturing processes does not mean it has no relevance to ripening bananas. (Two examples based on real world situations I have come across, by the way.)

So, we have a vast, constantly changing, constantly growing corpus. So big it is literally humanly impossible to read, regardless of the size of your legal team, and there are no finding aids in the primary corpus to help us navigate our way through it... Well actually, there is one, and it is an incredibly powerful finding aid. The corpus of legal materials is woven together by an amazingly intricate web of citations. Laws invariably cite other laws. Regulations cite laws. Regulations cite regulations. Caselaw cites law and regulations and other caselaw... creating a layer that computer people would call a network graph[1]. Understanding the network graph is key to understanding how practitioners navigate the corpus of law. They don't go page-by-page, or date-by-date; they go citation-by-citation.

The usefulness of this citation network in law cannot be overstated. The citation network helps practitioners to find related materials, acting as a human-generated recommender algorithm. The citation networks not only establish related-ness, they also establish meaning, especially in the caselaw corpus. We talked earlier about the open-textured nature of the legal corpus. It is not big on black and white definitions of things. Everything related to meaning is fluid, on purpose. The closest thing in law to true meaning is arguably established in the caselaw. In a sense, the caselaw is the only source of information on meaning that really matters because at the end of the day, it doe[...]
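Navigating citation-by-citation is, in graph terms, a breadth-first walk of the citation network. A toy sketch with invented document names and citations:

```python
from collections import deque

def related_materials(citations, start, max_hops=2):
    """Breadth-first walk of a citation graph: everything reachable
    from `start` within `max_hops` citation links, in the order a
    practitioner following citations would encounter it."""
    seen = {start}
    queue = deque([(start, 0)])
    found = []
    while queue:
        doc, hops = queue.popleft()
        if hops == max_hops:
            continue
        for cited in citations.get(doc, []):
            if cited not in seen:
                seen.add(cited)
                found.append(cited)
                queue.append((cited, hops + 1))
    return found

# Hypothetical corpus: cases cite statutes, regulations, earlier cases.
citations = {
    "Case_2017_42": ["Statute_12", "Case_2001_7"],
    "Case_2001_7": ["Statute_12", "Regulation_3"],
    "Statute_12": ["Statute_4"],
}
print(related_materials(citations, "Case_2017_42"))
# ['Statute_12', 'Case_2001_7', 'Statute_4', 'Regulation_3']
```

Real citation graphs also carry edge semantics (amends, repeals, distinguishes, overrules), which is what lets the network establish meaning and not just related-ness.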



Zen and the art of motorcycle....manuals

Wed, 26 Apr 2017 10:21:00 +0000

I heard the sad news about Robert Pirsig passing.

His book, Zen and the Art of Motorcycle Maintenance, was a big influence on me and piqued my interest in philosophy.

While writing the book, his day job was writing computer manuals.

About 15 years ago, I wrote an article for ITWorld about data modelling with XML called Zen and the art of motorcycle manuals, inspired in part by Pirsig's book and his meditations on how the qualities in objects such as motorcycles are more than just the sum of the parts that make up the motorcycle.

So it is with data modelling. For any given modelling problem there are many ways to do it that are all "correct" at some level. Endlessly seeking to bottom out the search and find the "correct" model is a pointless exercise. At the end of the day, "correctness" for any data model is not a function of the data itself. It is a function of what you are planning to do with the data.

This makes some folks uncomfortable. Especially proponents of top-down software development methodologies who like to conceptualize analysis as an activity that starts and ends before any prototyping/coding begins.

Maybe somewhere out there Robert Pirsig is talking with Bill Kent - author of another big influence on my thinking : Data and Reality.

Maybe they are discussing how best to model a bishop :-)




What is law? - Part 10

Fri, 21 Apr 2017 15:52:00 +0000

Previously: What is Law? - Part 9.

Earlier in this series, we imagined an infinitely patient and efficient person who has somehow managed to acquire the entire corpus of law at time T, has read it all for us, and can now "replay" it to us on demand. We mentioned previously that the corpus is not a closed world and that meaning cannot really be locked down inside the corpus itself. It is not a corpus of mathematical truths, dependent only on a handful of axioms. This is not a bug to be fixed. It is a feature to be preserved.

We know we need to add a layer of interpretation, and we recognize from the outset that different people (or different software algorithms) could take this same corpus and interpret it differently. This is ok because, as we have seen, it is (a) necessary and (b) part of the way law actually works. Interpreters differ in the opinions they arrive at in reading the corpus. Opinions get weighed against each other; opinions can be over-ruled by higher courts. Some courts can even over-rule their own previous opinions. Strongly established opinions may then end up appearing directly in primary law or regulations, new primary legislation might be created to clarify meaning... and the whole opinion generation/adjudication/synthesis loop goes round and round forever. In law, all interpretation is contemporaneous, tentative and defeasible. There are some mathematical truths in there, but not many.

It is tempting - but incorrect in my opinion - to imagine that the interpretation process works with the stream of words coming into our brains off of the pages, words that then get assembled into sentences and paragraphs and sections and so on in a straightforward way. The main reason it is not so easy may be surprising. Tables! The legal corpus is awash with complex table layouts. I included some examples in a previous post about the complexities of law[1]. The upshot of the ubiquitous use of tables is that reading law is not just about reading the words.
It is about seeing the visual layout of the words and associating meaning with that layout. Tables are such a common tool in legal documents that we tend to forget just how powerful they are at encoding semantics. So powerful that we have yet to figure out a good way of using machines to do the "reading", i.e. extracting back out the semantics that our brains can readily see in law. Compared to, say, detecting the presence of headings or cross-references or definitions, correctly detecting the meaning implicit in tables is a much bigger problem. Ironically, perhaps, much bigger than dealing with highly visual items such as maps in redistricting legislation[2], because the actual redistricting laws are generally expressed purely in words, using, for example, eastings and northings to encode the geography.

If I could wave a magic wand just once at the problem of digital representation of the legal corpus, I would wave it at the tables. An explicit semantic representation of tables, combined with some controlled natural language forms[4], would be, I believe, as good a serialization format as we could reasonably hope for, for digital law. It would still have the Closed World of Knowledge problem, of course. It would also still have the Unbounded Opinion Requirement, but at least we would be in a position to remove most of the need for a visual cortex in this first layer of interpreting and reasoning about the legal corpus.

The benefits to computational law would be immense. We could imagine a digital representation of the corpus of law as an enormous abstract syntax tree[5], which we could begin to traverse to get to the central question of how humans traverse this tree to reason about it, form opinions about it, and create legal arguments in support of their opinions. Next up: What is law? - Part 11. [1] http://[...]
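The idea of an explicit semantic representation of tables can be sketched as structured records that carry their meaning in named, typed fields rather than in visual layout. The statute, field names and figures below are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class FeeScheduleRow:
    """One row of a hypothetical statutory fee table. The semantics
    (class, weight band, fee) are explicit, so no visual cortex is
    needed to recover them."""
    vehicle_class: str
    min_weight_kg: int
    max_weight_kg: int
    annual_fee_usd: int

fee_table = [
    FeeScheduleRow("light", 0, 1500, 100),
    FeeScheduleRow("medium", 1501, 3500, 250),
    FeeScheduleRow("heavy", 3501, 12000, 600),
]

def fee_for_weight(table, weight_kg):
    """Look up the fee by weight band; None if no band applies."""
    for row in table:
        if row.min_weight_kg <= weight_kg <= row.max_weight_kg:
            return row.annual_fee_usd
    return None

print(fee_for_weight(fee_table, 2000))  # 250
```

A machine reading the rendered table has to infer all of this from row/column positions; the structured form makes the same knowledge directly traversable, which is the point of the magic wand.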



What is law? - Part 9

Wed, 19 Apr 2017 13:24:00 +0000

Previously: What is Law? - Part 8.

For the last while, we have been thinking about the issues involved in interpreting the corpus of legal materials that is produced by the various branches of government in US/UK-style environments. As we have seen, it is not a trivial exercise, because of the ways the material is produced and because the corpus - by design - is open to different interpretations and open to interpretation changing with respect to time. Moreover, it is not an exaggeration to say that it is a full time job - even within highly specialized sub-topics of law - to keep track of all the changes and synthesize the effects of these changes into contemporaneous interpretations.

For quite some time now - centuries in some cases - a second legal corpus has evolved in the private sector. This secondary corpus serves to consolidate and package and interpret the primary corpus, so that lawyers can focus on the actual practice of law. Much of this secondary corpus started out as paper publications, often with so-called loose-leaf update cycles. These days, most of this secondary corpus is in the form of digital subscription services. The vast majority of lawyers utilize these secondary sources from legal publishers. So much so that, over the long history of law, a number of interesting side-effects have accrued.

Firstly, for most day-to-day practical purposes, the secondary corpus provides de-facto consolidations and interpretations of the primary corpus. I.e. although the secondary sources are not "the law", they effectively are. The secondary sources that are most popular with lawyers are very high quality and have earned a lot of trust over the years from the legal community. In this respect, the digital secondary corpus of legal materials is similar to modern day digital abstractions of currency, such as bank account balances and credit cards. I.e.
we trust that there are underlying paper dollars that correspond to the numbers moving around digital bank accounts. We trust that the numbers moving around digital bank accounts could be redeemed for real paper dollars if we wished. We trust that the real paper dollars can be used in value exchanges. So much so, that we move numbers around bank accounts to achieve value exchange without ever looking to inspect the underlying paper dollars. The digital approach to money works because it is trusted. Without the trust, it cannot work. The same is true for the digital secondary corpus of law, it works because it is trusted. A second, interesting side-effect of trust in the secondary corpus is that parts of it have become, for all intents and purposes, the primary source. If enough of the worlds legal community is using secondary corpus X then even if that secondary corpus differs from the primary underlying corpus for some reason, it may not matter in practice because everybody is looking at the secondary corpus. A third, interesting side effect of the digital secondary corpus is that it has become indispensable. The emergence of a high quality inter-mediating layer between primary legal materials and legal practitioners has made it possible for the world of law to manage greater volumes and greater change rates in the primary legal corpus.  Computer systems have greatly extended this ability to cope with volume and change. So much so, that law as it is today would collapse if it were not for the inter-mediating layer and the computers. The classic image of a lawyers office involves shelves upon shelves of law books. For a very long time now, those shelves have featured a mix of primary legal materials and secondary materials from third party publishers. For a very long time now, the secondary materials have been the day-to-day "go to" volumes for legal practitioners - not the primary vol[...]



What is law? - part 8

Fri, 14 Apr 2017 16:27:00 +0000

Previously: What is law? - Part 7. A good place to start in exploring the Closed World of Knowledge (CWoK) problem in legal knowledge representation is to consider the case of a spherical cow in a vacuum... Say what? The spherical cow in a vacuum[1] is a well known humorous metaphor for a very important fact about the physical world. Namely, any model we make of something in the physical world - any representation of it we make inside a mathematical formula or a computer program - is necessarily based on simplifications (a "closed world") to make the representation tractable. The statistician George Box once said that "all models are wrong, but some are useful." Although this mantra is generally applied in the context of applied math and physics, the concept is, in my opinion, incredibly important in the world of law. Law can usefully be thought of as an attempt at steering the future direction of the physical world in a particular direction. It does this by attempting to pick out key features of the real world (e.g. people, objects, actions, events) and making statements about how these things ought to inter-relate (e.g. if event E happens, person P must perform action A with object O). Back to cows now. Given that the law may want to steer the behavior of the world with respect to cows - for example, tax them, regulate how they are treated, incentivize cow breeding programs etc. - how does law actually speak about cows? Well, we can start digging through legislative texts to find out, but what we will find is not the raw material from which to craft a good definition of a cow for the purposes of a digital representation of it. Instead, we will find some or all of the following:

- Statements about cows that do not define cows at all, but proceed to make statements about them as if we all know exactly what is a cow and what is not a cow
- Statements that "zoom in" on cow-ness without actually saying "cow" explicitly, e.g. "animals kept on farms", "milk producers", etc.
- Statements that punt on the definition of a cow by referencing the definition in some outside authority, e.g. an agricultural taxonomy
- Statements that "zoom in" on cow-ness by analogies to other animals, e.g. "similar in size to horses, bison and camels."
- Statements that define cows to be things other than cows(!), e.g. "For the purposes of this section, a cow is any four legged animal that eats grass."

What you will not find anywhere in the legislative corpus is a nice tidy, self contained mathematical object denoting a cow, fully encapsulated in a digital form. Why? Well, the only way we could possibly do that would be to make a whole bunch of simplifications on "cow-ness", and we know where that ends up. It ends up with spherical objects in vacuums, just as it does in the world of physics! There is simply no closed world model of a cow that captures everything we might want to capture about cows in laws about cows. Sure, we could keep adding to the model of a cow, refining it, getting it closer and closer to cow-ness. However, we know from the experience of the world of physics that we reach the point where we have to stop, because it is a bottomless refinement process. This might sound overly pessimistic or pedantic - and in the case of cows for legislative purposes it clearly is - but I am doing it to make a point. Even everyday concepts in law such as aviation, interest rates and theft are too complex (in the mathematical sense of complex) to be defined inside self-contained models. Again, fractals spring to mind. We can keep digging down into the fractal boundary that splits the world into cow and not-cow, refining our definitions until the cows come home (sorry, could not re[...]
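The bottomless-refinement point can be sketched in a few lines of code. The attributes and predicates below are invented purely for illustration, not drawn from any statute:

```python
# A deliberately naive closed-world model of "cow-ness", to illustrate why
# the refinement process is bottomless. All attributes are hypothetical.
def is_cow_v1(animal: dict) -> bool:
    # "any four legged animal that eats grass" -- the statute's own definition
    return animal.get("legs") == 4 and animal.get("eats") == "grass"

def is_cow_v2(animal: dict) -> bool:
    # Refinement: exclude horses, bison and camels... but what about a
    # three-legged cow, or a cow fed on silage? The boundary never closes.
    return (is_cow_v1(animal)
            and animal.get("species") not in {"horse", "bison", "camel"})

horse = {"legs": 4, "eats": "grass", "species": "horse"}
print(is_cow_v1(horse))  # True -- the v1 model happily calls a horse a cow
print(is_cow_v2(horse))  # False -- until the next counterexample arrives
```

Each version fixes the previous version's counterexample and creates room for the next one, which is the spherical-cow problem in miniature.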



What is law? - Part 7

Fri, 07 Apr 2017 12:34:00 +0000

Previously: What is law? - Part 6. Last time we ended with the question: “Given a corpus of law at time T, how can we determine what it all means?” There is a real risk of disappearing down a philosophical rabbit hole about how meaning is encoded in the corpus of law. Now, I really like that particular rabbit hole, but I propose that we not go down it here. This whole area is best perused, in my experience, with comfy chairs, time to kill and a libation or two (semiotics, epistemology and mereotopology anyone?). Instead, we will simply state that because the corpus of law is mostly written human language, it inherits some fascinating and deep issues to do with how written text establishes shared meaning, and move on. For our purposes, we will imagine an infinitely patient person with infinite stamina, armed with a normal adult's grasp of English, who is going to read the corpus and explain it back to us, so that we computer people can turn it into something else inside a computer system. The goal of that “something else” is to capture the meaning but be easier to work with inside a computer than a big collection of “unstructured” documents. This little conceptual trick of employing a fantastic human to read the current corpus and explain it all back to us allows us to split the problem of meaning into two parts. The first part relates to how we could read it in its current form and extract its meaning. The second part relates to how we would encode the extracted meaning in something other than a big collection of unstructured documents. Exploring this second question will, I believe, help us tease out the issues in determining meaning in the corpus of law in general, without getting bogged down in trying to get machines to understand the current format (lots and lots of unstructured documents!) right off the bat. I hope that makes sense? 
Basically, we are going to skip over how we would parse it all out of its current myriad document-form into a human brain, and instead look at how we would extract it from said brain and store it again - but into something more useful than a big collection of documents. Assuming we can find a representation that is good enough, the reading of the current corpus should be a one-off exercise, because as the corpus of law gets updated, we would update our bright shiny new digital representation of the corpus and never have to re-process all the documents ever again. So what options do we have for this digital knowledge representation? Surely there is something better than just unstructured document text? Text, after all, is what you get if you use computers as typewriters. Computers do also give us search, which is a wonderful addition to typesetting, but understanding is a very different thing again. In order to have machines understand the corpus of law, we need a way to represent the knowledge present in the law - not just what words are present (search) or how the words look on the page (formatting). This is the point where some of you are likely hoping/expecting that I am about to suggest some wonderful combination of XML and Lisp or some such that will fit the bill as a legal corpus knowledge representation alternative to documents... It would be great if that were possible, but in my opinion the textual/document-centric nature of a significant part of the legal corpus is unavoidable, for reasons I will hopefully explain. Note that I said “significant part”. There are absolutely components of the corpus that do not have to be documents. In fact, some of the corpus has already transitioned out of documents but, if anything, this has actually increased the interpretation complexities - of establishing meaning - not reduced them. [...]
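To see the difference between text-as-typewriting and a knowledge representation, here is a hedged sketch of one possible structured form for a single provision. The section number and field names are invented; this illustrates the idea, not a proposed standard:

```python
# One provision, twice: as "unstructured" text and as a structured record.
# The id and field names are hypothetical, invented for illustration.
provision_text = "If event E happens, person P must perform action A with object O."

provision = {
    "id": "s.1(1)",              # hypothetical section number
    "trigger": "event E happens",
    "bearer": "person P",
    "modality": "obligation",    # a deontic operator: must / may / must-not
    "action": "perform action A",
    "instrument": "object O",
}

# Search can only tell us which words appear in provision_text; the
# structured form lets a machine answer simple questions directly:
print(provision["modality"] == "obligation")  # True
print(provision["bearer"])                    # person P
```

The gap between these two forms is exactly the gap between search (words present) and understanding (knowledge present).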



What is law? - Part 6

Fri, 31 Mar 2017 12:33:00 +0000

Previously: What is law? - Part 5. To wrap up our high level coverage of the sources of law, we need to add a few items to the "big 3" (Statutes/Acts, Regulations/Statutory Instruments and Case law) covered so far. Many jurisdictions have a foundational written document called a constitution, which essentially "bootstraps" a jurisdiction by setting out its core principles, how its government is to be organized, how law will be administered, etc. The core principles expressed in constitutions are, in many respects, the exact opposite of detailed rules/regulations. They tend to be deontic[1] in nature, that is, they express what ought to be true. They tend to be heavily open textured[2], meaning that they refer to concepts that are necessarily abstract/imprecise (e.g. concepts such as "fairness", "freedom" etc.). Although they make up only a tiny fraction of the corpus of law in terms of word count, they are immensely important, as essentially everything that follows on from the constitution in terms of Statutes/Acts, Regulations/Statutory Instruments and case law has to be compatible with the constitution. Like everything else, the constitution can be changed, and thus all the usual "at time T" qualifiers apply to constitutionality. Next up is international law, such as international conventions/treaties, which cover everything from aviation to cross-border criminal investigation to intellectual property to doping in sport. Next up, at local community level, residents of specific areas may have rules/ordinances/bye-laws, which are essentially Acts that apply to a specific geographic area. There may be a compendium of these, often referred to as a "Municipal Code" in the case of cities. I think that just about wraps up the sources of law. It would be possible to fill many blog posts with more variations on these (inter-state compacts, federations/unions, executive orders, private members bills etc.). 
It would also be possible to fill many blog posts with how these all overlap differently in different situations (e.g. what law applies when there are different jurisdictions involved in an inter-jurisdictional contract). I don't think it would be very helpful to do that, however. Even scratching the surface as we have done here will hopefully serve to adequately illustrate the key point I would like to make, which is this: the corpus of law applicable to any event E which occurred at time T is a textually complex, organizationally distributed, vast corpus of constantly changing material. Moreover, there is no central authority that manages it. It is not necessarily available as it was at time T - even if money is no object. To wrap up, let us summarize the potential access issues we have seen related to accessing the corpus of law at time T:

- Textual codification at time T might not be available (lack of codification, use of amendatory language in Acts, etc.)
- Practical access at time T may not be available (e.g. it is not practical to gather the paper versions of all court reports for all the caselaw, even if theoretically freely available)
- Access rights at time T may not be available (e.g. incorporated-by-reference rulebooks referenced in regulations)

All three access issues can apply up and down the scale of location specificity, from municipal codes/bye-laws, regulations/statutory instruments, Acts/Statutes, case law, union/federation law to international law and, most recently, space law[3]. We are going to glide serenely over the top of these access issues, as the solutions to them are not technical in nature. Next we turn to this key question: Given a corpus of law at time T, how can we determi[...]



What is law? - Part 5

Wed, 29 Mar 2017 11:02:00 +0000

Previously: What is law? - Part 4. The Judicial Branch is where the laws and regulations created by the legislative and executive branches make contact with the world at large. The most common way to think of the judiciary is as the public forum where sentences/fines for not abiding by the law are handed down, and as the public forum where disputes between private parties can be adjudicated by a neutral third party. This is certainly a major part of it, but in USA-style and UK-style "common law" legal systems it is also the place where law gets clarified with finer and finer detail over time. I like to think of the judicial branch as being a boundary determinator for legal matters. Any given incident, e.g. a purported incident of illegal parking, brings with it a set of circumstances unique to that particular incident. Perhaps the circumstances in question are such that the illegal parking charge gets thrown out, perhaps not. Think of illegal parking as being - at the highest level - a straight line splitting a two dimensional plane into two parts. Circumstances to the left of the line make the assertion of illegal parking true; circumstances to the right of the line make the assertion false. In the vast majority of legal matters, the dividing line is not that simple. I think of the dividing line as a Koch Snowflake[1]. The separation between legal and illegal starts out as a simple Euclidean boundary, but over time the boundary becomes more and more complex, as each new "probe" of the boundary (a case before the courts) adds more detail to the boundary. Simply put, the law is a fractal[2]. Even if a boundary starts out as a simple line segment separating true/false, it can become more complex with every new case that comes to the courts. Moreover, between any two sets of circumstances for a case A and B, there is an infinity of circumstances that are, in some sense, in between A and B. 
Thus there is an infinity of new data points that can be added between A and B over time. Courts record their judgments in documents known collectively as “case law”. The most important thing about case law in our focus areas of USA-style and UK-style legal systems is that it is actually law. It is not just a housekeeping exercise recording the activity of the courts. Each new piece of case law produced at time T serves as an interpretation of the legal corpus at time T. That corpus consists of the Acts/Statutes in force, the Regulations/Statutory Instruments in force *and* all other case law in force at time T. This is the legal concept of precedent, also known as stare decisis[3]. The courts strive, first and foremost, for consistency with precedents. A lot of weight is attached to arriving at judgments in new cases that are consistent with the judgments in previous cases. The importance of this cannot be overestimated in understanding law from a computational perspective. Where is the true meaning of law to be found in common law jurisdictions? It is found in the case law! - not the Acts or the Regulations/Statutory Instruments. If you are reading an Act or a regulation and are wondering what it actually means, the place to go is the case law. The case law, in a very real sense, is the place where the actual meaning of law is spelled out. From a linguistics perspective, you can think of this in terms of pragmatics, the counterpart to grammar/syntax. Wittgenstein fans can think of it as “meaning is use”, i.e. the true meaning of language can be found in how it is actually used in the real world. Logical Positivists might think of it as a behaviorist approach to meaning. That is, meaning comes from behavior. To understa[...]
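The Koch-snowflake analogy can be made concrete with a few lines, assuming the standard construction (each iteration replaces every boundary segment with four segments one third as long):

```python
# A small sketch of why the Koch-snowflake picture is apt: with each
# refinement the boundary gains detail without bound, while the region it
# encloses stays roughly the same.
def koch_segments(iterations: int) -> int:
    """Each iteration replaces every segment with 4 shorter ones."""
    return 3 * 4 ** iterations

def koch_perimeter(iterations: int, side: float = 1.0) -> float:
    """Perimeter grows by a factor of 4/3 per iteration -- unboundedly."""
    return 3 * side * (4 / 3) ** iterations

for n in range(4):
    print(n, koch_segments(n), round(koch_perimeter(n), 3))
# 0 3 3.0
# 1 12 4.0
# 2 48 5.333
# 3 192 7.111
```

Each new case is one of those segment-replacements: the legal/illegal boundary gets strictly more detailed, and there is no iteration after which it is "finished".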



What is law? - Part 4

Mon, 27 Mar 2017 15:45:00 +0000

Previously: What is law? - Part 3. Now we will turn our attention to the second part of the legal corpus, namely regulations/statutory instruments. I think of this material as a fleshing out of the material coming, in the form of Acts, from the Legislature/Parliament. Acts can be super-specific and self contained, but they can also be very high level and delegate finer detail to government agencies to work out and fill in. Acts that do this delegation are known as "enabling acts", and the fine detail work takes the form of regulations (USA terminology) or Statutory Instruments (UK terminology). The powers delegated to executive branch agencies by enabling Acts can be quite extensive, and the amount of review done by the Legislature/Parliament differs a lot across jurisdictions. In some jurisdictions, there is no feedback loop back to the Legislature/Parliament at all. In others, all regulations/statutory instruments must pass a final approval phase back in the Legislature/Parliament. As with Acts, the regulations go through a formal promulgation process - typically being activated by public notice in a Government gazette/register publication. As with Acts, an official compendium of regulations may or may not be produced by Government itself, and if it exists, it may lag behind the production of new Regulations/Statutory Instruments by months or even years. As with Acts, third party publishers often add value by keeping a corpus of regulations/SIs up to date with each register/gazette publication (often a weekly publication). One useful rough approximation is to conceptualize the Regulations/Statutory Instruments as appendices to Acts. Just as with any other type of publication, a full understanding of the text at time T requires a full understanding of the appendices at time T. In other words, to understand the Act at time T, you need the Regulations/Statutory Instruments at time T. This brings us to the first significant complication. 
The workflows and publication cycles for the Acts and the Regulations/Statutory Instruments are different, and the organizations doing the work are different, resulting in a work synchronization and tracking challenge. Tracking Acts is not enough to understand the Acts. You need to track Regulations/Statutory Instruments too and keep the two in sync with each other. The next complication comes from the nature of the Regulations/Statutory Instruments themselves. When the need arises for very detailed knowledge about some regulated activity, there is often a separate association/guild/institute of specialists in that regulated activity. Sometimes, the rules/guidelines in use by the separate entity can become part of the law by being incorporated-by-reference into the regulations/statutory instruments[1]. Sometimes, the separate association/guild/institute is formally given powers to regulate and becomes what is known as a Self Regulatory Organization (SRO)[2]. The difficulty this presents for the legal decision-making box we are creating in our conceptual model of law is that this incorporated-by-reference material may not be freely available in the same way that the Acts and Regulations/Statutory Instruments are generally freely available (at least in unconsolidated forms). In Part 1, reference was made to the legal concept that "ignorance of the law is no defense". Well, you can see the potential problem here with material that is incorporated-by-reference. If I can only read the incorporated-by-reference aspects of the legal corpus at time T by paying money to access them, then the corpus of law it[...]



What is law? - Part 3

Thu, 23 Mar 2017 16:15:00 +0000

Previously: What is Law? - Part 2. The corpus of law - the stuff we all, in principle, have access to and all need to comply with - is not, unfortunately, a nice tidy bundle of materials managed by a single entity. Moreover, the nature of the bundle itself differs between jurisdictions. Ireland is quite different from Idaho. Scotland is quite different from the Seychelles. Jersey is quite different from Japan, and so on. I will focus here on US and UK (Westminster)-style legal corpora to keep the discussion manageable in terms of diversity. Even then, there are many differences in practice and terminology all the way up and down the line, from street ordinances to central government to international treaties and everything in between. I will use some common terminology, but bear in mind that actual terminology and practice in your particular part of the world will very likely be different in various ways - hopefully not in ways that invalidate the conceptual model we are seeking to establish. In general, at the level of countries/states, there are three main sources of law that make up the legal corpus: the judiciary, the government agencies and the legislature/parliament. Let us start with the Legislature/Parliament. This is the source of new laws and amendments to the law, in the form of Acts. These start out as draft documents that go through a consideration, amendment and voting process before they become actual law. In the USA, it is common for these Acts to be consolidated into a "compendium", typically referred to as "The Statutes" or "The Code". The Statutes are typically organized according to some thematic breakdown into separate "titles", e.g. Company Law, Environmental Law and so on. In the UK/Westminster type of Parliament, the government itself does not produce thematic compendia. Instead, the Acts are a cumulative corpus. 
So, to understand, for example, criminal law, it may be necessary to look at many different Acts, going back perhaps centuries, to get the full picture of the "Act" actually in force. In UK-style systems, areas of law may get consolidated periodically through the creation of so-called "consolidations"/"re-statements". These essentially take an existing set of Acts that are in force, repeal them all, and replace them with a single text that is a summation of the individual Acts that it repeals.[1] It is common for third party publishers to step in and help practitioners of particular areas of law by doing unofficial consolidations, to make the job of finding the law in a jurisdiction easier. Depending on how volatile the area of law is in terms of change, the publisher might produce an update every month, every quarter, every year, etc. In the USA, most states do a consolidation in-house in the legislature when they produce The Statutes/Code. In a similar manner to third party publishers, this corpus is updated according to a cycle, but it is typically a longer cycle - every year or two years. So here we get to our first interesting complication with respect to being able to access the law emanating from Legislatures/Parliaments that is in force at any time T. It is very likely that no existing compendium produced by the government itself is fully up to date with respect to time T. There are a number of distinct reasons for this. Firstly, for Parliaments that do not produce compendia, there may not be an available consolidation/re-statement at time T. Therefore, it is necessary to find the set of Acts that were in force at time T, which then need to be read together to understand what the law was at time T. Secondly, f[...]
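The "which Acts were in force at time T" problem can be sketched as a toy point-in-time query. Every Act and date below is invented for illustration, and real commencement/repeal semantics are far messier than a pair of dates:

```python
# A toy sketch of the "in force at time T" problem: each Act carries an
# enactment date and possibly a repeal date, and a point-in-time query
# assembles the set that must be read together. All Acts are fictitious.
from datetime import date

acts = [
    {"title": "Widget Act 1950", "enacted": date(1950, 1, 1), "repealed": date(2001, 6, 1)},
    {"title": "Widget (Amendment) Act 1975", "enacted": date(1975, 3, 1), "repealed": date(2001, 6, 1)},
    {"title": "Widget Consolidation Act 2001", "enacted": date(2001, 6, 1), "repealed": None},
]

def in_force_at(corpus, t: date):
    return [a["title"] for a in corpus
            if a["enacted"] <= t and (a["repealed"] is None or t < a["repealed"])]

print(in_force_at(acts, date(1990, 1, 1)))  # the two pre-consolidation Acts
print(in_force_at(acts, date(2010, 1, 1)))  # just the consolidation Act
```

Even this toy shows why an out-of-date compendium is a problem: the answer depends entirely on T, and the 1990 answer and the 2010 answer share no Acts at all.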



What is law? - Part 2

Wed, 22 Mar 2017 15:04:00 +0000

Previously: What is law? - Part 1. The virtual legal reasoning box we are imagining will clearly need to either contain the data it needs, or be able to reach outside of the box and access whatever data it needs for its legal analysis. In other words, we can imagine the box having the ability to pro-actively reach out and grab legal data from the outside world when it needs it. And/or we can also imagine the box directly storing data so that it does not need to reach out and get it. This brings us to the first little conceptual maneuver we are going to make in order to make reasoning about this whole thing a bit easier. Namely, we are going to treat all legal data that ends up inside the box for the legal analysis as having arrived there from somewhere else. In other words, we don't have to split our thinking into stored-versus-retrieved legal data. All data leveraged by the legal reasoning box is, ultimately, retrieved from somewhere else. It may be that, for convenience, some of the retrieved data is also stored inside the box, but that is really just an optimization - a form of data caching - that we are not going to concern ourselves with at an architectural level, as it does not impact the conceptual model. A nice side effect of this all-data-is-external conceptualization is that it mirrors how the real world of legal decision making in a democracy is supposed to work. That is, the law itself does not have any private data component. The law itself is a corpus of materials available (more on this availability point later!) to all those who must obey the law. Ignorance of the law is no defense.[1] The law is a body of knowledge that is "out there", and we all, in principle, have access to the laws we must obey. When a human being is working on a legal analysis, they do so by getting the law from "out there" into their brains for consideration. In other words, the human brain acts as a cache for legal materials during the analysis process. 
If the brain forgets, the material can be refreshed and nothing is lost. If my brain and your brain are both reaching out to find the law at time T, we both - in principle - are looking at exactly the same corpus of knowledge. I am reminded of John Adams's statement that government should be "a government of laws, not of men."[2] I.e. I might have a notion of what is legal and you might have a different notion of what is legal, but because the law is "out there" - external to both of us - we can both be satisfied that we are both looking at the same corpus of law, fully external to both of us. We may well interpret it differently, but that is another matter, which we will be returning to later. I am also reminded of Ronald Dworkin's Law as Integrity[3], which conceptualizes law as a corpus that is shared by, and interpreted for, the community that creates it. Again the word "interpretation" comes up, but that is another day's work. One thing at a time... So what actually lives purely inside the box, if the law itself does not? Well, I conceptualize it as the legal analysis apparatus itself, as opposed to any materials consumed by that apparatus. Why do I think of this as being inside and not outside the box? Primarily because it reflects how the real world of law actually works. A key point, indeed a feature, of the world of law is that it is not based on one analysis box. It is, in fact, lots and lots of boxes. One for each lawyer and each judge and each court in a jurisdiction... Legal systems are structured so that these analysis boxes can be chained together in an escalation chain (e.g. district courts, appeal courts,[...]
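The all-data-is-external conceptualization, with storage demoted to a cache, can be sketched in a few lines. The fetch function and the citation strings are stand-ins, not a real API:

```python
# A minimal sketch of the all-data-is-external model: the box only ever
# *retrieves* legal materials; any local copy is a cache, an optimization
# that changes nothing architecturally. Everything here is a stand-in.
cache = {}

def fetch_from_outside(citation: str) -> str:
    # stand-in for retrieving the authoritative external text
    return f"<text of {citation}>"

def get_material(citation: str) -> str:
    if citation not in cache:        # if the "brain" forgets, nothing is
        cache[citation] = fetch_from_outside(citation)  # lost: just re-fetch
    return cache[citation]

print(get_material("Act X s.1"))   # fetched from outside
print(get_material("Act X s.1"))   # served from the cache, same text
```

The point of the sketch is that deleting the cache changes performance but never changes an answer, which is exactly why the conceptual model can ignore stored-versus-retrieved.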



What is law? - Part 1

Wed, 15 Mar 2017 18:55:00 +0000

Just about seven years ago now, I embarked on a series of blog posts concerning the nature of legislatures/parliaments. Back then, my goal was to outline a conceptual model of what goes on inside a legislature/parliament in order to inform the architecture of computer systems to support their operation.

The goal of this series of posts is to outline a conceptual model of what law actually is and how it works when it gets outside of the legislatures/parliaments and is used in the world at large.

I think now is a good time to do this because there is growing interest in automation "downstream" of the legislative bodies. One example is GRC (Governance, Risk & Compliance) and all the issues that surround taking legislation/rules/regulations/guidance and instantiating it inside computer systems. Another example is Smart Contracts - turning legal language into executable computer code. Another example is chatbots such as DoNotPay, which encode/interpret legal material in a "consultation" mode with the aid of Artificial Intelligence and Natural Language Processing. Another example is TurboTax and programs like it, which have become de-facto sources of interpretation of legal language in the tax field.

There are numerous other fascinating areas where automation is having a big impact in the world of law. Everything from predicting litigation costs to automating discovery to automating contract assembly. I propose to skip over these for now, and just concentrate on a single question which is this:
      If a virtual "box" existed that could be asked questions about legality of an action X, at some time T, what would need to be inside that box in order for it to reflect the real world equivalent of asking a human authority the same question?
If this thought experiment reminds you of John Searle's Chinese Room Argument then good:-) We are going to go inside that box. We are taking with us Niklaus Wirth's famous aphorism that Algorithms + Data Structures = Programs. We will need a mix of computation (algorithmics) and data structures, but let us start with the data sources because it is the easier of the two.
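The box in the thought experiment can be sketched as an interface: a question about an action X at a time T goes in, an opinion comes out. The signature below is an assumption made purely for illustration; nothing in it prescribes what is inside the box:

```python
# A hedged sketch of the thought-experiment box as an interface. The
# method name and signature are invented for illustration.
from datetime import date

class LegalReasoningBox:
    def ask(self, action: str, at: date) -> str:
        # Algorithms + Data Structures = Programs: the innards need both
        # computation and legal data, however they are obtained.
        raise NotImplementedError("the subject of the posts that follow")

class StubBox(LegalReasoningBox):
    """A placeholder implementation, standing in for librarians, lawyers,
    paralegals... or a fully virtual room."""
    def ask(self, action: str, at: date) -> str:
        return f"Opinion on '{action}' as of {at.isoformat()}: ..."

print(StubBox().ask("parking on Main St", date(2017, 3, 15)))
```

The interface is the Chinese Room slot: the partners judging the room only ever see what `ask` returns, never what implements it.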

What data (and thus data structures) do we need to have inside the box? That is the subject of the next post in this series.

What is law? - Part 2.





Custom semantics inside HTML containers

Mon, 27 Feb 2017 13:58:00 +0000

This article of mine from 2006 (I had to dig it out of the way back machine!) Master Foo's Taxation Theory of Microformats came back to mind today when I read this piece Beyond XML: Making Books with HTML. It is gratifying to see this pattern start to take hold, i.e. leveraging an existing author/edit toolchain rather than building a new one. We do this all the time in Propylon, leveraging off-the-shelf toolsets supporting flexible XML document models (XHTML, .docx, .odt) but encoding the semantics and the business rules we need in QA/QC pipelines. Admittedly, we are mostly dealing with complex, messy document types like legislation, professional guidance, policies, contracts etc., but then again, if your data set is not messy, you might be better off using a relational database to model your data and using the relational model to drive your author/edit sub-system in the classic record/field-oriented style.
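As a sketch of the pattern, here is one way a QA/QC pipeline step might recover custom semantics carried in class attributes inside ordinary XHTML, using only the Python standard library. The class name `defined-term` and the sample markup are invented for illustration:

```python
# Custom semantics inside an HTML container: ordinary markup, with meaning
# carried in a class attribute and recovered by a pipeline step. The class
# name and sample document are hypothetical.
from html.parser import HTMLParser

doc = '<p><span class="defined-term">cow</span> means any bovine animal.</p>'

class DefinedTermExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.capture = False
        self.terms = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs
        if ("class", "defined-term") in attrs:
            self.capture = True

    def handle_endtag(self, tag):
        self.capture = False

    def handle_data(self, data):
        if self.capture:
            self.terms.append(data)

extractor = DefinedTermExtractor()
extractor.feed(doc)
print(extractor.terms)  # ['cow']
```

The author/edit toolchain never needs to know about defined terms; the semantics ride along in a container the toolchain already supports, and the pipeline enforces them.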



Paper Comp Sci Classics

Mon, 20 Feb 2017 10:02:00 +0000

Being a programmer/systems architect/whatever brings with it a big reading load just to stay current. It used to be the case that this, for me, involved consuming lots of physical books and periodicals. Nowadays, less so, because there is so much good stuff online. The glory days of paper-based publications are never coming back, so I think it's worth taking a moment to give a shout out to some of the classics.

My top three comp sci books, the ones I will never throw out are:
- The C Programming Language by Kernighan and Ritchie
- Structure and Interpretation of Computer Programs, Abelson and Sussman
- Gödel, Escher, Bach by Hofstadter

Sadly, I did dump a lot of classic magazines:-/ Byte, Dr Dobbs, PCW....

Your turn:-)





ChatOps, DevOps, Pipes and Chomsky

Fri, 27 Jan 2017 12:39:00 +0000

ChatOps is an area I am watching closely, not because I have a core focus on DevOps per se, but because Conversational User Interfaces is a very interesting area to me and ChatOps is part of that.

Developers - as a gene pool - have a habit of developing very interesting tools and techniques for doing things that save time down in the "plumbing". Deep down the stack where no end-user ever treads.

Some of these tools and techniques stay there forever. Others bubble up and become important parts of the end-user-facing feature sets of applications and/or important parts of the application architecture, one level beneath the surface.

Unix is full of programs, patterns etc. that followed this path. This is from Doug McIlroy in *1964*:

"We should have some ways of coupling programs like garden hose--screw in another segment when it becomes necessary to massage data in another way."

That became the Unix concept of a bounded buffer "pipe" and the now legendary "|" command line operator.

For a long time, the Unix concept of pipes stayed beneath the surface. Today, it is finding its way into front ends (graphics editing pipelines, audio pipelines) and into applications architectures (think Google/Amazon/Microsoft cloud-hosted pipelines.)
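The garden-hose idea is easy to show in miniature. Here is a sketch in Python, where each stage is a generator coupled to the previous one, much as `|` couples processes; the stage names are my own invention, loosely echoing their Unix namesakes.

```python
# The pipe idea in miniature: small stages coupled like garden hose.
# Each stage consumes the previous stage's output lazily.
def lines(text):
    yield from text.splitlines()

def grep(pattern, stream):
    # Keep only lines containing the pattern, like grep(1).
    return (line for line in stream if pattern in line)

def upper(stream):
    # Transform each line, like tr a-z A-Z.
    return (line.upper() for line in stream)

text = "apples\nbananas\ncherries\n"
# Roughly: printf ... | grep an | tr a-z A-Z
result = list(upper(grep("an", lines(text))))
print(result)  # ['BANANAS']
```

Because each stage is lazy, "screwing in another segment" is just wrapping another generator around the stream.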

Something similar may happen with Conversational User Interfaces. Some tough nuts might end up being cracked down in the plumbing layers by DevOps people, for their own internal use, and then bubble up....

The one that springs to mind is that we will need to get to the point where hooking new sources/sinks into ChatBots doesn't involve breaking out the programming tools and the API documentation. The CUI paradigm itself might prove to be part of the solution to the integration problem.

For example, what if "zeroconf" for any given component meant that you were guaranteed to be able to chat to it - not with a fully fledged set of application-specific dialog commands, but with a basis set of dialog components from which a richer dialog could be bootstrapped.
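As a thought experiment in code: suppose every component guaranteed a tiny basis vocabulary ("hello", "commands") from which the rest of its dialog could be discovered. Everything here - the class, the protocol, the command names - is hypothetical, just one way the idea might look.

```python
# Hypothetical "zeroconf" chat contract: every component answers a
# guaranteed basis vocabulary; richer dialog is bootstrapped from it.
class ChatComponent:
    BASIS = {"hello", "commands"}

    def __init__(self, name, extra_commands=()):
        self.name = name
        self.commands = self.BASIS | set(extra_commands)

    def chat(self, message):
        if message == "hello":
            return f"{self.name} here."
        if message == "commands":
            # Meta-chat: ask the component how to talk to it.
            return sorted(self.commands)
        if message in self.commands:
            return f"{self.name}: doing {message}"
        return f"{self.name}: unknown command"

build_bot = ChatComponent("build-bot", ["deploy", "status"])
print(build_bot.chat("commands"))
```

A new source/sink could then be wired into a ChatBot by chatting, not by reading API docs - meta chat doing the integration work.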

Unix bootstrapped a phenomenal amount of integration power from the beautifully simple concept of standard streams for input, output and error. A built-in linguistic layer on top of that for chatting about how to chat, is an interesting idea. Meta chat. Talks-about-talks. That sort of thing.

Dang, just as Chomsky's universal grammar seems to be gathering dissenters...:-)





The new Cobol, the new Bash

Wed, 21 Dec 2016 11:46:00 +0000

Musing, as I do periodically, on what the Next Big Thing in programming will be, I landed on a new (to me) thought.

One of the original design goals of Cobol was English-like nontechnical readability. As access to NLP and AI continues to improve, I suspect we will see a fresh interest in "executable pseudo-code" approaches to programming languages.

In parallel with this, I think we will see a lot of interest in leveraging NLP/AI from chat-bot CUI's in programming command line environments such as the venerable bash shell.

It is a short step from there I think, to a read-eval-print loop for an English-like programming environment that is both the programming language and the operating system shell.

Hmmm....




Recommender algorithms R Us

Fri, 25 Nov 2016 10:16:00 +0000

Tomorrow, at congregation.ie, my topic is recommender algorithms, although, at first blush, it might look like my topic is the role of augmented reality in hamster consumption.

A Pokemon ate my hamster.




J2EE revisited

Fri, 04 Nov 2016 15:31:00 +0000

The sheer complexity of the JavaScript ecosystem at present is eerily reminiscent of the complexity that caused many folk to balk at J2EE/DCOM back in the day.

Just sayin'.