Preview: Sean McGrath
Sean McGrath's Weblog.
Last Build Date: Fri, 24 Mar 2017 11:32:36 +0000
What is law? - Part 3
Thu, 23 Mar 2017 16:15:00 +0000
Previously : What is Law? - Part 2.
The corpus of law - the stuff we all, in principle, have access to and all need to comply with - is not, unfortunately, a nice tidy bundle of materials managed by a single entity. Moreover, the nature of the bundle itself differs between jurisdictions. Ireland is quite different from Idaho. Scotland is quite different from the Seychelles. Jersey is quite different from Japan, and so on.
I will focus here on US and UK (Westminster)-style legal corpora to keep the diversity of the discussion manageable. Even then, there are many differences in practice and terminology all the way up and down the line, from street ordinances to central government to international treaties and everything in between. I will use some common terminology, but bear in mind that actual terminology and practice in your particular part of the world will very likely differ in various ways - hopefully not in ways that invalidate the conceptual model we are seeking to establish.
In general, at the level of countries/states, there are three main sources of law that make up the legal corpus. These are the judiciary, the government agencies and the legislature/parliament.
Let us start with the Legislature/Parliament. This is the source of new laws and amendments to the law in the form of Acts. These start out as draft documents that go through a consideration, amendment and voting process before they become actual law. In the USA, it is common for these Acts to be consolidated into a "compendium", typically referred to as "The Statutes" or "The Code". The Statutes are typically organized according to some thematic breakdown into separate "titles" e.g. Company Law, Environmental Law and so on.
In the UK/Westminster-type of Parliament, the government itself does not produce thematic compendia. Instead, the Acts are a cumulative corpus. So, to understand, for example, criminal law, it may be necessary to look at many different Acts, going back perhaps centuries to get the full picture of the "Act" actually in force. In UK-style systems, areas of law may get consolidated periodically through the creation of so-called "consolidations"/"re-statements". These essentially take an existing set of Acts that are in force, repeal them all and replace them with a single text that is a summation of the individual Acts that it repeals.
It is common for third party publishers to step in and help practitioners of particular areas of law by doing unofficial consolidations to make the job of finding the law in a jurisdiction easier.
Depending on how volatile the area of law is in terms of change, the publisher might produce an update every month, every quarter, every year etc. In the USA, most US states do a consolidation in-house in the legislature when they produce The Statutes/Code. In a similar manner to third party publishers, this corpus is updated according to a cycle, but it is typically a longer cycle - every year or two years.
So here we get to our first interesting complication with respect to being able to access the law emanating from Legislatures/Parliaments that is in force at any time T. It is very likely that no existing compendium produced by the government itself is fully up to date with respect to time T. There are a number of distinct reasons for this.
Firstly, for Parliaments that do not produce "compendiums", there may not be an available consolidation/re-statement at time T. Therefore, it is necessary to find a set of Acts that were in force at time T, which then need to be read together to understand what the law was at time T.
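The point-in-time problem above can be sketched as a simple query over a corpus of Acts. This is a minimal Python sketch; the Act names and dates are invented for illustration, and real commencement and repeal rules are far messier than a single pair of dates.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Act:
    title: str
    commenced: date                   # date the Act entered into force
    repealed: Optional[date] = None   # None = still in force

def in_force_at(corpus, t):
    """The set of Acts that must be read together to know the law at time t."""
    return [a for a in corpus
            if a.commenced <= t and (a.repealed is None or a.repealed > t)]

# A toy corpus with invented Acts
corpus = [
    Act("Widgets Act 1950", date(1950, 1, 1), repealed=date(1995, 6, 1)),
    Act("Widgets (Amendment) Act 1970", date(1970, 1, 1), repealed=date(1995, 6, 1)),
    Act("Widgets Consolidation Act 1995", date(1995, 6, 1)),
]

print([a.title for a in in_force_at(corpus, date(1980, 1, 1))])
print([a.title for a in in_force_at(corpus, date(2000, 1, 1))])
```

The imaginary 1995 consolidation repeals the two earlier Acts and replaces them with a single text, so the answer to "what was the law?" depends entirely on T.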
Secondly, for Legislatures that produce compendia in the form of Statutes, these typically lag behind the Acts by anything from months to years. Typically, when a Legislature is "in session", busily working on new Acts, it is not working on consolidating them as they pass into law. Instead, they are accumulated into a publication, typically called the Session Laws, and the consolidation process happens after the session has ended. T[...]
What is law? - Part 2
Wed, 22 Mar 2017 15:04:00 +0000
Previously: What is law? - Part 1.
The virtual legal reasoning box we are imagining will clearly need to either contain the data it needs, or be able to reach outside of the box and access whatever data it needs for its legal analysis. In other words, we can imagine the box having the ability to pro-actively reach out and grab legal data from the outside world when it needs it. And/or we can also imagine the box directly storing data so that it does not need to reach out and get it.
This brings us to the first little conceptual maneuver we are going to make in order to make reasoning about this whole thing a bit easier. Namely, we are going to treat all legal data that ends up inside the box for the legal analysis as having arrived there from somewhere else. In other words, we don't have to split our thinking into stored-versus-retrieved legal data. All data leveraged by the legal reasoning box is, ultimately, retrieved from somewhere else. It may be that for convenience, some of the retrieved data is also stored inside the box but that is really just an optimization - a form of data caching - that we are not going to concern ourselves with at an architectural level as it does not impact the conceptual model.
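The caching-as-optimization point can be made concrete with a tiny Python sketch. The function name and return value are invented stand-ins; the point is only that the cache can be deleted without changing what the box computes.

```python
import functools

@functools.lru_cache(maxsize=None)
def fetch_law(citation: str) -> str:
    """Stand-in for retrieving a legal text from the world 'out there'.
    The lru_cache decorator is pure optimization: remove it and the box
    still works, it just reaches outside more often."""
    return f"<text of {citation}>"

fetch_law("Act A")   # retrieved from outside the box
fetch_law("Act A")   # served from the in-box cache
print(fetch_law.cache_info().hits)
```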
A nice side effect of this all-data-is-external conceptualization is that it mirrors how the real world of legal decision making in a democracy is supposed to work. That is, the law itself does not have any private data component. The law itself is a corpus of materials available (more on this availability point later!) to all those who must obey the law. Ignorance of the law is no defense.
The law is a body of knowledge that is "out there" and we all, in principle, have access to the laws we must obey. When a human being is working on a legal analysis, they do so by getting the law from "out there" into their brains for consideration. In other words, the human brain acts as a cache for legal materials during the analysis process. If the brain forgets, the material can be refreshed and nothing is lost. If my brain and your brain are both reaching out to find the law at time T, we both - in principle - are looking at exactly the same corpus of knowledge.
I am reminded of John Adams' statement that government should be "a government of laws, not of men". I.e. I might have a notion of what is legal and you might have a different notion, but because the law is "out there" - external to both of us - we can both be satisfied that we are looking at the same corpus of law. We may well interpret it differently, but that is another matter we will be returning to later.
I am also reminded of Ronald Dworkin's Law as Integrity, which conceptualizes law as a corpus that is shared by, and interpreted for, the community that creates it. Again, the word "interpretation" comes up, but that is another day's work. One thing at a time...
So what actually lives purely inside the box if the law itself does not? Well, I conceptualize it as the legal analysis apparatus itself, as opposed to any materials consumed by that apparatus. Why do I think of this as being inside and not outside the box? Primarily because it reflects how the real world of law actually works. A key point, indeed a feature, of the world of law is that it is not based on one analysis box. It is, in fact, lots and lots of boxes. One for each lawyer and each judge and each court in a jurisdiction...
Legal systems are structured so that these analysis boxes can be chained together in an escalation chain (e.g. district courts, appeal courts, supreme courts etc.) The decision issued by one box can be appealed to a higher box in the decision-making hierarchy. Two boxes at the same level in the hierarchy might look at the facts of a case and arrive at diametrically opposing opinions. Two judges in the same court, looking at the same case might also come to diametrically different opinions of the same set of facts presented to the court.
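The many-boxes idea can be sketched in a few lines of Python. The decision rules below are invented and absurdly mechanical - real legal analysis is nothing like this - but the sketch shows the structural point: two boxes given identical facts can disagree, and a chain of boxes forms the escalation path.

```python
# Each "analysis box" is just a function from facts to an opinion.
def district_court(facts):
    return "liable" if facts["evidence_strength"] > 5 else "not liable"

def appeal_court(facts):
    return "liable" if facts["evidence_strength"] > 7 else "not liable"

def escalate(chain, facts):
    """Run the same facts through each box in the escalation chain."""
    return [box(facts) for box in chain]

opinions = escalate([district_court, appeal_court], {"evidence_strength": 6})
print(opinions)  # the two boxes disagree on the same facts
```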
This is the point at which most IT [...]
What is law? - Part 1
Wed, 15 Mar 2017 18:55:00 +0000
Just about seven years ago now, I embarked on a series of blog posts
concerning the nature of legislatures/parliaments. Back then, my goal was to outline a conceptual model of what goes on inside a legislature/parliament in order to inform the architecture of computer systems to support their operation.
The goal of this series of posts is to outline a conceptual model of what law actually is and how it works when it gets outside of the legislatures/parliaments and is used in the world at large.
I think now is a good time to do this because there is growing interest around automation "downstream" of the legislative bodies. One example is GRC
- Governance, Risk & Compliance and all the issues that surround taking legislation/rules/regulations/guidance and instantiating it inside computer systems. Another example is Smart Contracts
- turning legal language into executable computer code. Another example is Chatbots such as DoNotPay
which encode/interpret legal material in a "consultation" mode with the aid of Artificial Intelligence and Natural Language Processing. Another example is TurboTax and programs like it which have become de-facto sources of interpretation of legal language in the tax field.
There are numerous other fascinating areas where automation is having a big impact in the world of law. Everything from predicting litigation costs to automating discovery to automating contract assembly. I propose to skip over these for now, and just concentrate on a single question which is this:
If a virtual "box" existed that could be asked questions about legality of an action X, at some time T, what would need to be inside that box in order for it to reflect the real world equivalent of asking a human authority the same question?
If this thought experiment reminds you of John Searle's Chinese Room Argument
then good:-) We are going to go inside that box. We are taking with us Niklaus Wirth's famous aphorism that Algorithms + Data Structures = Programs
. We will need a mix of computation (algorithmics) and data structures, but let us start with the data sources because it is the easier of the two.
What data (and thus data structures) do we need to have inside the box? That is the subject of the next post in this series.
What is law? - Part 2
Custom semantics inside HTML containers
Mon, 27 Feb 2017 13:58:00 +0000
This article of mine from 2006 (I had to dig it out of the Wayback Machine!), Master Foo's Taxation Theory of Microformats,
came back to mind today when I read this piece: Beyond XML: Making Books with HTML
It is gratifying to see this pattern start to take hold. I.e. leveraging an existing author/edit toolchain rather than building a new one. We do this all the time in Propylon
, leveraging off-the-shelf toolsets supporting flexible XML document models (XHTML, .docx, .odt) but encoding the semantics and the business rules we need in QA/QC pipelines.
Admittedly, we are mostly dealing with complex, messy document types like legislation, professional guidance, policies, contracts etc. Then again, if your data set is not messy, you might be better off using a relational database to model your data and letting the relational model drive your author/edit sub-system in the classic record/field-oriented style.
Paper Comp Sci Classics
Mon, 20 Feb 2017 10:02:00 +0000
Being a programmer/systems architect/whatever brings with it a big reading load just to stay current. It used to be the case that this, for me, involved consuming lots of physical books and periodicals. Nowadays, less so because there is so much good stuff online. The glory-days of paper-based publications are never coming back so I think it's worth taking a moment to give a shout out to some of the classics.
My top three comp sci books, the ones I will never throw out are:
- The C Programming Language by Kernighan and Ritchie
- Structure and Interpretation of Computer Programs, Abelson and Sussman
- Gödel, Escher, Bach, Hofstadter
Sadly, I did dump a lot of classic magazines:-/ Byte, Dr Dobbs, PCW....
ChatOps, DevOps, Pipes and Chomsky
Fri, 27 Jan 2017 12:39:00 +0000
ChatOps is an area I am watching closely, not because I have a core focus on DevOps per se, but because Conversational User Interfaces is a very interesting area to me and ChatOps is part of that.
Developers - as a gene pool - have a habit of developing very interesting tools and techniques for doing things that save time down in the "plumbing". Deep down the stack where no end-user ever treads.
Some of these tools and techniques stay there forever. Others bubble up and become important parts of the end-user-facing feature sets of applications and/or important parts of the application architecture, one level beneath the surface.
Unix is full of programs, patterns etc. that followed this path. This is from Doug McIlroy in *1964*
"We should have some ways of coupling programs like garden hose--screw in another segment when it becomes necessary to massage data in another way."
That became the Unix concept of a bounded buffer "pipe" and the now legendary "|" command line operator.
For a long time, the Unix concept of pipes stayed beneath the surface. Today, it is finding its way into front ends (graphics editing pipelines, audio pipelines) and into applications architectures (think Google/Amazon/Microsoft cloud-hosted pipelines.)
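McIlroy's garden-hose coupling can be mimicked in most languages; here is a small Python sketch using generators as the hose segments (the log lines are made up for illustration):

```python
def grep(pattern, lines):
    """Pass through only the lines containing pattern - one hose segment."""
    for line in lines:
        if pattern in line:
            yield line

def upcase(lines):
    """Another segment, screwed onto the first."""
    for line in lines:
        yield line.upper()

log = ["ok: started", "error: disk full", "ok: done", "error: timeout"]
# roughly: cat log | grep error | tr '[:lower:]' '[:upper:]'
print(list(upcase(grep("error", log))))
```

Each stage is lazy, like a bounded-buffer pipe: nothing is computed until the downstream segment pulls on the hose.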
Something similar may happen with Conversational User Interfaces. Some tough nuts might end up being cracked down in the plumbing layers by DevOps people, for their own internal use, and then bubble up....
The one that springs to mind is that we will need to get to the point where hooking in new sources/sinks into ChatBots doesn't involve breaking out the programming tools and the API documentation. The CUI paradigm itself might prove to be part of the solution to the integration problem.
For example, what if a "zeroconf" for any given component was that you could be guaranteed to be able to chat to it - not with a fully fledged set of application-specific dialog commands, but with a basis set of dialog components from which a richer dialog could be bootstrapped.
Unix bootstrapped a phenomenal amount of integration power from the beautifully simple concept of standard streams for input, output and error. A built-in linguistic layer on top of that, for chatting about how to chat, is an interesting idea. Meta chat. Talks-about-talks. That sort of thing.
Dang, just as Chomsky's universal grammar seems to be gathering dissenters...:-)
The new Cobol, the new Bash
Wed, 21 Dec 2016 11:46:00 +0000
Musing, as I do periodically, on what the Next Big Thing in programming will be, I landed on a new (to me) thought.
One of the original design goals of Cobol was English-like nontechnical readability. As access to NLP and AI continues to improve, I suspect we will see a fresh interest in "executable pseudo-code" approaches to programming languages.
In parallel with this, I think we will see a lot of interest in leveraging NLP/AI from chat-bot CUI's in programming command line environments such as the venerable bash shell.
It is a short step from there I think, to a read-eval-print loop for an English-like programming environment that is both the programming language and the operating system shell.
Nameless things within nameless things
Wed, 19 Oct 2016 13:32:00 +0000
So, I got to thinking again about one of my pet notions - names/identifiers - and the unreasonable amount of time IT people spend naming things, then mapping them to other names, then putting the names into categories that are .... named.... aliasing, mapping, binding, bundling, unbundling, currying, lambdaizing, serializing, reifying, templating, substituting, duck-typing, shimming, wrapping...
We do it for all forms of data. We do it for all forms of algorithmic expression. We name everything. We name 'em over and over again. And we keep changing the names as our ideas change, and the task to be accomplished changes, and the state of the data changes....
It gets overwhelming. And when it does, we have a tendency to make matters worse by adding another layer of names. A new data description language. A new DSL. A new pre-processor.
Adding a new layer of names often *feels* like progress. But it often is not, in my experience.
Removing the need for layers of names is one of the great skills in IT in my opinion. It is so undervalued, the skill doesn't have um, a, name.
I am torn between thinking that this is just *perfect* and thinking it is unfortunate.
Wed, 05 Oct 2016 16:16:00 +0000
It occurred to me today that the time-honored mathematical technique of taking a problem you cannot solve and re-formulating it as a problem (perhaps in a completely different domain) that you can solve, is undergoing a sort of Cambrian explosion.
For example, using big data sets and deep learning, machines are getting really good at parsing images of things like cats
The more general capability is to use a zillion images of things-like-X to properly classify a new image being either like-an-X or not-like-an-X, for any X you like.
But X is not limited to things we can take pictures of. Images don't have to come from cameras. We can create images from any abstraction we like. All we need is an encoding strategy....a Semantic CODEC if you will.
We seem to be hurtling towards large infrastructure that is specifically optimized for image classification. It follows, I think, that if you can re-cast a problem into an image recognition problem - even if it has nothing to do with images - you get to piggy-back on that infrastructure.
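A "Semantic CODEC" in this sense only needs to map arbitrary data onto a grid of pixel intensities. Here is a minimal, made-up encoding in Python - a real encoding would be chosen carefully so the classifier can see the structure that matters - just to show how little machinery the idea requires:

```python
def to_image(data: bytes, width: int = 8):
    """Encode any byte string as a 2D grid of 0-255 'pixel' values,
    padding the final row with zeros."""
    padded = data + b"\x00" * (-len(data) % width)
    return [list(padded[i:i + width]) for i in range(0, len(padded), width)]

img = to_image(b"not an image, but now it is")
print(len(img), "rows of", len(img[0]), "pixels")
```

Anything expressible as bytes - a log file, a protein sequence, a contract - can be pushed through such a codec and handed to image-classification infrastructure.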
The next big exponential step in AI
Wed, 21 Sep 2016 13:11:00 +0000
Assuming, for the moment, that the current machine learning bootstrap
pans out, the next big multiplier is already on the horizon.
As more computing is expressed in forms that require super-fast, super-scalable linear algebra algorithms (a *lot* of machine learning techniques do this), it becomes very appealing to find ways to execute them on quantum computers. Reason being, exponential increases are possible in terms of parallel execution of certain operations.
There is a fine tradition in computing of scientists getting ahead of what today's technology can actually do. Charles Babbage, Ada Lovelace, Alan Turing, Doug Engelbart, Vannevar Bush - all worked out computing stuff that was way ahead of the reality curve, and then reality caught up with their work.
If/when quantum computing gets out of the labs, the algorithms will already be sitting in the Machine Learning libraries ready to take advantage of them, because forward looking researchers are working them out, now.
In other words, it won't be a case of "Ah, cool! We have access to a quantum computer! Let's spend a few years working out how best to use them." Instead it will be "Ah, cool! We have access to a quantum computer! Let's deploy all the stuff we have already worked out and implemented, in anticipation of this day."
It reminds me of the old adage (attributable to Pólya, I think) about "Solving for N". If I write an algorithm that can leverage N compute nodes, then it does not matter that I might only be able to deploy it with N = 1 because of current limitations. As soon as new compute nodes become available, I can immediately set N = 2 or 2000 or 20000000000 and run stuff.
With the abstractions being crafted around ML libraries today, the "N" is being prepped for some very large potential values of N.
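The "Solve for N" shape can be sketched with Python's executor pools (thread-based here so the sketch runs anywhere; the workload is a toy). The point is only that N is a parameter, not a design constraint:

```python
from concurrent.futures import ThreadPoolExecutor

def work(chunk):
    return sum(x * x for x in chunk)

def solve_for_n(data, n):
    """Split the job across n workers; n can be 1 today, 20000 tomorrow."""
    chunks = [data[i::n] for i in range(n)]
    with ThreadPoolExecutor(max_workers=n) as pool:
        return sum(pool.map(work, chunks))

data = range(1_000)
print(solve_for_n(data, 1) == solve_for_n(data, 8))  # same answer for any N
```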
Deep learning, Doug Engelbart and Jimi Hendrix
Thu, 15 Sep 2016 10:49:00 +0000
The late great Doug Engelbart did foundational work in many areas of computing and was particularly interested in the relationship between human intelligence and machine intelligence.
Even a not-so-smart machine can augment human productivity if even simple cognitive tasks can be handled by the machine. Reason being, machines are super fast. Super fast can compensate for "not-so-smart" in many useful domains. Simple totting up of figures, printing lots of copies of a report, shunting lots of data around, whatever.
How do you move a machine from "not-so-smart" to "smarter" for any given problem? The obvious way is to get the humans to do the hard thinking and come up with a smarter way. It is hard work because the humans have to be able to oscillate between smart thinking and thinking like not-so-smart machines: ultimately the smarts have to be fed to the not-so-smart machine as grindingly meticulous instructions written in computer-friendly (read "not-so-smart") programs. Simple language, because machines can only grok simple language.
The not-so-obvious approach is to create a feedback loop where the machine can change its behavior over time by feeding outputs back into inputs. How to do that? Well, you got to start somewhere so get the human engineers to create feedback loops and teach them to the computer. You need to do that to get the thing going - to bootstrap
then stand back....
Things escalate pretty fast when you create feedback loops! If the result you get is a good one, it is likely to be *a lot* better than your previous best because feedback loops are exponential.
Engelbart's insight was to recognize that the intelligent, purposeful creation of feedback loops can be a massive multiplier: both for human intellect at the species level, and at the level of machines. When it works, it can move the state of the art of any problem domain forward, not by a little bit, but by *a lot*.
A human example would be the invention of writing. All of a sudden knowledge could survive through generations and could spread exponentially better than it could by oral transmission.
The hope and expectation around Deep Learning is that it is basically a Doug Engelbart Bootstrap for machine intelligence. A smart new feedback loop in which the machines can now do a vital machine intelligence step ("feature identification") that previously required humans. This can/should/will move things forward *a lot* relative to the last big brouhaha around machine intelligence in the Eighties.
The debates about whether or not this is really "intelligence" or just "a smarter form of dumb" will rage on in parallel, perhaps forever.
Relevance to Jimi Hendrix? See https://www.youtube.com/watch?v=JMyoT3kQMTg
The scourge of easily accessible abstraction
Thu, 11 Aug 2016 11:57:00 +0000
In software, we are swimming in abstractions. We also have amazingly abstract tools that greatly enhance our ability to create even more abstractions.
"William of Ockham admonished philosophers to avoid multiplying entities, but computers multiple them faster than his razor can shave." -- John F. Sowa, Knowledge Representation.
Remember that the next time you are de-referencing a URL to get the address of a pointer to a factory that instantiates an instance of meta-class for monad constructors...
Sebastian Rahtz, RIP
Wed, 03 Aug 2016 10:57:00 +0000
It has just now come to my attention that Sebastian Rahtz passed away earlier this year.
RIP. Fond memories of conversations on the xml-dev mailing list.
Software self analysis again
Wed, 20 Jul 2016 14:11:00 +0000
Perhaps a better example for the consciousness post would have been to allow the application on the operating system to have access to the source code for the hypervisor two levels down. That way, the app could decide to change the virtualization of CPUs or the contention algorithms on the virtualized network interfaces and bootstrap itself a new hypervisor to host its own OS.
The question naturally arises, what scope has an app - or an OS - got for detecting that it is on a hypervisor rather than real hardware? If the emulation is indistinguishable, you cannot tell - by definition. At which point the emulated thing and the thing emulated have become indistinguishable. At which point you have artificially re-created that thing.
This is all well worn territory in the strong Vs weak AI conversations of course.
My favorite way of thinking about it is this:
1 - we don't understand consciousness and thus we cannot be sure we won't re-create it by happenstance, as we muck about with making computers act more intelligently.
2 - If we do create it, we likely won't know how we did it (especially since it is likely to be a gradual, multi-step thing rather than a big-bang thing)
3 - because we won't know what we did to create it, we won't know how to undo it or switch it off
4 - if it improves by iteration and it iterates a lot faster in silicon than we do in carbon, we could find ourselves a distant second in the earth-based intelligence ranking table rather quickly :-)
Best if we run all the electrical generation on the planet with analog methods and vacuum tubes so that we can at least starve it of electricity if push comes to shove:-)
English as an API
Sat, 16 Jul 2016 16:55:00 +0000
Like the last post, I am filing this one under "speculative".
Chat bots strip away visual UI elements in favor of natural language in a good old fashioned text box.
Seems kind of retro but, perhaps something deeper is afoot. For the longest time, we have used the phrase "getting applications to talk to each other" as a sort of business-level way of saying, "get applications to understand each others APIs and/or data structures."
Perhaps, natural language - or a controlled version of natural language - will soon become a viable way of getting applications to talk to each other. I.e. chatbots chatting with other chatbots, by sending/receiving English.
One of the big practical upshots of that - if it transpires - is that non-programmers will have a new technique for wiring up disparate applications. I.e. talk to each of them via their chat interface, then gradually get them talking to each other...
The surprising role of cloud computing in the understanding of consciousness
Sat, 16 Jul 2016 16:00:00 +0000
I am filing this one under "extremely speculative".
I think it was Douglas Hofstadter's book "I am a strange loop" that first got me thinking about the possible roles of recursion and self-reference in understanding consciousness.
Today - for no good reason - it occurred to me that if the Radical Plasticity Theory
is correct, to emulate/re-create consciousness we need to create the conditions for consciousness to arise. Doing that requires arranging a computing system that can observe every aspect of itself in operation.
For most of the history of computing, we have had a layer of stuff that the software could only be dimly aware of, called the hardware.
With virtualization and cloud computing, more and more of that hardware layer is becoming, itself, software and thus, in principle, open to fine grained examination by the software running on....the software, if you see what I mean.
To take an extreme example, a unix application could today be written that introspects itself, concludes that the kernel scheduler logic should be changed, writes out the modified source code for the new kernel, re-compiles it, boots a Unix OS image based on it, and transplants itself into a process on this new kernel.
Emulation versus re-creation of consciousness. Not going there.
The subtle complexities of legal/contractual ambiguity
Mon, 20 Jun 2016 13:51:00 +0000
The law is not a set of simple rules, and the rule of law is not - and arguably cannot be - reduced to a Turing Machine evaluating some formal expression of said rules.
A theme of mine
for some time has been how dangerous it is to jump to conclusions about the extent to which the process of law - and its expression in the laws themselves - can be looked upon purely as a deductive logic system in disguise.
Laws, contracts etc. often contain ambiguities that are there on purpose. Some are tactical. Some are there in recognition of the reality that concepts like "fairness" and "reasonable efforts" are both useful and unquantifiable.
In short, there are tactical, social and deep jurisprudence-related reasons for the presence of ambiguity in laws/contracts.
Trying to remove them can lead to unpleasant results.
Case in point: the draining of millions of dollars from the DAO. See writeup on Bloomberg: Ethereum Smart Contracts
25 years of the Internet in Ireland - a personal recollection of the early days
Fri, 17 Jun 2016 13:25:00 +0000
So today is the Internet's 25th anniversary in Ireland.
In 1991 I was working with a financial trading company, developing technical analysis software for financial futures traders in 8086 assembly language and C using PCs equipped with TMS34010
I cannot remember how exactly... possibly through the Unix Users Group... but I ended up getting a 4800 bps modem connection to a Usenet feed from Trinity via the SLIP protocol
Every day I would dial up
and download comp.text.sgml from Usenet onto my Sun Roadrunner
Not long thereafter, Ireland Online happened and I was then dialling up Furbo in the Gaeltacht of Connemara because it was the first access point to the WWW in Ireland.
I ditched my CompuServe e-mail account not long after and became email@example.com on comp.text.sgml
So much has changed since those early days... and yet so much has stayed the same.
From textual authority to interpretive authority: the next big shift in legal and regulatory informatics
Fri, 20 May 2016 09:55:00 +0000
This paper, law and algorithms in the public domain, from a journal on applied ethics, is representative, I think, of the thought processes going on around the world at present regarding machine intelligence and what it means for law/regulation.
It seems to me that there has been a significant uptick in people from diverse science/philosophy backgrounds taking an interest in the world of law. These folks range from epistemologists to bioinformaticians to statisticians to network engineers. Many of them are looking at law/regulation through the eyes of digital computing and asking, basically, "Is law/regulation computation?" and also "If it is not currently computation, can it be? Will it be? Should it be?"
These are great, great questions. We have a long way to go yet in answering them. Much of the world of law and the world of IT is separated by a big chasm of mutual mis-understanding at present. Many law folk - with some notable exceptions - do not have a deep grasp of computing and many computing folk - with some notable exceptions - do not have a deep grasp of law.
Computers are everywhere in the world of law, but to date, they have primarily been wielded as document management/search&retrieval tools. In this domain, they have been phenomenally successful. To the point where significant textual authority has now transferred to digital modalities from paper.
Those books of caselaw and statute and so on, on the shelves, in the offices. They rarely move from the shelves. For much practical, day-to-day activity, the digital instantiations of these legislative artifacts are normative and considered authoritative by practitioners. How often these days do legal researchers go back to the paper-normative books? Is it even possible anymore in a world where more and more paper publication is being replaced by cradle-to-grave digital media? If the practitioners and the regulators and the courts are all circling around a set of digital artifacts, does it matter any more if the digital artifact is identical to the paper one?
Authority is a funny thing. It is mostly a social construct. I wrote about this some years ago here: Would the real, authentic copy of the document please stand up? If the majority involved in the world of law/regulation use digital information resource X even though strictly speaking X is a "best efforts facsimile" of paper information resource Y, then X has de-facto authority even though it is not de-jure authoritative. (The fact that de-jure texts are often replaced by de facto texts in the world of jure - law! - is a self-reference that will likely appeal to anyone who has read The Paradox of Self Amendment by Peter Suber.)
We are very close to the point where digital resources in law/regulation have authority of expression, but authority of expression is a different kettle of fish entirely from authority of interpretation.
It is in this chasm between authority of expression and authority of interpretation that most of the mutual misunderstandings between law and computing will sit in the years ahead, I think. On one hand, law folk will be too quick to dismiss what the machines can do in the interpretive space; on the other, IT people will be too quick to think the machines can quickly take over the interpretive space.
The truth - as ever - is somewhere in between. Nobody knows yet where the dividing line is, but the IT people are sure to move the line from where it currently is (in legal "expression" space) to a new location (in legal "interpretation" space).
From BSOD to UOD
Mon, 16 May 2016 14:08:00 +0000
I don't get many "Blue Screen of Death" type events these days in any of the Ubuntu, Android, iOS, or Windows environments I interact with. Certainly not like the good old days, when rebooting every couple of hours felt normal. (I used to keep my foot touching the side of my deskside machine. The vibration of the hard disk was a good indicator of life back in the good old days. Betcha that health monitor wasn't considered in the move to SSDs. Nothing even to listen to these days, never mind touch.)
I do get Updates of Death though - and these are nasty critters!
For example, your machine auto-updates and disables the network connection leaving you unable to get at the fix you just found online....
Mon, 09 May 2016 16:13:00 +0000
is a thing. Wow.
As a thing, it is part of a bigger thing.
That bigger thing seems to be this: given enough cheap compute power, the time taken to perform zillions of iterations can be made largely irrelevant.
Start stupid. Just aim to be fractionally less stupid the next time round, and iterations will do the rest.
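That "start stupid, get fractionally less stupid each time round" loop can be sketched in a few lines. This is just an illustrative hill-climbing toy of my own devising - the target function, step size, and iteration count are all made-up assumptions, not anyone's real system:

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

def hill_climb(score, start, step=0.1, iterations=10_000):
    """Begin with a stupid guess; keep any random tweak that scores better."""
    best, best_score = start, score(start)
    for _ in range(iterations):
        candidate = best + random.uniform(-step, step)
        candidate_score = score(candidate)
        if candidate_score > best_score:  # fractionally less stupid
            best, best_score = candidate, candidate_score
    return best

# Example: find the x that maximises -(x - 3)^2, i.e. x near 3.
result = hill_climb(lambda x: -(x - 3) ** 2, start=0.0)
```

No cleverness anywhere in the loop - just cheap iterations doing the rest, which is the point of the post.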
The weirdest thing about all of this for me is that if/when iterated algorithmic things start showing smarts, we will know the causal factors that led to the increased smartness, but not the rationale for any individual incidence of smartness.
As a thing, that is part of a bigger thing.
That bigger thing is that these useful-but-unprovable things will be put to use in areas where humankind has previously expected the presence of explanation. You know, rules, reasoning, all that stuff.
As a thing, that is part of a bigger thing.
That bigger thing is that in many areas of human endeavor it is either impossible to get explanations (i.e. experts who know what to do, but cannot explain why in terms of rules), or the explanations need to be taken with a pinch of post-hoc-ergo-propter-hoc salt, or a pinch of retroactive goal-setting salt.
As a thing, that is part of a bigger thing.
When the machines come, and start doing clever things but cannot explain why....
...they will be just like us.
Statistics and AI
Thu, 05 May 2016 09:44:00 +0000
We live at a time when there is more interest in AI than ever, and it is growing every day.
One of the first things that happens when a genre of computing starts to build up steam is that pre-existing concepts get subsumed into the new genre. Sometimes, the adopted concepts are presented in a way that would suggest they are new concepts, created as part of the new genre. Sometimes they are. But sometimes they are not.
For example, I recently read some material that presented linear regression as a machine learning technique.
Now of course, regression has all sorts of important contributions to make to machine learning, but it was invented/discovered long, long before the machines came along.
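To underline how little machinery linear regression actually needs: ordinary least squares for a straight line is a closed-form formula you can write out by hand, no training loop required. A minimal sketch in plain Python (the data points are made up for illustration):

```python
def linear_regression(xs, ys):
    """Ordinary least squares fit of y = a*x + b - the pre-ML way."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope is covariance(x, y) divided by variance(x).
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Points lying exactly on y = 2x + 1.
slope, intercept = linear_regression([0, 1, 2, 3], [1, 3, 5, 7])
```

The same technique turns up inside machine learning libraries, but the mathematics predates them by a century or so.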
Cutting the inconvenient protrusions from the jigsaw pieces
Thu, 14 Apr 2016 12:34:00 +0000
There is a school of thought that goes like this....
(1) To manage data means to put it in a database
(2) A 'database' means a relational database. No other database approach is really any good.
(3) If the data does not fit into the relational data model, well just compromise the data so that it does. Why? See item (1).
I have no difficulty whatsoever with recommending relational databases where there is a good fit between the data, the problem to be solved, and the relational database paradigm.
Where the fit isn't good, I recommend something else. Maybe indexed flat files, or versioned spreadsheets, documents, a temporal data store... whatever feels least like I am cutting important protrusions off the data and off the problem to be solved.
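To make those "protrusions" concrete, here is a small illustrative record of my own invention with ragged nesting - exactly the shape a flat relational schema would force you to cut down or pad out, but which a document-oriented store (here, just JSON round-tripping) keeps intact:

```python
import json

# A made-up record with ragged, nested structure: one section has no
# amendments, another has two. Flattening this into fixed-column tables
# means either NULL-padding or lopping the nesting off.
act = {
    "title": "Example Act",
    "sections": [
        {"number": 1, "text": "Definitions.", "amendments": []},
        {"number": 2, "text": "Offences.",
         "amendments": [{"year": 1999, "note": "substituted"},
                        {"year": 2007, "note": "repealed in part"}]},
    ],
}

# A document store keeps the shape as-is; serialise and restore losslessly.
restored = json.loads(json.dumps(act))
```

The point is not that JSON files are a database - it is that the natural shape of the data survives the round trip without any protrusions being cut off.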
However, whenever I do that, I am sure to have to answer the "Why not just store it in [Insert RDB name]?" question.
It is an incredibly strong meme in modern computing.