Subscribe: Sean McGrath
http://seanmcgrath.blogspot.com/rss/seanmcgrath.xml
Language: English

Sean McGrath



Sean McGrath's Weblog.



Last Build Date: Mon, 28 Nov 2016 19:11:10 +0000

 



Recommender algorithms R Us

Fri, 25 Nov 2016 10:16:00 +0000

Tomorrow, at congregation.ie, my topic is recommender algorithms, although, at first blush, it might look like my topic is the role of augmented reality in hamster consumption.

A Pokemon ate my hamster.




J2EE revisited

Fri, 04 Nov 2016 15:31:00 +0000

The sheer complexity of the JavaScript ecosystem at present is eerily reminiscent of the complexity that caused many folk to balk at J2EE/DCOM back in the day.

Just sayin'.




Nameless things within nameless things

Wed, 19 Oct 2016 13:32:00 +0000

So, I got to thinking again about one of my pet notions - names/identifiers - and the unreasonable amount of time IT people spend naming things, then mapping them to other names, then putting the names into categories that are .... named.... aliasing, mapping, binding, bundling, unbundling, currying, lambda-izing, serializing, reifying, templating, substituting, duck-typing, shimming, wrapping...

We do it for all forms of data. We do it for all forms of algorithmic expression. We name everything. We name 'em over and over again. And we keep changing the names as our ideas change, and the task to be accomplished changes, and the state of the data changes....

It gets overwhelming. And when it does, we have a tendency to make matters worse by adding another layer of names. A new data description language. A new DSL. A new pre-processor.

Adding a new layer of names often *feels* like progress. But it often is not, in my experience.

Removing the need for layers of names is, in my opinion, one of the great skills in IT. It is so undervalued that the skill doesn't have, um, a... name.

I am torn between thinking that this is just *perfect* and thinking it is unfortunate.




Semantic CODECs

Wed, 05 Oct 2016 16:16:00 +0000

It occurred to me today that the time-honored mathematical technique of taking a problem you cannot solve and re-formulating it as a problem (perhaps in a completely different domain) that you can solve is undergoing a sort of Cambrian explosion.

For example, using big data sets and deep learning, machines are getting really good at parsing images of things like cats.

The more general capability is to use a zillion images of things-like-X to classify a new image as either like-an-X or not-like-an-X, for any X you like.

But X is not limited to things we can take pictures of. Images don't have to come from cameras. We can create images from any abstraction we like. All we need is an encoding strategy....a Semantic CODEC if you will.

We seem to be hurtling towards large infrastructure that is specifically optimized for image classification. It follows, I think, that if you can re-cast a problem into an image recognition problem - even if it has nothing to do with images - you get to piggy-back on that infrastructure.
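
To make the encoding idea concrete, here is a minimal sketch (mine, not from the post) of one possible "Semantic CODEC": it packs an arbitrary serialized abstraction into a fixed-size grayscale image array of the kind an image-classification pipeline expects. The 64x64 size, the zero-padding scheme and the NumPy representation are illustrative assumptions only.

    # A minimal "Semantic CODEC" sketch: encode arbitrary bytes (any
    # abstraction you can serialize) as a fixed-size grayscale image
    # array that an off-the-shelf image pipeline could consume.
    # The 64x64 size and zero-padding are arbitrary illustrative choices.
    import numpy as np

    def encode_to_image(payload: bytes, side: int = 64) -> np.ndarray:
        """Pack bytes into a side x side uint8 array (a grayscale 'image')."""
        buf = np.frombuffer(payload, dtype=np.uint8)
        cells = side * side
        if len(buf) > cells:
            raise ValueError("payload too large for chosen image size")
        padded = np.zeros(cells, dtype=np.uint8)
        padded[:len(buf)] = buf
        return padded.reshape(side, side)

    def decode_from_image(img: np.ndarray, length: int) -> bytes:
        """Recover the original bytes (the CODEC is lossless by construction)."""
        return img.reshape(-1)[:length].tobytes()

    if __name__ == "__main__":
        record = b'{"user": 42, "events": [1, 5, 9]}'   # any serialized abstraction
        img = encode_to_image(record)
        assert decode_from_image(img, len(record)) == record
        print(img.shape)   # (64, 64) -- ready to feed to an image classifier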

Hmmmmm.



The next big exponential step in AI

Wed, 21 Sep 2016 13:11:00 +0000

Assuming, for the moment, that the current machine learning bootstrap pans out, the next big multiplier is already on the horizon.

As more computing is expressed in forms that require super-fast, super-scalable linear algebra algorithms (a *lot* of machine learning techniques do this), it becomes very appealing to find ways to execute them on quantum computers. Reason being, exponential increases are possible in terms of parallel execution of certain operations.

There is a fine tradition in computing of scientists getting ahead of what today's technology can actually do. Charles Babbage, Ada Lovelace, Alan Turing, Doug Engelbart, Vannevar Bush: all worked out computing stuff that was way ahead of the reality curve, and then reality caught up with their work.

If/when quantum computing gets out of the labs, the algorithms will already be sitting in the Machine Learning libraries, ready to take advantage of them, because forward-looking researchers are working them out now.

In other words, it won't be a case of "Ah, cool! We have access to a quantum computer! Let's spend a few years working out how best to use them." Instead it will be "Ah, cool! We have access to a quantum computer! Let's deploy all the stuff we have already worked out and implemented, in anticipation of this day."

It reminds me of the old adage (attributable to Pólya, I think) about "Solving for N". If I write an algorithm that can leverage N compute nodes, then it does not matter that I might only be able to deploy it with N = 1 because of current limitations. As soon as new compute nodes become available, I can immediately set N = 2 or 2000 or 20000000000 and run stuff.
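
A minimal sketch of "solving for N", assuming Python's standard multiprocessing module: the work is written against N workers from the start, and N is just a parameter you turn up when more compute arrives. The expensive_step function is a hypothetical stand-in for any independent chunk of work.

    # "Solve for N": the algorithm is written against N workers from the
    # outset; N is just a parameter. Today N might be 1, tomorrow 20000 --
    # the code does not change.
    from multiprocessing import Pool

    def expensive_step(x: int) -> int:
        # Stand-in for any independent chunk of work.
        return x * x

    def run(data, n_workers: int = 1):
        with Pool(processes=n_workers) as pool:
            return pool.map(expensive_step, data)

    if __name__ == "__main__":
        print(run(range(10), n_workers=1))    # works today with one core/node
        # print(run(range(10), n_workers=64)) # same code when more arrive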

With the abstractions being crafted around ML libraries today, the "N" is being prepped for some very large potential values of N.




Deep learning, Doug Engelbart and Jimi Hendrix

Thu, 15 Sep 2016 10:49:00 +0000

The late great Doug Engelbart did foundational work in many areas of computing and was particularly interested in the relationship between human intelligence and machine intelligence.

Even a not-so-smart machine can augment human productivity if even simple cognitive tasks can be handled by the machine. Reason being, machines are super fast. Super fast can compensate for "not-so-smart" in many useful domains. Simple totting up of figures, printing lots of copies of a report, shunting lots of data around, whatever.

How do you move a machine from "not-so-smart" to "smarter" for any given problem? The obvious way is to get the humans to do the hard thinking and come up with a smarter way. It is hard work, because the humans have to oscillate between smart thinking and thinking like not-so-smart machines: ultimately the smarts have to be fed to the not-so-smart machine as grindingly meticulous instructions written in computer-friendly (read "not-so-smart") programs, in simple language, because machines can only grok simple language.

The not-so-obvious approach is to create a feedback loop where the machine can change its behavior over time by feeding outputs back into inputs. How to do that? Well, you have got to start somewhere, so get the human engineers to create feedback loops and teach them to the computer. You need to do that to get the thing going - to bootstrap it....

then stand back....

Things escalate pretty fast when you create feedback loops! If the result you get is a good one, it is likely to be *a lot* better than your previous best because feedback loops are exponential.

Engelbart's insight was to recognize that the intelligent, purposeful creation of feedback loops can be a massive multiplier: both for human intellect at the species level, and at the level of machines. When it works, it can move the state of the art of any problem domain forward, not by a little bit, but by *a lot*.

A human example would be the invention of writing. All of a sudden knowledge could survive through generations and could spread exponentially better than it could by oral transmission.

The hope and expectation around Deep Learning is that it is basically a Doug Engelbart Bootstrap for machine intelligence. A smart new feedback loop in which the machines can now do a vital machine intelligence step ("feature identification") that previously required humans. This can/should/will move things forward *a lot* relative to the last big brouhaha around machine intelligence in the Eighties.

The debates about whether or not this is really "intelligence" or just "a smarter form of dumb" will rage on in parallel, perhaps forever.

Relevance to Jimi Hendrix? See https://www.youtube.com/watch?v=JMyoT3kQMTg




The scourge of easily accessible abstraction

Thu, 11 Aug 2016 11:57:00 +0000

In software, we are swimming in abstractions. We also have amazingly abstract tools that greatly enhance our ability to create even more abstractions.

"William of Ockham admonished philosophers to avoid multiplying entities, but computers multiple them faster than his razor can shave." -- John F. Sowa, Knowledge Representation.

Remember that the next time you are de-referencing a URL to get the address of a pointer to a factory that instantiates an instance of a meta-class for monad constructors...






Sebastian Rahtz, RIP

Wed, 03 Aug 2016 10:57:00 +0000

It has just now come to my attention that Sebastian Rahtz passed away earlier this year.
RIP. Fond memories of conversations on the xml-dev mailing list.

https://en.wikipedia.org/wiki/Sebastian_Rahtz





Software self analysis again

Wed, 20 Jul 2016 14:11:00 +0000

Perhaps a better example for the consciousness post would have been to allow an application running on the operating system access to the source code for the hypervisor two levels down. That way, the app could decide to change the virtualization of CPUs or the contention algorithms on the virtualized network interfaces and bootstrap itself a new hypervisor to host its own OS.

The question naturally arises, what scope has an app - or an OS - got for detecting that it is on a hypervisor rather than real hardware? If the emulation is indistinguishable, you cannot tell - by definition. At which point the emulated thing and the thing emulated have become indistinguishable. At which point you have artificially re-created that thing.

This is all well-worn territory in the strong vs. weak AI conversations, of course.

My favorite way of thinking about it is this:

1 - we don't understand consciousness and thus we cannot be sure we won't re-create it by happenstance, as we muck about with making computers act more intelligently.

2 - If we do create it, we likely won't know how we did it (especially since it is likely to be a gradual, multi-step thing rather than a big-bang thing)

3 - because we won't know what we did to create it, we won't know how to undo it or switch it off

4 - if it improves by iteration and it iterates a lot faster in silicon than we do in carbon, we could find ourselves a distant second in the earth-based intelligence ranking table rather quickly :-)

Best if we run all the electrical generation on the planet with analog methods and vacuum tubes so that we can at least starve it of electricity if push comes to shove:-)




English as an API

Sat, 16 Jul 2016 16:55:00 +0000

Like the last post, I am filing this one under "speculative".

Chat bots strip away visual UI elements in favor of natural language in a good old-fashioned text box.

Seems kind of retro, but perhaps something deeper is afoot. For the longest time, we have used the phrase "getting applications to talk to each other" as a sort of business-level way of saying, "get applications to understand each other's APIs and/or data structures."

Perhaps, natural language - or a controlled version of natural language - will soon become a viable way of getting applications to talk to each other. I.e. chatbots chatting with other chatbots, by sending/receiving English.

One of the big practical upshots of that - if it transpires - is that non-programmers will have a new technique for wiring up disparate applications. I.e. talk to each of them via their chat interface, then gradually get them talking to each other...

Hmmmm.






The surprising role of cloud computing in the understanding of consciousness

Sat, 16 Jul 2016 16:00:00 +0000

I am filing this one under "extremely speculative".

I think it was Douglas Hofstadter's book "I Am a Strange Loop" that first got me thinking about the possible roles of recursion and self-reference in understanding consciousness.

Today - for no good reason - it occurred to me that if the Radical Plasticity Theory is correct, to emulate/re-create consciousness[1] we need to create the conditions for consciousness to arise. Doing that requires arranging a computing system that can observe every aspect of itself in operation.

For most of the history of computing, we have had a layer of stuff that the software could only be dimly aware of, called the hardware.

With virtualization and cloud computing, more and more of that hardware layer is becoming, itself, software and thus, in principle, open to fine grained examination by the software running on....the software, if you see what I mean.

To take an extreme example, a Unix application could today be written that introspects itself, concludes that the kernel scheduler logic should be changed, writes out the modified source code for the new kernel, re-compiles it, boots a Unix OS image based on it, and transplants itself into a process on this new kernel.

Hmmm.

[1] Emulation versus re-creation of consciousness. Not going there.




The subtle complexities of legal/contractual ambiguity

Mon, 20 Jun 2016 13:51:00 +0000

The law is not a set of simple rules, and the rule of law is not - and arguably cannot be - reduced to a Turing Machine evaluating some formal expression of said rules.

A theme of mine for some time has been how dangerous it is to jump to conclusions about the extent to which the process of law - and its expression in the laws themselves - can be looked upon purely in terms of a deductive logic system in disguise.

Laws, contracts etc. often contain ambiguities that are there on purpose. Some are tactical. Some are there in recognition of the reality that concepts like "fairness" and "reasonable efforts" are both useful and unquantifiable.

In short, there are tactical, social and deep jurisprudence-related reasons for the presence of ambiguity in laws/contracts.

Trying to remove them can lead to unpleasant results.

Case in point: the draining of millions of dollars from the DAO. See the write-up on Bloomberg: Ethereum Smart Contracts




25 years of the Internet in Ireland - a personal recollection of the early days

Fri, 17 Jun 2016 13:25:00 +0000

So today is the Internet's 25th anniversary in Ireland.
In 1991 I was working with a financial trading company, developing technical analysis software for financial futures traders in 8086 assembly language and C using PCs equipped with TMS34010 graphics boards.

I cannot remember exactly how - possibly through the Unix Users Group - I ended up getting a 4800 bps modem connection to a Usenet feed from Trinity via the SLIP protocol.

Every day I would dial up and download comp.text.sgml from Usenet onto my Sun Roadrunner X86 "workstation".

Not long thereafter, Ireland Online happened and I was then dialling up Furbo in the Gaeltacht of Connemara because it was the first access point to the WWW in Ireland.

I ditched my CompuServe e-mail account not long after and became digitome@iol.ie on comp.text.sgml

So much has changed since those early days... and yet so much has stayed the same.




From textual authority to interpretive authority: the next big shift in legal and regulatory informatics

Fri, 20 May 2016 09:55:00 +0000

This paper: law and algorithms in the public domain, from a journal on applied ethics, is representative, I think, of the thought processes going on around the world at present regarding machine intelligence and what it means for law/regulation.

It seems to me that there has been a significant uptick in people from diverse science/philosophy backgrounds taking an interest in the world of law. These folks range from epistemologists to bioinformaticians to statisticians to network engineers. Many of them are looking at law/regulation through the eyes of digital computing and asking, basically, "Is law/regulation computation?" and also "If it is not currently computation, can it be? Will it be? Should it be?"

These are great, great questions. We have a long way to go yet in answering them. Much of the world of law and the world of IT is separated by a big chasm of mutual misunderstanding at present. Many law folk - with some notable exceptions - do not have a deep grasp of computing, and many computing folk - with some notable exceptions - do not have a deep grasp of law.

Computers are everywhere in the world of law but, to date, they have primarily been wielded as document management/search-and-retrieval tools. In this domain, they have been phenomenally successful - to the point where significant textual authority has now transferred from paper to digital modalities. Those books of caselaw and statute on the shelves, in the offices: they rarely move from the shelves. For much practical, day-to-day activity, the digital instantiations of these legislative artifacts are normative and considered authoritative by practitioners.

How often these days do legal researchers go back to the paper-normative books? Is it even possible anymore, in a world where more and more paper publication is being replaced by cradle-to-grave digital media? If the practitioners and the regulators and the courts are all circling around a set of digital artifacts, does it matter any more whether the digital artifact is identical to the paper one?

Authority is a funny thing. It is mostly a social construct. I wrote about this some years ago here: Would the real, authentic copy of the document please stand up? If the majority involved in the world of law/regulation use digital information resource X, even though strictly speaking X is a "best efforts facsimile" of paper information resource Y, then X has de facto authority even though it is not de jure authoritative. (The fact that de jure texts are often replaced by de facto texts in the world of jure - law! - is a self-reference that will likely appeal to anyone who has read The Paradox of Self-Amendment by Peter Suber.)

We are very close to the point where digital resources in law/regulation have authority of expression, but it is a completely different kettle of fish to have interpretive authority. It is in this chasm between authority of expression and authority of interpretation that most of the mutual misunderstandings between law and computing will sit in the years ahead, I think. On one hand, law folk will be too quick to dismiss what the machines can do in the interpretive space, and IT people will be too quick to think the machines can quickly take over the interpretive space. The truth - as ever - is somewhere in between. Nobody knows yet where the dividing line is, but the IT people are sure to move the line from where it currently sits (in legal "expression" space) to a new location (in legal "interpretation" space).
The IT people will be asking the hard questions of the world of law going forward. Is this just computing in different clothing? If so, then lets make it a computin[...]



From BSOD to UOD

Mon, 16 May 2016 14:08:00 +0000

I don't get many "Blue Screen of Death" type events these days in any of the Ubuntu, Android, iOS, or Windows environments I interact with. Certainly not like the good old days, when rebooting every couple of hours felt normal. (I used to keep my foot touching the side of my deskside machine. The vibrations of the hard disk used to be a good indicator of life back then. Betcha that health monitor wasn't considered in the move to SSDs. Nothing even to listen to these days, never mind touch.)

I do get Updates of Death though - and these are nasty critters!

For example, your machine auto-updates and disables the network connection leaving you unable to get at the fix you just found online....

Grrrrrrrrrrr.



Genetic Football

Mon, 09 May 2016 16:13:00 +0000

Genetic Football is a thing. Wow.

As a thing, it is part of a bigger thing.

That bigger thing seems to be this: given enough cheap compute power, the time taken to perform zillions of iterations can be made largely irrelevant.

Start stupid. Just aim to be fractionally less stupid the next time round, and iterations will do the rest.
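
As a rough illustration of "start stupid and iterate" (my sketch, not the Genetic Football code), here is a minimal random-mutation hill climber in Python: a random candidate gets a random tweak each round, and the tweak is kept only if it scores fractionally better. The toy fitness function (match a target vector) is a hypothetical stand-in for winning simulated matches.

    # "Start stupid, get fractionally less stupid each round": random
    # candidates, random tweaks, keep a change only when it scores better.
    import random

    TARGET = [0.2, 0.9, 0.5, 0.7]           # hypothetical "ideal" behaviour

    def fitness(candidate):
        # Higher is better: negative squared distance from the target.
        return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

    def mutate(candidate, step=0.05):
        i = random.randrange(len(candidate))
        tweaked = candidate[:]
        tweaked[i] += random.uniform(-step, step)
        return tweaked

    def evolve(iterations=100_000):
        best = [random.random() for _ in TARGET]   # start stupid
        for _ in range(iterations):                # let iteration do the rest
            challenger = mutate(best)
            if fitness(challenger) > fitness(best):
                best = challenger
        return best

    if __name__ == "__main__":
        print(evolve())   # ends up close to TARGET without "understanding" it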

The weirdest thing about all of this for me is that if/when iterated algorithmic things start showing smarts, we will know the causal factors that led to the increased smartness, but not the rationale for any individual incidence of smartness.

As a thing, that is part of a bigger thing.

That bigger thing is that these useful-but-unprovable things will be put to use in areas where humankind has previously expected the presence of explanation. You know, rules, reasoning, all that stuff.

As a thing, that is part of a bigger thing.

That bigger thing is that in many areas of human endeavor it is either impossible to get explanations (i.e. experts who know what to do but cannot explain why in terms of rules), or the explanations need to be taken with a pinch of post-hoc-ergo-propter-hoc salt, or a pinch of retroactive goal-setting salt.

As a thing, that is part of a bigger thing.

When the machines come, and start doing clever things but cannot explain why....

...they will be just like us.




Statistics and AI

Thu, 05 May 2016 09:44:00 +0000

We live at a time when there is more interest in AI than ever, and it is growing every day.

One of the first things that happens when a genre of computing starts to build up steam is that pre-existing concepts get subsumed into the new genre. Sometimes, the adopted concepts are presented in a way that would suggest they are new concepts, created as part of the new genre. Sometimes they are. But sometimes they are not.

For example, I recently read some material that presented linear regression as a machine learning technique.

Now, of course, regression has all sorts of important contributions to make to machine learning, but it was invented/discovered long, long before the machines came along.
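
For instance, here is ordinary least squares fitted in closed form with NumPy - the same regression whether it is filed under statistics, where it originated, or under machine learning. The numbers are made up for illustration.

    # Ordinary least squares: fit y ~= a*x + b by solving min ||X w - y||^2.
    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

    X = np.column_stack([x, np.ones_like(x)])        # design matrix [x, 1]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)   # closed-form solution
    a, b = coeffs
    print(f"y is approximately {a:.2f} * x + {b:.2f}")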





Cutting the inconvenient protrusions from the jigsaw pieces

Thu, 14 Apr 2016 12:34:00 +0000

There is a school of thought that goes like this....

(1) To manage data means to put it in a database
(2) A 'database' means a relational database. No other database approach is really any good.
(3) If the data does not fit into the relational data model, well, just compromise the data so that it does. Why? See item (1).

I have no difficulty whatsoever with recommending relational databases where there is a good fit between the data, the problem to be solved, and the relational database paradigm.

Where the fit isn't good, I recommend something else. Maybe indexed flat files, or versioned spreadsheets, documents, a temporal data store... whatever feels least like I am cutting important protrusions off the data and off the problem to be solved.

However, whenever I do that, I am sure to have to answer the "Why not just store it in [Insert RDB name]?" question.

It is an incredibly strong meme in modern computing.



Algorithms where human understanding is optional - or maybe even impossible

Mon, 14 Mar 2016 10:52:00 +0000

I think I am guilty of holding on to an AI non-sequitur for a long time. Namely the idea that AI is fundamentally limited by our ability as humans to code the rules for the computer to execute. If we humans cannot write down the rules for X, we cannot get the computer to do X.

Modern AI seems to have significantly lurched over to the "no rules" side of the field where phrases like CBR (case based reasoning) and Neural Net Training Sets abound...

But with an interesting twist that I have only recently become aware of. Namely, using bootstrapping - using generation X of an AI system to produce generation X+1.

The technical write-ups about the recent stunning AlphaGo victory make reference to the bootstrapping of AlphaGo. As well as learning from the database of prior human games, it has learned by playing against itself....

Doug Engelbart springs to mind, and his bootstrapping strategy.

Douglas Hofstadter springs to mind, and his strange loops model of consciousness.

Stephen Wolfram springs to mind and his feedback loops of simple algorithms for rapidly generating complexity.

AIs learning by using the behavior of the previous-generation AI as "input", in the form of a training set, sounds very like iterating a simple Wolfram algorithm or a fractal-generating function, except that the output of each "run" is the algorithm for the next run.

The weird, weird, weird thing about all of this is that we humans don't have to understand the AIs we are creating. We are just creating the environment in which they can create themselves.

In fact, it may even be the case that we cannot understand them because, by design, there are no rules in there to be dug out and understood. Just an unfathomably large state space of behaviors.

I need to go to a Chinese room, and think this through...




LoRa

Thu, 10 Mar 2016 16:44:00 +0000

LoRa feels like a big deal to me. In general, hardware-led innovations tend to jumpstart software design into interesting places, more so than software-led innovations drag hardware design into interesting places.

With software driving hardware innovation, the results tend to be of the bigger, faster, cheaper variety. All good things but not this-changes-everything type moments.

With hardware driving software innovation however, software game changers seem to come along sometimes.

Telephone exchanges -> Erlang -> Elixir.
Packet switching -> TCP/IP -> Sockets

BGP Routers -> Multihoming
VR Headsets -> Immersive 3D worlds

etc.

I have noticed that things tend to come full circle though. Sooner or later, any hardware bits that can themselves be replaced by software bits are replaced :-)

This loopback trend is kicking into a higher gear at the moment because of 3D printing. I.e. a hardware device is conceived of; in order to build it, the device is simulated in software to drive the 3D printer. Any such devices that *could* remain purely software do so eventually.

A good example is audio recording. A modern DAW like Pro Tools or Reaper now provides pure digital emulations of pretty much any piece of audio hardware you can think of: EQs, pre-amps, compressors, reverbs, etc.




XML and St Patrick

Fri, 04 Mar 2016 10:27:00 +0000

I am finding it a bit hard to believe that I wrote this *fourteen* years ago.

Patrick to be Named Patron Saint of Software Developers
In a dramatic development, scholars working in Newgrange, Ireland, have deciphered an Ogham stone thought to have been carved by St. Patrick himself. The text on the stone predicts, with incredible accuracy, the trials-and-tribulations of IT professionals in the early 21st century. Calls are mounting for St. Patrick to be named the patron saint of Markup Technologists.

The full transcription of the Ogham stone is presented here for the first time:

DeXiderata

    Go placidly amid the noise and haste and remember what peace there may be in silence.

    As far as possible, without surrender, accommodate the bizarre tag names and strange attribute naming conventions of others.

    Speak your truth quietly and clearly, making liberal use of UML diagrams. Listen to others, even the dull and ignorant, they too have their story and won't shut up until you have heard it.

    Avoid loud style sheets and aggressive time scales, they are vexations to the spirit. If you compare your schemas with others, you will become vain and bitter for there will always be schemas greater and lesser than yours -- even if yours are auto-generated.

    Enjoy the systems you ship as well as your plans for new ones. Keep interested in your own career, however humble. It's a real possession in the changing fortunes of time and Cobol may yet make a comeback.

    Exercise caution in your use of namespaces for the world is full of namespace semantic trickery. Let this not blind you to what virtue there is in namespace-free markup. Many applications live quite happily without them.

    Be yourself. Especially do not feign a working knowledge of RDF where no such knowledge exists. Neither be cynical about Relax NG; for in the face of all aridity and disenchantment in the world of markup, James Clark is as perennial as the grass.

    Take kindly the counsel of the years, gracefully surrendering the things of youth such as control over the authoring subsystems and any notion that you can dictate a directory structure for use by others.

    Nurture strength of spirit to nourish you in sudden misfortune but do not distress yourself with dark imaginings of wholesale code re-writes.

    Many fears are born of fatigue and loneliness. If you cannot make that XML document parse, go get a pizza and come back to it.

    Beyond a wholesome discipline, be gentle with yourself. Loosen your content models to help your code on its way, your boss will probably never notice.

    You are a child of the universe no less than the trees and all other acyclic graphs; you have a right to be here. And whether or not it is clear to you, no doubt the universe is unfolding as it should.

    Therefore be at peace with your code, however knotted it may be. And whatever your labors and aspirations, in the noisy confusion of life, keep peace with your shelf of manuals. With all its sham, drudgery, and broken dreams, software development is a pretty cool thing to do with your head. Be cheerful. Strive to be happy.



Software complexity accelerators

Fri, 26 Feb 2016 10:15:00 +0000

It seems to me that complexity in software development, although terribly hard to measure, has steadily risen from the days of Algol 68 and continues to rise.

In response to the rise, we have developed mechanisms for managing - not removing - the complexity.

These management - or perhaps I should say 'containment' - mechanisms have an interesting negative externality. If a complexity level of X was hard to contain before, but thanks to paradigm Y is now contained, the immediate side-effect is an increase in the value of X :-)

It reminds me of an analysis I found somewhere about driving speed and seat belts. Apparently, seat belts can have the effect of increasing driving speed. Reason being, we all have a risk level we subconsciously apply when driving. Putting on a seat belt can make us feel that a higher speed is now possible without increasing our risk level.

So what sort of "seat belts" have we added into software development recently? I think Google Search is a huge one. Rather than reduce the complexity of an application as evidenced by the amount of debugging/head-scratching you need to do, we have accelerated the process of finding fixes online.

Another one is open source. We can now leverage a world-wide hive-mind that collectively "wraps its head around" a code-base, so that the code-base can become more complex than it could be if a finite team worked on it.

Another one is cloud. Client/server-style computing models push most of the complexity of management onto the server side. Applications that would be incredibly complex to manage in today's diverse OS world if they were thick clients are easier to manage server-side, thus creating headroom for new complexity which, sure enough, gets added to the mix.

Is this phenomenon of complexity acceleration thanks to better and better complexity containment a bad thing?

I honestly don't know.




It's obvious really

Fri, 19 Feb 2016 10:22:00 +0000

Nothing is more deserving of questioning than an obvious conclusion.



Fixity, Vellums and the curious case of the rotting bits

Thu, 11 Feb 2016 16:07:00 +0000

So, vellum may be on the way out in the UK Parliament. Many deep and thorny issues here.

Thought experiment: In your hand you have a 40 page document. On your computer screen you have an electronic document open in a word processor. You have been told they are "the same document". How can you tell? What does it even mean to say that they are the "same"? Does it matter if there is no sure-fire way to prove it?

Let us start at the end of that list of questions and work backwards. Does it matter that there is no sure-fire way to prove it? Most of the time, it does not matter if you cannot prove they are the same. Over the years since the computerization of documents, we have devised various techniques for managing the risks of differences arising between what the computer says and what the sheets of paper say. However, when it does matter, it tends to matter a whole bunch. Examples are domains such as legal documents, mission critical procedure manuals, that sort of thing.

A very common way of mitigating the risk of differences arising between paper and electronic texts is to declare the electronic version to be the real, authentic document and treat the paper as a "best efforts" copy or rendering of the authentic document. If the printing messes up and some text gets chopped off the right hand margin we think "No big deal". Annoying but not cataclysmic. The electronic copy is the real one and we can just go back to the source any time we want...

...Yes, as long as the electronic source is not, itself, an ambiguous idea. Again, we have developed practices to mitigate this risk. If I author a document in, say, FrameMaker but export RTF to send to you, the FrameMaker is considered the real, authentic electronic file. If anything happens to the RTF content - either as it is exported, transmitted or imported by you into some other application - we refer back to the original electronic file, which is the FrameMaker incarnation...

...If we still have it up to date. The problem is that we do not print FrameMaker or Word or QuarkXPress. We tend to print "frozen" renderings of these things. Things like PostScript and PDF. On the way to paper, it is not uncommon for fixes to be required just prior to the creation of very expensive printing plates. If something small needs to be fixed, it will probably get fixed at 2 a.m. in the PostScript or PDF file... which is now out of sync with the original FrameMaker file...

...Which, come to think of it, might not have been as clear-cut an authoritative source as I made it out to be. It is not uncommon for applications like FrameMaker, Adobe CS2, Quark etc. to be used downstream of an authoring process that utilizes Microsoft Word or Corel WordPerfect or OpenOffice or some webby browser plug-in.

If (i.e. when) errors are found in document proofs, the upstream documents should really be fixed and the DTP versions re-constituted. Otherwise, the source documents get out of sync with the paper copy very quickly indeed. Worse, the differences between the source documents and the paper copy may be in the form of small errors. A period missing here, a dollar sign there... Small enough to be very hard to spot with proofreading but large enough to be very serious in, for example, legal documents.

What to do? Well, we need to freeze-dry "cuts" of these documents to remove all ambiguity and then institute rigorous policies and procedures to ensure that changes are properly reflected every[...]
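
One small, well-established piece of the puzzle is fixity checking, as practiced in digital preservation: record a cryptographic digest of a rendering at the moment it is "frozen" and re-verify it later. A minimal sketch in Python follows (the file name is hypothetical); note that it only proves the bytes are unchanged - it says nothing about whether the frozen PDF still matches the upstream authoring source, which is the harder problem described above.

    # Record a SHA-256 "fixity" digest when a rendering is frozen, and
    # re-verify it later to detect silent changes to the bytes.
    import hashlib

    def fixity(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Usage (hypothetical file name):
    #   recorded = fixity("contract-final.pdf")     # at freeze time
    #   ... years later ...
    #   assert fixity("contract-final.pdf") == recorded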



The biggest IT changes in the last 5 years: The re-emergence of data flow design

Fri, 05 Feb 2016 10:41:00 +0000

My first exposure to data flow as an IT design paradigm came around 1983/4, in the form of Stevens, Myers and Constantine's work on "Structured Design", which dates from 1974.


I remember at the time finding the idea really appealing, and yet the forces at work in the industry and in academic research pulled mainstream IT design towards non-flow-centric paradigms. Examples include Stepwise Decomposition/Structured Programming (e.g. Dijkstra), Object Oriented Design (e.g. Booch), and Relational Data Modelling (e.g. Codd).



Over the years, I have seen pockets of mainstream IT design terms emerging that have data flow-like ideas in them. Some recent relevant terms would be Complex Event Processing and stream processing.

Many key dataflow ideas are built into Unix. Yet creating designs that leverage line-oriented data formats piped through software components - local and remote, everything from good old 'cat' to GNU Parallel and everything in between - has never, to my knowledge, been given a design name reflective of just how incredibly powerful and commonplace it is.
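
For a flavour of that unnamed Unix style of design, here is a minimal sketch using Python generators (an illustrative stand-in of mine, not any of the products mentioned below): each stage consumes a stream of records and emits another, so the program is essentially the wiring between stages - roughly cat | grep | tr as code.

    # A dataflow pipeline as composed generators: each stage consumes a
    # stream of records and emits another; the program is the wiring.
    import sys

    def read_lines(stream):
        for line in stream:
            yield line.rstrip("\n")

    def grep(lines, needle):
        return (line for line in lines if needle in line)

    def to_upper(lines):
        return (line.upper() for line in lines)

    if __name__ == "__main__":
        # Roughly: cat | grep error | tr '[:lower:]' '[:upper:]'
        pipeline = to_upper(grep(read_lines(sys.stdin), "error"))
        for record in pipeline:
            print(record)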

Things are changing, I believe, thanks to cloud computing and multi-core parallel computing in general. AWS Data Pipeline, Google Dataflow and Google TensorFlow are good examples. Also bubbling away under the radar are things like FBP (Flow-Based Programming), the buzz around Elixir, and similar shared-nothing architectures.

A single phrase is likely to emerge soon, I think. Many "grey beards", from JSD (Jackson Structured Design), to IBM MQSeries (async messaging), to Ericsson's AXE-10 Erlang engineers, to Unix pipeline fans, will do some head-scratching of the "Hey, we were doing this 30 years ago!" variety.

So it goes.

Personally, I am very excited to see dataflow re-emerge into the mainstream. I naturally lean towards thinking in terms of dataflow anyway, and I can only benefit from all the cool new tools/techniques that come with the mainstreaming of any IT concept.