
nomeata’s mind shares

Joachim Breitners Denkblogade


Interleaving normalizing reduction strategies

Thu, 15 Feb 2018 14:17:58 -0500

A little, not very significant, observation about lambda calculus and reduction strategies.

A reduction strategy determines, for every lambda term with redexes left, which redex to reduce next. A reduction strategy is normalizing if this procedure terminates for every lambda term that has a normal form.

A fun fact is: If you have two normalizing reduction strategies s1 and s2, consulting them alternately may not yield a normalizing strategy.

Here is an example. Consider the lambda term o = (λx.xxx), and note that oo → ooo → oooo → …. Let Mi = (λx.(λx.x))(oo…o) (with i occurrences of o). Mi has two redexes, and reduces to either (λx.x) or Mi+1. In particular, Mi has a normal form.

The two reduction strategies are:

  • s1, which picks the second redex if given Mi for an even i, and the first (left-most) redex otherwise.
  • s2, which picks the second redex if given Mi for an odd i, and the first (left-most) redex otherwise.

Both strategies are normalizing: if during a reduction we come across Mi, then the reduction terminates in one or two steps; otherwise we are just doing left-most reduction, which is known to be normalizing.

But if we alternately consult s1 and s2 while trying to reduce M2, we get the sequence

M2 → M3 → M4 → …

which shows that this strategy is not normalizing.

Afterthought: The interleaved strategy is not actually a reduction strategy in the usual definition, as it is not a pure (stateless) function from lambda terms to redexes.
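The counting argument can be played through in a toy model (a sketch: we abstract the term Mi to just its index i, and a strategy to its choice of redex on Mi; everything else about the terms is irrelevant here):

```haskell
-- A "strategy" says, for M_i, whether to pick the second redex
-- (growing the term to M_{i+1}) or the first one (finishing at \x.x).
type Strategy = Int -> Bool  -- True = second redex, False = first redex

s1, s2 :: Strategy
s1 i = even i  -- second redex for even i, left-most redex otherwise
s2 i = odd i   -- second redex for odd i, left-most redex otherwise

-- Reduce M_i, consulting a stream of strategies, with a fuel bound.
-- Just j means the normal form was reached after growing to M_j;
-- Nothing means we ran out of fuel without reaching a normal form.
reduce :: Int -> [Strategy] -> Int -> Maybe Int
reduce fuel _ _ | fuel <= 0 = Nothing
reduce _    [] _            = Nothing
reduce fuel (s:ss) i
  | s i       = reduce (fuel - 1) ss (i + 1)  -- M_i -> M_{i+1}
  | otherwise = Just i                        -- M_i -> \x.x
```

Consulting only s1 or only s2 from M2 terminates within two steps, while cycling between them keeps growing the term for as long as there is fuel.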

The magic “Just do it” type class

Fri, 02 Feb 2018 14:01:11 -0500

One of the great strengths of strongly typed functional programming is that it allows type-driven development. When I have some non-trivial function to write, I first write its type signature, and then writing the implementation is often very obvious.

Once more, I am feeling silly

In fact, it often is completely mechanical. Consider the following function:

    foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a,b))

This is somewhat like the bind for a combination of the error monad and the reader monad, and remembers the intermediate result, but that doesn’t really matter now. What matters is that once I wrote that type signature, I feel silly having to also write the code, because there isn’t really anything interesting about it.

Instead, I’d like to tell the compiler to just do it for me! I want to be able to write

    foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a,b))
    foo = justDoIt

And now I can! Assuming I am using GHC HEAD (or eventually GHC 8.6), I can run cabal install ghc-justdoit, and then the following code actually works:

    {-# OPTIONS_GHC -fplugin=GHC.JustDoIt.Plugin #-}
    import GHC.JustDoIt

    foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a,b))
    foo = justDoIt

What is this justDoIt?

    *GHC.LJT GHC.JustDoIt> :browse GHC.JustDoIt
    class JustDoIt a
    justDoIt :: JustDoIt a => a
    (…) :: JustDoIt a => a

Note that there are no instances for the JustDoIt class – they are created, on the fly, by the GHC plugin GHC.JustDoIt.Plugin. During type-checking, it looks at these JustDoIt t constraints and tries to construct a term of type t. It is based on Dyckhoff’s LJT proof search in intuitionistic propositional calculus, which I have implemented to work directly on GHC’s types and terms (and I find it pretty slick). Those who like Unicode can write (…) instead.

What is supported right now?
Because I am working directly in GHC’s representation, it is pretty easy to support user-defined data types and newtypes. So it works just as well for

    data Result a b = Failure a | Success b

    newtype ErrRead r e a = ErrRead { unErrRead :: r -> Result e a }

    foo2 :: ErrRead r e a -> (a -> ErrRead r e b) -> ErrRead r e (a,b)
    foo2 = (…)

It doesn’t infer coercions or type arguments or any of that fancy stuff, and carefully steps around anything that looks like it might be recursive.

How do I know that it creates a sensible implementation?

You can check the generated Core using -ddump-simpl, of course. But it is much more convenient to use inspection-testing to test such things, as I am doing in the Demo file, which you can skim to see a few more examples of justDoIt in action. I very much enjoyed reaping the benefits of the work I put into inspection-testing, as this is so much more convenient than manually checking the output.

Is this for real? Should I use it?

Of course you are welcome to play around with it, and it will not launch any missiles, but at the moment, I consider this a prototype that I created for two purposes:

  • To demonstrate that you can use type checker plugins for program synthesis. Depending on what you need, this might allow you to provide a smoother user experience than the alternatives, which are: preprocessors, Template Haskell, generic programming together with type-level computation (e.g. generic-lens), and GHC Core-to-Core plugins. In order to make this viable, I slightly changed the API for type checker plugins, which are now free to produce arbitrary Core terms as they solve constraints.
  • To advertise the idea of taking type-driven computation to its logical conclusion and free users from having to implement functions that they have already specified sufficiently precisely by their type.

What needs to happen for this to become real?
A bunch of things:

  • The LJT implementation is somewhat neat, but I probably did not implement backtracking properly, and there might be more bugs.
  • The implementation is very much unoptimized.
  • For this to be practically useful, the user needs to be able to use it with confidence. In par[...]
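For illustration, the term that the proof search has to find for the foo signature above is the obvious mechanical one; here is a sketch of what a hand-written implementation looks like (I have not checked that the plugin produces literally this term):

```haskell
-- The implementation that the type essentially forces: run the first
-- computation, feed its result and the environment to the second, and
-- pair up the results, propagating the first Left encountered.
foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a, b))
foo f g r = case f r of
  Left e  -> Left e
  Right a -> case g a r of
    Left e  -> Left e
    Right b -> Right (a, b)
```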

Finding bugs in Haskell code by proving it

Tue, 05 Dec 2017 09:17:43 -0500

Last week, I wrote a small nifty tool called bisect-binary, which semi-automates answering the question “To what extent can I fill this file up with zeroes and still have it working?”. I wrote it in Haskell, and part of the Haskell code, in the Intervals.hs module, is a data structure for “subsets of a file”, represented as a sorted list of intervals:

    data Interval = I { from :: Offset, to :: Offset }
    newtype Intervals = Intervals [Interval]

The code is the kind of Haskell code that I like to write: a small local recursive function, a few guards for case analysis, and I am done:

    intersect :: Intervals -> Intervals -> Intervals
    intersect (Intervals is1) (Intervals is2) = Intervals $ go is1 is2
      where
        go _ [] = []
        go [] _ = []
        go (i1:is1) (i2:is2)
            -- reorder for symmetry
            | to i1 < to i2 = go (i2:is2) (i1:is1)
            -- disjoint
            | from i1 >= to i2 = go (i1:is1) is2
            -- subset
            | to i1 == to i2 = I f' (to i2) : go is1 is2
            -- overlapping
            | otherwise = I f' (to i2) : go (i1 { from = to i2 } : is1) is2
          where f' = max (from i1) (from i2)

But clearly, the code is already complicated enough that it is easy to make a mistake. I could have put in some QuickCheck properties to test the code, but I was in a proving mood...

Now available: Formal Verification for Haskell

Ten months ago I complained that there was no good way to verify Haskell code (and created the nifty hack ghc-proofs). But things have changed since then, as a group at UPenn (mostly Antal Spector-Zabusky, Stephanie Weirich and myself) has created hs-to-coq: a translator from Haskell to the theorem prover Coq.

We have used hs-to-coq on various examples, as described in our CPP'18 paper, but it is high time to use it for real. The easiest way to use hs-to-coq at the moment is to clone the repository, copy one of the example directories (e.g. examples/successors), place the Haskell file to be verified there and put the right module name into the Makefile.
I also commented out parts of the Haskell file that would drag in non-base dependencies.

Massaging the translation

Often, hs-to-coq translates Haskell code without a hitch, but sometimes a bit of help is needed. In this case, I had to specify three so-called edits:

  • The Haskell code uses Intervals both as a name for a type and for a value (the constructor). This is fine in Haskell, which has separate value and type namespaces, but not for Coq. The line

        rename value Intervals.Intervals = ival

    changes the constructor name to ival.
  • I use the Int64 type in the Haskell code. The Coq version of Haskell’s base library that comes with hs-to-coq does not support that yet, so I change that via

        rename type GHC.Int.Int64 = GHC.Num.Int

    to the normal Int type, which itself is mapped to Coq’s Z type. This is not a perfect fit, and my verification would not catch problems that arise due to the boundedness of Int64. Since none of my code does arithmetic, only comparisons, I am fine with that.
  • The biggest hurdle is the recursion of the local go functions. Coq requires all recursive functions to be obviously (i.e. structurally) terminating, and the go above is not. For example, in the first case, the arguments to go are simply swapped. It is very much not obvious why this is not an infinite loop.

    I can specify a termination measure, i.e. a function that takes the arguments xs and ys and returns a “size” of type nat that decreases in every call: add the lengths of xs and ys, multiply by two, and add one if the first interval in xs ends before the first interval in ys.

    If the problematic function were a top-level function, I could tell hs-to-coq about this termination measure and it would use this information to define the function using Program Fixpoint. Unfortunately, go is a local function, so this mechanism is not available to us.
If I care more about the verification than about preserving the exact Haskell code, I could easily change the Haskell code to make go a top-level function, but in th[...]
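The QuickCheck route I skipped would have checked intersect against a simple membership semantics. Here is a sketch of that property (with Offset taken to be Int, intervals read as half-open, and the property checked on a small range rather than through the real QuickCheck driver):

```haskell
-- Types and code reproduced from Intervals.hs, with Offset ~ Int for brevity.
type Offset = Int
data Interval = I { from :: Offset, to :: Offset } deriving Show
newtype Intervals = Intervals [Interval] deriving Show

-- The semantics: which offsets does a set of intervals contain?
member :: Offset -> Intervals -> Bool
member x (Intervals is) = any (\i -> from i <= x && x < to i) is

intersect :: Intervals -> Intervals -> Intervals
intersect (Intervals is1) (Intervals is2) = Intervals $ go is1 is2
  where
    go _ [] = []
    go [] _ = []
    go (i1:is1) (i2:is2)
        -- reorder for symmetry
        | to i1 < to i2 = go (i2:is2) (i1:is1)
        -- disjoint
        | from i1 >= to i2 = go (i1:is1) is2
        -- subset
        | to i1 == to i2 = I f' (to i2) : go is1 is2
        -- overlapping
        | otherwise = I f' (to i2) : go (i1 { from = to i2 } : is1) is2
      where f' = max (from i1) (from i2)

-- The specification: x is in the intersection iff it is in both operands.
prop_intersect :: Intervals -> Intervals -> Bool
prop_intersect xs ys =
  and [ member x (intersect xs ys) == (member x xs && member x ys)
      | x <- [0 .. 20] ]
```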

Existence and Termination

Sat, 25 Nov 2017 15:54:57 -0500

I recently had some intense discussions that revolved around issues of existence and termination of functions in Coq, about axioms and what certain proofs actually mean. We came across some interesting questions and thoughts that I’ll share with those of my blog readers with an interest in proofs and interactive theorem proving.

tl;dr

  • It can be meaningful to assume the existence of a function in Coq, and under that assumption prove its termination and other properties.
  • Axioms and assumptions are logically equivalent.
  • Unsound axioms do not necessarily invalidate a theory development, when additional meta-rules govern their use.

Preparation

Our main running example is the infamous Collatz series. Starting at any natural number, the next is calculated as follows:

    Require Import Coq.Arith.Arith.

    Definition next (n : nat) : nat :=
      if Nat.even n then n / 2 else 3*n + 1.

If you start with some positive number, you are going to end up reaching 1 eventually. Or are you? So far nobody has found a number where that does not happen, but we also do not have a proof that it never happens. It is one of the great mysteries of mathematics, and if you can solve it, you’ll be famous.

A failed definition

But assume we had an idea on how to prove that we are always going to reach 1, and tried to formalize this in Coq. One attempt might be to write

    Fixpoint good (n : nat) : bool :=
      if n <=? 1 then true else good (next n).

    Theorem collatz: forall n, good n = true.
    Proof. (* Insert genius idea here. *) Qed.

Unfortunately, this does not work: Coq rejects this recursive definition of the function good, because it does not see how that is a terminating function, and Coq requires all such recursive function definitions to be obviously terminating – without this check there would be a risk of Coq’s type checking becoming incomplete or its logic being unsound. The idiomatic way to avoid this problem is to state good as an inductive predicate... but let me explore another idea here.
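For contrast: in Haskell, which has no termination check, the corresponding definition goes through without complaint (a sketch; that good terminates, i.e. returns True, for every positive input is precisely the open conjecture):

```haskell
next :: Integer -> Integer
next n = if even n then n `div` 2 else 3 * n + 1

-- Haskell happily accepts this general-recursive definition; whether it
-- terminates on all positive inputs is exactly the Collatz conjecture.
good :: Integer -> Bool
good n = n <= 1 || good (next n)
```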
Working with assumptions

What happens if we just assume that the function good, described above, exists, and then perform our proof:

    Theorem collatz
      (good : nat -> bool)
      (good_eq : forall n, good n = if n <=? 1 then true else good (next n)) :
      forall n, good n = true.
    Proof. (* Insert genius idea here. *) Qed.

Would we accept this as a proof of Collatz’ conjecture? Or did we just assume what we want to prove, in which case the theorem is vacuously true, and we just performed useless circular reasoning?

Upon close inspection, we find that the assumptions of the theorem (good and good_eq) are certainly satisfiable:

    Definition trivial (n : nat) : bool := true.

    Lemma trivial_eq: forall n,
      trivial n = if n <=? 1 then true else trivial (next n).
    Proof. intro; case (n <=? 1); reflexivity. Qed.

    Lemma collatz_trivial: forall n, trivial n = true.
    Proof. apply (collatz trivial trivial_eq). Qed.

So clearly there exists a function of type nat -> bool that satisfies the assumed equation. This is good, because it means that the collatz theorem is not simply assuming False!

Some (including me) might already be happy with this theorem and proof, as it clearly states: “Every function that follows the Collatz series eventually reaches 1”. Others might still not be at ease with such a proof. Above we have seen that we cannot define the real Collatz series in Coq. How can the collatz theorem say something that is not definable?

Classical reasoning

One possible way of getting some assurance is to define good as a classical function. The logic of Coq can be extended with the law of the excluded middle without making it inconsistent, and with that axiom, we can define a version of good that is pretty convincing (sorry for the slightly messy proof):

    Require Import Coq.Logic.ClassicalDescription.
    Require Import Omega.

    Definition classical_good (n : nat) : bool :=
      if excluded_middle_informative (exists m, Nat.iter m next n <= 1)
      then true else false.

    Lemma iter_s[...]

Isabelle functions: Always total, sometimes undefined

Thu, 12 Oct 2017 13:54:20 -0400

Often, when I mention how things work in the interactive theorem prover Isabelle/HOL (in the following just “Isabelle”1) to people with a strong background in functional programming (whether that means Haskell or Coq or something else), I cause confusion, especially around the issues of what a function is, whether functions are total, and what the business with undefined is. In this blog post, I want to explain some of these issues, aimed at functional programmers or type theoreticians. Note that this is not meant to be a tutorial; I will not explain how to do these things, and will focus on what they mean.

HOL is a logic of total functions

If I have an Isabelle function f :: a ⇒ b between two types a and b (the function arrow in Isabelle is ⇒, not →), then – by definition of what it means to be a function in HOL – whenever I have a value x :: a, the expression f x (i.e. f applied to x) is a value of type b. Therefore, and without exception, every Isabelle function is total. In particular, it cannot be that f x does not exist for some x :: a. This is a first difference from Haskell, which does have partial functions like

    spin :: Maybe Integer -> Bool
    spin (Just n) = spin (Just (n+1))

Here, neither the expression spin Nothing nor the expression spin (Just 42) produces a value of type Bool: the former raises an exception (“incomplete pattern match”), the latter does not terminate. Confusingly, though, both expressions have type Bool.

Because every function is total, this confusion cannot arise in Isabelle: if an expression e has type t, then it is a value of type t. This trait is shared with other total systems, including Coq.

Did you notice the emphasis I put on the word “is” here, and how I deliberately did not write “evaluates to” or “returns”? This is because of another big source of confusion:

Isabelle functions do not compute

We (i.e., functional programmers) stole the word “function” from mathematics and repurposed it2.
But the word “function”, in the context of Isabelle, refers to the mathematical concept of a function, and it helps to keep that in mind. What is the difference? A function a → b in functional programming is an algorithm that, given a value of type a, calculates (returns, evaluates to) a value of type b. A function a ⇒ b in math (or Isabelle) associates with each value of type a a value of type b.

For example, the following is a perfectly valid function definition in math (and HOL), but could not be a function in the programming sense:

    definition foo :: "(nat ⇒ real) ⇒ real"
      where "foo seq = (if convergent seq then lim seq else 0)"

This assigns a real number to every sequence, but it does not compute it in any useful sense.

From this it follows that

Isabelle functions are specified, not defined

Consider this function definition:

    fun plus :: "nat ⇒ nat ⇒ nat"
      where "plus 0 m = m"
          | "plus (Suc n) m = Suc (plus n m)"

To a functional programmer, this reads: plus is a function that analyses its first argument. If that is 0, then it returns the second argument. Otherwise, it calls itself with the predecessor of the first argument and increases the result by one. That is clearly a description of a computation.

But to Isabelle, the above reads: plus is a binary function on natural numbers, and it satisfies the following two equations: …

And in fact, it is not so much Isabelle that reads it this way, but rather the fun command, which is external to the Isabelle logic. The fun command analyses the given equations, constructs a non-recursive definition of plus under the hood, passes that to Isabelle and then proves that the given equations hold for plus.

One interesting consequence of this is that different specifications can lead to the same functions. In fact, if we would define plus' by recursing on the second argument, we’d obtain the same function (i.e. plus = plus' is a theorem, and there would be no way of telling the two apart)[...]
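In Haskell terms, the two specifications would read as follows (a sketch; in Haskell these are two operationally different programs that happen to agree, while in Isabelle both would denote one and the same function):

```haskell
data Nat = Z | S Nat deriving (Eq, Show)

-- plus, recursing on the first argument (the specification from the post)
plus :: Nat -> Nat -> Nat
plus Z     m = m
plus (S n) m = S (plus n m)

-- plus', recursing on the second argument instead
plus' :: Nat -> Nat -> Nat
plus' n Z     = n
plus' n (S m) = S (plus' n m)

-- helper to build numerals for checking that the two agree
fromInt :: Int -> Nat
fromInt 0 = Z
fromInt n = S (fromInt (n - 1))
```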

e.g. in TeX

Sun, 08 Oct 2017 15:08:13 -0400

When I learned TeX, I was told to not write e.g. something, because TeX would think the period after the “g” ends a sentence, and introduce a wider, inter-sentence space. Instead, I was to write e.g.\␣.

Years later, I learned from a convincing, but since forgotten source, that in fact e.g.\@ is the proper thing to write. I vaguely remember that e.g.\␣ supposedly affected the inter-word space in some unwanted way. So I did that for many years.

Until I recently was called out for doing it wrong, and told that in fact e.g.\␣ is the proper way. This was supported by a StackExchange answer written by a LaTeX authority and backed by a reference to documentation. The same question has, however, another answer by another TeX authority, backed by an analysis of the implementation, which concludes that e.g.\@ is proper.

What now? I guess I just have to find it out myself.
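To reproduce the experiment, a minimal test document along these lines will do (a reconstructed sketch, not the exact source of the images):

```latex
\documentclass{article}
\begin{document}
This is e.g. the broken variant.    % period after ``g'' taken as sentence end

This is e.g.\@ one candidate fix.

This is e.g.\ the other candidate fix.
\end{document}
```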


The problem and two solutions

The above image shows three variants: The obviously broken version with e.g., and the two contesting variants to fix it. Looks like they yield equal results!

So maybe the difference lies in how \@ and \␣ react when the line length changes, and the word wrapping requires changes in the inter-word spacing. Will there be differences? Let’s see:


Expanding whitespace, take 1


Expanding whitespace, take 2

I cannot see any difference. But the inter-sentence whitespace ate most of the expansion. Is there a difference visible if we have only inter-word spacing in the line?


Expanding whitespace, take 3


Expanding whitespace, take 4

Again, I see the same behaviour.

Conclusion: It does not matter, but e.g.\␣ is less hassle when using lhs2tex than e.g.\@ (which has to be escaped as e.g.\@@), so the winner is e.g.\␣!

(Unless you put it in a macro, then \@ might be preferable, and it is still needed between a capital letter and a sentence period.)

Less parentheses

Sun, 10 Sep 2017 11:10:16 +0100

Yesterday, at the Haskell Implementors Workshop 2017 in Oxford, I gave a lightning talk titled “syntactic musings”, where I presented three possibly useful syntactic features that one might want to add to a language like Haskell. The talk caused quite some heated discussions, and since the Internet likes heated discussion, I will happily share these ideas with you.

Context aka. Sections

This is probably the most relevant of the three proposals. Consider a bunch of related functions, say analyseExpr and analyseAlt, like these:

    analyseExpr :: Expr -> Expr
    analyseExpr (Var v) = change v
    analyseExpr (App e1 e2) =
      App (analyseExpr e1) (analyseExpr e2)
    analyseExpr (Lam v e) = Lam v (analyseExpr e)
    analyseExpr (Case scrut alts) =
      Case (analyseExpr scrut) (analyseAlt <$> alts)

    analyseAlt :: Alt -> Alt
    analyseAlt (dc, pats, e) = (dc, pats, analyseExpr e)

You have written them, but now you notice that you need to make them configurable, e.g. to do different things in the Var case. You thus add a parameter to all these functions, and hence an argument to every call:

    type Flag = Bool

    analyseExpr :: Flag -> Expr -> Expr
    analyseExpr flag (Var v) = if flag then change1 v else change2 v
    analyseExpr flag (App e1 e2) =
      App (analyseExpr flag e1) (analyseExpr flag e2)
    analyseExpr flag (Lam v e) = Lam v (analyseExpr (not flag) e)
    analyseExpr flag (Case scrut alts) =
      Case (analyseExpr flag scrut) (analyseAlt flag <$> alts)

    analyseAlt :: Flag -> Alt -> Alt
    analyseAlt flag (dc, pats, e) = (dc, pats, analyseExpr flag e)

I find this code problematic. The intention was: “flag is a parameter that an external caller can use to change the behaviour of this code, but when reading and reasoning about this code, flag should be considered constant.” But this intention is neither easily visible nor enforced. And in fact, in the above code, flag does “change”, as analyseExpr passes something else in the Lam case.
The idiom is indistinguishable from the environment idiom, where a locally changing environment (such as “variables in scope”) is passed around. So we are facing exactly the same problem as when reasoning about a loop in an imperative program with mutable variables. And we (pure functional programmers) should know better: we cherish immutability! We want to bind our variables once and have them scope over everything we need to scope over!

The solution I’d like to see in Haskell is common in other languages (Gallina, Idris, Agda, Isar), and this is what it would look like here:

    type Flag = Bool

    section (flag :: Flag) where
      analyseExpr :: Expr -> Expr
      analyseExpr (Var v) = if flag then change1 v else change2 v
      analyseExpr (App e1 e2) =
        App (analyseExpr e1) (analyseExpr e2)
      analyseExpr (Lam v e) = Lam v (analyseExpr e)
      analyseExpr (Case scrut alts) =
        Case (analyseExpr scrut) (analyseAlt <$> alts)

      analyseAlt :: Alt -> Alt
      analyseAlt (dc, pats, e) = (dc, pats, analyseExpr e)

Now the intention is clear: within a clearly marked block, flag is fixed and when reasoning about this code I do not have to worry that it might change. Either all variables will be passed to change1, or all to change2. An important distinction!

Therefore, inside the section, the type of analyseExpr does not mention Flag, whereas outside its type is Flag -> Expr -> Expr. This is a bit unusual, but not completely: you see precisely the same effect in a class declaration, where the type signatures of the methods do not mention the class constraint, but outside the declaration they do.

Note that idioms like implicit parameters or the Reader monad do not give the guarantee that the parameter is (locally) constant.

More details can be found in the GHC proposal that I prepared, and I invite you to raise concerns or voice support there.

Curiously, this problem must have bothered me for longer than I remember: I discovered that seven years ago, I wrote a Template H[...]
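Until something like section exists, the closest approximation is to bind flag once in a single top-level function and keep the whole group in its where block; a toy sketch (Expr, change1 and change2 here are hypothetical stand-ins, not the real types from the talk):

```haskell
type Flag = Bool

-- Toy stand-ins, just enough to make the sketch self-contained.
data Expr = Var String | App Expr Expr | Lam String Expr
  deriving (Eq, Show)

change1, change2 :: String -> Expr
change1 v = Var (v ++ "1")
change2 v = Var (v ++ "2")

-- flag is bound once; inside the where block it is visibly constant.
analyse :: Flag -> Expr -> Expr
analyse flag = analyseExpr
  where
    analyseExpr (Var v)     = if flag then change1 v else change2 v
    analyseExpr (App e1 e2) = App (analyseExpr e1) (analyseExpr e2)
    analyseExpr (Lam v e)   = Lam v (analyseExpr e)
```

Note that, unlike the flag-passing version, this where block cannot express the (not flag) toggle in the Lam case at all, which is exactly the constancy a section would enforce.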

Compose Conference talk video online

Sun, 20 Aug 2017 20:50:10 +0200

Three months ago, I gave a talk at the Compose::Conference in New York about how Chris Smith and I added the ability to create networked multi-user programs to the educational Haskell programming environment CodeWorld, and finally the recording of the talk is available on YouTube (and is being discussed on reddit):

(Embedded YouTube video of the talk.)

It was the talk where I got the most positive feedback afterwards, and I think this is partly due to how I created the presentation: instead of showing static slides, I programmed the complete visual display from scratch as an “interaction” within the CodeWorld environment, including all transitions, a working embedded game of Pong and a simulated multi-player environment with adjustable message delays. I have put the code for the presentation online.

Chris and I have written about this for ICFP'17, and thanks to open access I can actually share the paper freely with you and under a CC license. If you come to Oxford you can see me perform a shorter version of this talk again.

Communication Failure

Sun, 06 Aug 2017 11:14:05 -0400

I am still far from being a professor, but I recently got a glimpse of what awaits you in that role…

    From: Sebastian R. <…>
    To:
    Subject: re: Errors

    I've spotted a basic error in your course on Haskell (…). Before I proceed, it's cool if you're not receptive to errors being indicated; I've come across a number of professors who would rather take offense than admit we're all human and thus capable of making mistakes... My goal is to find a resource that might be useful well into the future, and a good indicator of that is how responsive the author is to change.

    In your introduction note you have written:

        In contrast to a classical intro into Haskell, we do not start with numbers, booleans, tuples, lists and strings, but we start with pictures. These are of course library-defined (hence the input CodeWorld) and not part of “the language”. But that does not make them less interesting, and in fact, even the basic boolean type is library defined – it just happens to be the standard library.

    However, there is no input CodeWorld in the code above. Have you been made aware of this error earlier?

    Regards, ...

Nice. I like when people learn from my lectures. The introduction is a bit weird, but ok, maybe this guy had some bad experiences. Strangely, I don’t see a mistake in the material, so I respond:

    From: Joachim Breitner
    To: Sebastian R. <…>
    Subject: Re: Errors

    Dear Sebastian,

    thanks for pointing out errors. But the first piece of code under “Basic Haskell” starts with

        {-# LANGUAGE OverloadedStrings #-}
        import CodeWorld

    so I am not sure what you are referring to. Note that these are lecture notes, so you have to imagine a lecturer editing code live on stage along with it. If you only have the notes, you might have to infer a few things.

    Regards, Joachim

A while later, I receive this response:

    From: Sebastian R. <…>
    To: Joachim Breitner
    Subject: Re: Errors

    Greetings, Joachim.
    Kindly open the lecture slides and search for "input CodeWorld" to find the error; it is not in the code, but in the paragraph that implicitly refers back to the code. You might note that I quoted this precisely from the lectures... and so I repeat myself... this came from your lectures; they're not my words!

        In contrast to a classical intro into Haskell, we do not start with numbers, booleans, tuples, lists and strings, but we start with pictures. These are of course library-defined (hence the input CodeWorld) and not part of “the language”. But that does not make them less interesting, and in fact, even the basic boolean type is library defined – it just happens to be the standard library.

    This time around, I've highlighted the issue. I hope that made it easier for you to spot...

    Nonetheless, I got my answer. Don't reply if you're going to fight tooth and nail about such a basic fix; it's simply a waste of both of our time. I'd rather learn from somewhere else...

    On Tue, Aug 1, 2017 at 11:19 PM, Joachim Breitner wrote: …

I am a bit reminded of Sean Spicer … “they’re not my words!” … but clearly I am missing something. And indeed I am: in the code snippet, I wrote – correctly – import CodeWorld, but in the text I had input CodeWorld. I probably did write LaTeX before writing the lecture notes. Well, glad to have that sorted out. I fixed the mistake and wrote back:

    From: Joachim Breitner
    To: Sebastian R. <…>
    Betreff: Re: Errors

    Dear Sebastian,

    nobody is fighting, and I see the mistake now: the problem is not that the line is not in the code, the problem is that there is a typo in the line and I wrote “input” instead of “import”.

    Thanks for the report, although you did turn it into qu[...]

How is coinduction the dual of induction?

Thu, 27 Jul 2017 22:05:32 -0400

Earlier today, I demonstrated how to work with coinduction in the theorem provers Isabelle, Coq and Agda, with a very simple example. This reminded me of a discussion I had in Karlsruhe with my then colleague Denis Lohner: if coinduction is the dual of induction, why do the induction principles look so different? I like what we observed there, so I’d like to share this.

The following is mostly based on my naive understanding of coinduction, based on what I observe in the implementation in Isabelle. I am sure that a different, more categorical presentation of datatypes (as initial resp. terminal objects in some category of algebras) makes the duality more obvious, but that does not necessarily help the working Isabelle user who wants to make sense of coinduction.

Inductive lists

I will use the usual polymorphic list data type as an example. So on the one hand, we have normal, finite inductive lists:

    datatype 'a list = nil | cons (hd : 'a) (tl : "'a list")

with the well-known induction principle that many of my readers know by heart (syntax slightly un-isabellized):

    P nil → (∀x xs. P xs → P (cons x xs)) → ∀xs. P xs

Coinductive lists

In contrast, if we define our lists coinductively to get possibly infinite, Haskell-style lists, by writing

    codatatype 'a llist = lnil | lcons (hd : 'a) (tl : "'a llist")

we get the following coinduction principle:

    (∀xs ys. R xs ys → (xs = lnil) = (ys = lnil)
                     ∧ (xs ≠ lnil ⟶ ys ≠ lnil ⟶ hd xs = hd ys ∧ R (tl xs) (tl ys)))
    → (∀xs ys. R xs ys → xs = ys)

This is less scary than it looks at first. It tells you: “if you give me a relation R between lists which implies that either both lists are empty or both lists are nonempty, and furthermore, if both are non-empty, that they have the same head and tails related by R, then any two lists related by R are actually equal.” If you think of the infinite list as a series of states of a computer program, then this is nothing else than a bisimulation.
So we have two proof principles, both of which make intuitive sense. But how are they related? They look very different! In one, we have a predicate P, in the other a relation R, to point out just one difference.

Relation induction

To see how they are dual to each other, we have to recognize that both these theorems are actually specializations of a more general (co)induction principle.

The datatype declaration automatically creates a relator:

    rel_list :: ('a → 'b → bool) → 'a list → 'b list → bool

The definition of rel_list R xs ys is that xs and ys have the same shape (i.e. length), and that the corresponding elements are pairwise related by R. You might have defined this relation yourself at some time, and if so, you probably introduced it as an inductive predicate. So it is not surprising that the following induction principle characterizes this relation:

    Q nil nil
    → (∀x xs y ys. R x y → Q xs ys → Q (cons x xs) (cons y ys))
    → (∀xs ys. rel_list R xs ys → Q xs ys)

Note how similar this lemma is in shape to the normal induction for lists above! And indeed, if we choose Q xs ys ↔ (P xs ∧ xs = ys) and R x y ↔ (x = y), then we obtain exactly that. In that sense, the relation induction is a generalization of the normal induction.

Relation coinduction

The same observation can be made in the coinductive world. Here, as well, the codatatype declaration introduces a function

    rel_llist :: ('a → 'b → bool) → 'a llist → 'b llist → bool

which relates lists of the same shape with related elements – only that this one also relates infinite lists, and therefore is a coinductive relation. The corresponding rule for proof by coinduction is not surprising and should remind you of bisimulation, too:

    (∀xs ys. R xs ys → (xs = lnil) = (ys = lnil) ∧ [...]
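For finite Haskell lists, the relator has a direct counterpart as a plain recursive function (a sketch, with rel_list rendered as relList):

```haskell
-- relList r xs ys holds iff xs and ys have the same length and the
-- corresponding elements are pairwise related by r.
relList :: (a -> b -> Bool) -> [a] -> [b] -> Bool
relList _ []     []     = True
relList r (x:xs) (y:ys) = r x y && relList r xs ys
relList _ _      _      = False
```

Choosing r = (==) makes relList coincide with list equality, mirroring the choice R x y ↔ (x = y) above.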

Coinduction in Coq and Isabelle

Thu, 27 Jul 2017 16:24:03 -0400

The DeepSpec Summer School is almost over, and I have had a few good discussions. One revolved around coinduction: What is it, how does it differ from induction, and how do you actually prove something? In the course of the discussion, I came up with a very simple coinductive exercise, and solved it both in Coq and Isabelle.

The task

Define the extended natural numbers coinductively. Define the min function and the ≤ relation. Show that min(n, m) ≤ n holds.

Coq

The definitions are straightforward. Note that in Coq, we use the same command to define a coinductive data type and a coinductively defined relation:

    CoInductive ENat :=
      | N : ENat
      | S : ENat -> ENat.

    CoFixpoint min (n : ENat) (m : ENat) :=
      match n, m with
      | S n', S m' => S (min n' m')
      | _, _       => N
      end.

    CoInductive le : ENat -> ENat -> Prop :=
      | leN : forall m, le N m
      | leS : forall n m, le n m -> le (S n) (S m).

The lemma is specified as

    Lemma min_le: forall n m, le (min n m) n.

and the proof method of choice to show that some coinductive relation holds is cofix. One would wish that the following proof would work:

    Lemma min_le: forall n m, le (min n m) n.
    Proof.
      cofix.
      destruct n, m.
      * apply leN.
      * apply leN.
      * apply leN.
      * apply leS. apply min_le.
    Qed.

but we get the error message

    Error:
    In environment
    min_le : forall n m : ENat, le (min n m) n
    Unable to unify "le N ?M170" with "le (min N N) N".

Effectively, as Coq is trying to figure out whether our proof is correct, i.e. type-checks, it stumbled on the equation min N N = N, and like a kid scared of coinduction, it did not dare to “run” the min function. The reason it does not just “run” a CoFixpoint is that doing so too daringly might simply not terminate. So, as Adam explains in a chapter of his book, Coq reduces a cofixpoint only when it is the scrutinee of a match statement.

So we need to get a match statement in place. We can do so with a helper function:

    Definition evalN (n : ENat) :=
      match n with
      | N => N
      | S n => S n
      end.
    Lemma evalN_eq : forall n, evalN n = n.
    Proof. intros. destruct n; reflexivity. Qed.

This function does not really do anything besides nudging Coq to actually evaluate its argument to a constructor (N or S _). We can use it in the proof to guide Coq, and the following goes through:

    Lemma min_le: forall n m, le (min n m) n.
    Proof.
      cofix.
      destruct n, m; rewrite <- evalN_eq with (n := min _ _).
      * apply leN.
      * apply leN.
      * apply leN.
      * apply leS. apply min_le.
    Qed.

Isabelle

In Isabelle, definitions and types are very different things, so we use different commands to define ENat and le:

    theory ENat imports Main begin

    codatatype ENat = N | S ENat

    primcorec min where
       "min n m = (case n of
           N ⇒ N
         | S n' ⇒ (case m of
             N ⇒ N
           | S m' ⇒ S (min n' m')))"

    coinductive le where
      leN: "le N m"
    | leS: "le n m ⟹ le (S n) (S m)"

There are actually many ways of defining min; I chose the one most similar to the one above. For more details, see the corec tutorial.

Now to the proof:

    lemma min_le: "le (min n m) n"
    proof (coinduction arbitrary: n m)
      case le
      show ?case
      proof (cases n)
        case N
        then show ?thesis by simp
      next
        case (S n')
        then show ?thesis
        proof (cases m)
          case N
          then show ?thesis by simp
        next
          case (S m')
          with ‹n = _› show ?thesis
            unfolding min.code[where n = n and m = m]
            by auto
        qed
      qed
    qed

The coinduction proof method produces this goal:

    proof (state)
    goal (1 subgoal):
     1. ⋀n m. (∃m'. min n m = N ∧ n = m') ∨
              (∃n' m'. min n m = S n' ∧ n = S m' ∧
                       ((∃n m. n' = min n m ∧ m' = n) ∨ le n' m'))

I chose to spell the proof out in the Isar proof language, where [...]
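Since Haskell data types are lazy, and hence already coinductive, the exercise can also be played through in Haskell, where the type from the task contains the infinite value inf = S inf out of the box. A hedged sketch (function names are mine):

```haskell
-- The extended naturals: in lazy Haskell this type already contains
-- the infinite value `S (S (S ...))`, playing the role of ∞.
data ENat = N | S ENat

-- min, defined exactly like the CoFixpoint in the post:
-- it produces one constructor per step, so it works on infinite inputs too.
minE :: ENat -> ENat -> ENat
minE (S n) (S m) = S (minE n m)
minE _     _     = N

-- Conversion helpers, for testing on finite values only.
fromInt :: Int -> ENat
fromInt 0 = N
fromInt k = S (fromInt (k - 1))

toInt :: ENat -> Int
toInt N     = 0
toInt (S n) = 1 + toInt n

main :: IO ()
main = do
  let inf = S inf  -- the infinite extended natural
  print (toInt (minE (fromInt 2) (fromInt 5)))  -- 2
  print (toInt (minE inf (fromInt 3)))          -- 3: terminates despite inf
```

What Haskell cannot easily give us, of course, is the proof obligation itself: there is no guardedness checker telling us that minE is productive, which is exactly the bookkeeping that Coq and Isabelle insist on above.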

The Micro Two Body Problem

Thu, 06 Jul 2017 16:27:46 +0100

Inspired by recent PhD comic “Academic Travel” and not-so-recent xkcd comic “Movie Narrative Charts”, I created the following graphics, which visualizes the travels of an academic couple over the course of 10 months (place names anonymized).


Two bodies traveling the world

The perils of live demonstrations

Fri, 23 Jun 2017 16:54:36 -0700

Yesterday, I was giving a talk at the South SF Bay Haskell User Group about how implementing lock-step simulation is trivial in Haskell and how Chris Smith and I are using this to make CodeWorld even more attractive to students. I gave the talk before, at Compose::Conference in New York City earlier this year, so I felt well prepared. On the flight to the West Coast I slightly extended the slides, and as I was too cheap to buy in-flight WiFi, I tested them only locally.

So I arrived at the offices of Target1 in Sunnyvale, got on the WiFi, uploaded my slides, which are in fact one large interactive CodeWorld program, and tried to run it. But I got a type error…

Turns out that the API of CodeWorld was changed just the day before:

commit 054c811b494746ec7304c3d495675046727ab114
Author: Chris Smith 
Date:   Wed Jun 21 23:53:53 2017 +0000

    Change dilated to take one parameter.
    Function is nearly unused, so I'm not concerned about breakage.
    This new version better aligns with standard educational usage,
    in which "dilation" means uniform scaling.  Taken as a separate
    operation, it commutes with rotation, and preserves similarity
    of shapes, neither of which is true of scaling in general.

Ok, that was quick to fix, and the CodeWorld server started to compile my code, and compiled, and aborted. It turned out that my program, presumably the largest CodeWorld interaction out there, hit the time limit of the compiler.

Luckily, Chris Smith just arrived at the venue, and he emergency-bumped the compiler time limit. The program compiled and I could start my presentation.

Unfortunately, the biggest blunder was still awaiting me. I came to the slide where two instances of pong are played over a simulated network, and my point was that the two instances are perfectly in sync. Unfortunately, they were not. I guess it did support my point that lock-step simulation can easily go wrong, but it really left me out in the rain there, and I could not explain it – I had not modified this code since New York, and there it worked flawlessly2. In the end, I could save my face a bit by running the real pong game against an attendee over the network, and no desynchronisation could be observed there.

Today I dug into it and it took me a while, and it turned out that the problem was not in CodeWorld, or the lock-step simulation code discussed in our paper about it, but in the code in my presentation that simulated the delayed network messages; in some instances it would deliver the UI events in different order to the two simulated players, and hence cause them to do something different. Phew.
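The failure mode is easy to reproduce in miniature: if two replicas fold the same events over the same initial state, but receive them in different orders, their states diverge as soon as the step function is sensitive to event order. A hedged sketch, with toy names of my own (nothing here is taken from the CodeWorld code):

```haskell
import Data.List (foldl')

type Event = Int

-- A deliberately order-sensitive step function, standing in for a game's
-- state transition on a UI event.
step :: Int -> Event -> Int
step s e = s * 2 + e

-- Each replica applies the events in the order it receives them.
replica :: [Event] -> Int
replica = foldl' step 0

main :: IO ()
main = do
  print (replica [1, 2, 3])   -- 11
  print (replica [3, 2, 1])   -- 17: same events, different order, diverged state
```

Lock-step simulation therefore stands or falls with the guarantee that all replicas see the same totally ordered event stream.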

  1. Yes, the retail giant. Turns out that they have a small but enthusiastic Haskell-using group in their IT department.

  2. I hope the video is going to be online soon, then you can check for yourself.

Farewell green cap

Fri, 05 May 2017 18:13:40 -0400

For the last two years, I was known among swing dancers for my green flat cap:


Monti, a better model than me

This cap was very special: It was a gift from a good friend who sewed it by hand from what used to be a table cloth of my deceased granny, and it has traveled with me to many corners of the world.

Just like last week, when I was in Paris where I attended the Charleston class of Romuald and Laura on Saturday (April 29). The following Tuesday I went to a Swing Social and wanted to put on the hat, and noticed that it was gone. The next day I bugged the manager and the caretaker of the venue of the class (Salles Sainte-Roche), and it seems that the hat was still there, that morning, in Salle Kurtz1, but when I went there it was gone.

And that is sad.


The last picture with the hat

  1. How fitting, given that my granny’s maiden name is Kurz.

ghc-proofs rules more now

Thu, 27 Apr 2017 23:11:38 -0400

A few weeks ago I blogged about an experiment of mine, where I proved equalities of Haskell programs by (ab)using the GHC simplifier. For more details, please see that post, or the video of my talk at the Zürich Haskell User Group, but one reason why this approach has any chance of being useful is the compiler’s support for rewrite rules.

Rewrite rules are program equations that the programmer specifies in the source file, and which the compiler then applies, from left to right, whenever some intermediate code matches the left-hand side of the equation. One such rule, for example, is

    {-# RULES "foldr/nil" forall k z. foldr k z [] = z #-}

taken right out of the standard library.

In my blog post I went through the algebraic laws that a small library of mine, successors, should fulfill, and sure enough, once I got to more interesting proofs, they would not go through just like that. At that point I had to add additional rules to the file I was editing, which helped the compiler to finish the proofs. Some of these rules were simple, like

    {-# RULES "mapFB/id" forall c. mapFB c (\x -> x) = c #-}
    {-# RULES "foldr/nil" forall k n. GHC.Base.foldr k n [] = n #-}
    {-# RULES "foldr/undo" forall xs. GHC.Base.foldr (:) [] xs = xs #-}

and some are more intricate, like

    {-# RULES "foldr/mapFB" forall c f g n1 n2 xs.
        GHC.Base.foldr (mapFB c f) n1 (GHC.Base.foldr (mapFB (:) g) n2 xs)
        = GHC.Base.foldr (mapFB c (f.g)) (GHC.Base.foldr (mapFB c f) n1 n2) xs
      #-}

But there is something fishy going on here: The foldr/nil rule is identical to a rule in the standard library! I should not have to add that to my file as I am proving things. So I knew that the GHC plugin, which I wrote to do these proofs, was doing something wrong, but I did not investigate for a while.

I returned to this problem recently, and with the efficient and quick help of Simon Peyton Jones, I learned what I was doing wrong.1 After fixing it, I could remove all the simple rules from the files with my proofs.
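The intricate foldr/mapFB rule is, after all, an ordinary program equation, so it can at least be sanity-checked at sample values. A hedged sketch, with mapFB redefined locally so the snippet is self-contained (its definition here matches the one in GHC.Base), and the rule instantiated at c = (:), n1 = []:

```haskell
-- mapFB as defined in GHC.Base: cons an f-transformed element via c.
mapFB :: (elt -> lst -> lst) -> (a -> elt) -> a -> lst -> lst
mapFB c f = \x ys -> c (f x) ys

f, g :: Int -> Int
f = (+ 1)
g = (* 2)

-- Left- and right-hand side of the "foldr/mapFB" rule, specialized.
lhs, rhs :: [Int] -> [Int] -> [Int]
lhs n2 xs = foldr (mapFB (:) f) [] (foldr (mapFB (:) g) n2 xs)
rhs n2 xs = foldr (mapFB (:) (f . g)) (foldr (mapFB (:) f) [] n2) xs

main :: IO ()
main = print (lhs [10] [1, 2] == rhs [10] [1, 2])  -- True
```

Such spot checks are no substitute for the proof, of course, but they are a cheap way to catch a typo in a hand-written rule before handing it to the compiler.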
And to my surprise, I could remove the intricate rule as well! So with this bug fixed, ghc-proofs is able to prove all the Functor, Applicative and Monad laws of the Succs functor without any additional rewrite rules, as you can see in the example file! (I still have to strategically place seqs in a few places.)

That’s great, isn’t it! Yeah, sure. But having to introduce the rules at that point provided a very good narrative in my talk, so when I give a similar talk next week in Paris (actually, twice, first at the university and then at the Paris Haskell meetup), I will have to come up with a different example that calls for additional rewrite rules.

In related news: Since the last blog post, ghc-proofs learned to interpret proof specifications like

    applicative_law4 :: Succs (a -> b) -> a -> Proof
    applicative_law4 u y = u <*> pure y
                       === pure ($ y) <*> u

where it previously only understood

    applicative_law4 = (\ u (y::a) -> u <*> (pure y :: Succs a))
                   === (\ u (y::a) -> pure ($ y) <*> u)

I am not sure if this should be uploaded to Hackage, but I invite you to play around with the GitHub version of ghc-proofs.

  1. In short: I did not initialize the simplifier with the right InScopeSet, so RULES about functions defined in the current module were not always applied, and I did not feed the eps_rules to the simplifier, which contains all the rules found in imported packages, such as base.↩ [...]