# nomeata’s mind shares

Interleaving normalizing reduction strategies

Thu, 15 Feb 2018 14:17:58 -0500

A little, not very significant, observation about lambda calculus and reduction strategies.

A reduction strategy determines, for every lambda term with redexes left, which redex to reduce next. A reduction strategy is normalizing if this procedure terminates for every lambda term that has a normal form.

A fun fact is: If you have two normalizing reduction strategies s1 and s2, consulting them alternately may not yield a normalizing strategy.

Here is an example. Consider the lambda term o = (λx.xxx), and note that oo → ooo → oooo → …. Let Mi = (λx.(λx.x))(o…o) (with i occurrences of o). Mi has two redexes, and reduces to either (λx.x) or Mi+1. In particular, Mi has a normal form.

The two reduction strategies are:

• s1, which picks the second redex if given Mi for an even i, and the first (left-most) redex otherwise.
• s2, which picks the second redex if given Mi for an odd i, and the first (left-most) redex otherwise.

Both strategies are normalizing: If during a reduction we come across Mi, then the reduction terminates in one or two steps; otherwise we are just doing left-most reduction, which is known to be normalizing.

But if we alternatingly consult s1 and s2 while trying to reduce M2, we get the sequence

M2 → M3 → M4 → …

which shows that this strategy is not normalizing.

Afterthought: The interleaved strategy is not actually a reduction strategy in the usual definition, as it is not a pure (stateless) function from lambda term to redex.
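None of this is in the original post, but the counterexample is small enough to simulate. Here is a Python sketch in which a term Mi is modelled just by its index i (the names NF, step, run, s1, s2 are mine):

```python
# Abstract model of the counterexample. We never build real lambda terms:
# a term M_i is represented by its index i, and NF stands for its normal
# form. Redex 1 contracts the outer application (reaching a normal form);
# redex 2 rewrites the o-chain, turning M_i into M_{i+1}.
NF = "normal form"

def step(i, redex):
    return NF if redex == 1 else i + 1

def s1(i):
    # picks the second redex for even i, the left-most one otherwise
    return 2 if i % 2 == 0 else 1

def s2(i):
    # picks the second redex for odd i, the left-most one otherwise
    return 2 if i % 2 == 1 else 1

def run(strategies, i, max_steps=10):
    # consult the given strategies in rotation until a normal form appears
    for k in range(max_steps):
        if i == NF:
            return ("normalized", k)
        i = step(i, strategies[k % len(strategies)](i))
    return ("diverged", i)

assert run([s1], 2) == ("normalized", 2)     # s1 alone normalizes M_2
assert run([s2], 2) == ("normalized", 1)     # so does s2
assert run([s1, s2], 2) == ("diverged", 12)  # alternating them never stops
```

The interleaved run keeps hitting the strategy that prefers the second redex, so i only ever grows, matching the sequence M2 → M3 → M4 → … above.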

The magic “Just do it” type class

Fri, 02 Feb 2018 14:01:11 -0500

Finding bugs in Haskell code by proving it

Tue, 05 Dec 2017 09:17:43 -0500

Existence and Termination

Sat, 25 Nov 2017 15:54:57 -0500

I recently had some intense discussions that revolved around issues of existence and termination of functions in Coq, about axioms and what certain proofs actually mean. We came across some interesting questions and thoughts that I’ll share with those of my blog readers with an interest in proofs and interactive theorem proving.

tl;dr: It can be meaningful to assume the existence of a function in Coq, and under that assumption prove its termination and other properties. Axioms and assumptions are logically equivalent. Unsound axioms do not necessarily invalidate a theory development when additional meta-rules govern their use.

Preparation

Our main running example is the infamous Collatz series. Starting at any natural number, the next is calculated as follows:

```coq
Require Import Coq.Arith.Arith.

Definition next (n : nat) : nat :=
  if Nat.even n then n / 2 else 3*n + 1.
```

If you start with some positive number, you are going to end up reaching 1 eventually. Or are you? So far nobody has found a number where that does not happen, but we also do not have a proof that it never happens. It is one of the great mysteries of mathematics, and if you can solve it, you’ll be famous.

A failed definition

But assume we had an idea on how to prove that we are always going to reach 1, and tried to formalize this in Coq. One attempt might be to write

```coq
Fixpoint good (n : nat) : bool :=
  if n <=? 1 then true else good (next n).

Theorem collatz: forall n, good n = true.
Proof. (* Insert genius idea here. *) Qed.
```

Unfortunately, this does not work: Coq rejects this recursive definition of the function good, because it does not see how it is a terminating function, and Coq requires all such recursive function definitions to be obviously terminating – without this check, there would be a risk of Coq’s type checking becoming incomplete or its logic being unsound. The idiomatic way to avoid this problem is to state good as an inductive predicate... but let me explore another idea here.
Working with assumptions

What happens if we just assume that the function good, described above, exists, and then perform our proof:

```coq
Theorem collatz
  (good : nat -> bool)
  (good_eq : forall n, good n = if n <=? 1 then true else good (next n)) :
  forall n, good n = true.
Proof. (* Insert genius idea here. *) Qed.
```

Would we accept this as a proof of Collatz’ conjecture? Or did we just assume what we want to prove, in which case the theorem is vacuously true and we just performed useless circular reasoning? Upon closer inspection, we find that the assumptions of the theorem (good and good_eq) are certainly satisfiable:

```coq
Definition trivial (n : nat) : bool := true.

Lemma trivial_eq: forall n,
  trivial n = if n <=? 1 then true else trivial (next n).
Proof. intro; case (n <=? 1); reflexivity. Qed.

Lemma collatz_trivial: forall n, trivial n = true.
Proof. apply (collatz trivial trivial_eq). Qed.
```

So clearly there exists a function of type nat -> bool that satisfies the assumed equation. This is good, because it means that the collatz theorem is not simply assuming False!

Some (including me) might already be happy with this theorem and proof, as it clearly states: “Every function that follows the Collatz series eventually reaches 1”. Others might still not be at ease with such a proof. Above we have seen that we cannot define the real Collatz series in Coq. How can the collatz theorem say something that is not definable?

Classical reasoning

One possible way of getting some assurance is to define good as a classical function. The logic of Coq can be extended with the law of the excluded middle without making it inconsistent, and with that axiom, we can define a version of good that is pretty convincing (sorry for the slightly messy proof):

```coq
Require Import Coq.Logic.ClassicalDescription.
Require Import Omega.

Definition classical_good (n : nat) : bool :=
  if excluded_middle_informative (exists m, Nat.iter m next n <= 1)
  then true else false.

Lemma iter_s[...]
```
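A quick empirical check of the post’s next function, transcribed to Python (the names nxt and good are mine, and good here is fuel-bounded, since termination is precisely what is unknown):

```python
def nxt(n):
    # Python transcription of the Coq `next`: halve if even, else 3n + 1
    return n // 2 if n % 2 == 0 else 3 * n + 1

def good(n, fuel=10_000):
    # Fuel-bounded analogue of the recursive `good` that Coq rejects.
    # True means we reached 1 within `fuel` steps; False is inconclusive
    # (it does NOT prove divergence).
    while fuel > 0 and n > 1:
        n, fuel = nxt(n), fuel - 1
    return n <= 1

# no counterexample among the first few thousand numbers
assert all(good(n) for n in range(1, 5000))
```

This is exactly the move Coq forbids: the unbounded loop would be the non-terminating-looking recursion, so we trade it for an explicit fuel argument.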

Isabelle functions: Always total, sometimes undefined

Thu, 12 Oct 2017 13:54:20 -0400

Often, when I mention how things work in the interactive theorem prover Isabelle/HOL (in the following just “Isabelle”1) to people with a strong background in functional programming (whether that means Haskell or Coq or something else), I cause confusion, especially around the issues of what a function is, whether functions are total, and what the business with undefined is. In this blog post, I want to explain some of these issues, aimed at functional programmers or type theoreticians. Note that this is not meant to be a tutorial; I will not explain how to do these things, but will focus on what they mean.

HOL is a logic of total functions

If I have an Isabelle function f :: a ⇒ b between two types a and b (the function arrow in Isabelle is ⇒, not →), then – by definition of what it means to be a function in HOL – whenever I have a value x :: a, the expression f x (i.e. f applied to x) is a value of type b. Therefore, and without exception, every Isabelle function is total. In particular, it cannot be that f x does not exist for some x :: a. This is a first difference from Haskell, which does have partial functions like

```haskell
spin :: Maybe Integer -> Bool
spin (Just n) = spin (Just (n+1))
```

Here, neither the expression spin Nothing nor the expression spin (Just 42) produces a value of type Bool: the former raises an exception (“incomplete pattern match”), the latter does not terminate. Confusingly, though, both expressions have type Bool.

Because every function is total, this confusion cannot arise in Isabelle: if an expression e has type t, then it is a value of type t. This trait is shared with other total systems, including Coq.

Did you notice the emphasis I put on the word “is” here, and how I deliberately did not write “evaluates to” or “returns”? This is because of another big source of confusion:

Isabelle functions do not compute

We (i.e., functional programmers) stole the word “function” from mathematics and repurposed it2.
But the word “function”, in the context of Isabelle, refers to the mathematical concept of a function, and it helps to keep that in mind. What is the difference? A function a → b in functional programming is an algorithm that, given a value of type a, calculates (returns, evaluates to) a value of type b. A function a ⇒ b in math (or Isabelle) associates with each value of type a a value of type b.

For example, the following is a perfectly valid function definition in math (and HOL), but could not be a function in the programming sense:

```isabelle
definition foo :: "(nat ⇒ real) ⇒ real"
  where "foo seq = (if convergent seq then lim seq else 0)"
```

This assigns a real number to every sequence, but it does not compute it in any useful sense. From this it follows that

Isabelle functions are specified, not defined

Consider this function definition:

```isabelle
fun plus :: "nat ⇒ nat ⇒ nat"
  where "plus 0 m = m"
      | "plus (Suc n) m = Suc (plus n m)"
```

To a functional programmer, this reads: plus is a function that analyses its first argument. If that is 0, then it returns the second argument. Otherwise, it calls itself with the predecessor of the first argument and increases the result by one. That is clearly a description of a computation.

But to Isabelle, the above reads: plus is a binary function on natural numbers, and it satisfies the following two equations: …

And in fact, it is not so much Isabelle that reads it this way, but rather the fun command, which is external to the Isabelle logic. The fun command analyses the given equations, constructs a non-recursive definition of plus under the hood, passes that to Isabelle and then proves that the given equations hold for plus.

One interesting consequence of this is that different specifications can lead to the same function. In fact, if we defined plus' by recursing on the second argument, we’d obtain the same function (i.e. plus = plus' is a theorem, and there would be no way of telling the two apart)[...]
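The “different specifications, same function” point can be illustrated outside Isabelle too. A hypothetical Python sketch (plus1 recurses on the first argument, like the fun plus above; plus2 recurses on the second; both names are mine):

```python
def plus1(n, m):
    # recursion on the first argument, like `fun plus` in the post
    return m if n == 0 else 1 + plus1(n - 1, m)

def plus2(n, m):
    # recursion on the second argument instead
    return n if m == 0 else 1 + plus2(n, m - 1)

# extensionally indistinguishable on this finite sample
assert all(plus1(n, m) == plus2(n, m) for n in range(30) for m in range(30))
```

In Python the two definitions are different programs (they take different numbers of steps); in HOL, where functions do not compute, plus = plus' is simply a theorem.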

e.g. in TeX

Sun, 08 Oct 2017 15:08:13 -0400

When I learned TeX, I was told to not write `e.g. something`, because TeX would think the period after the “g” ends a sentence, and introduce a wider, inter-sentence space. Instead, I was to write `e.g.\␣`.

Years later, I learned from a convincing, but since forgotten, source that in fact `e.g.\@` is the proper thing to write. I vaguely remember that `e.g.\␣` supposedly affected the inter-word space in some unwanted way. So I did that for many years.

Until recently, when I was called out for doing it wrong: in fact, `e.g.\␣` is the proper way. This was supported by a StackExchange answer written by a LaTeX authority and backed by a reference to documentation. The same question has, however, another answer by another TeX authority, backed by an analysis of the implementation, which concludes that `e.g.\@` is proper.

What now? I guess I just have to find it out myself.

(image)

The problem and two solutions

The above image shows three variants: The obviously broken version with `e.g.`, and the two contesting variants to fix it. Looks like they yield equal results!

So maybe the difference lies in how `\@` and `\␣` react when the line length changes and the word wrapping requires differences in the inter-word spacing. Will there be differences? Let’s see:

(image)

Expanding whitespace, take 1

(image)

Expanding whitespace, take 2

I cannot see any difference. But the inter-sentence whitespace ate most of the expansion. Is there a difference visible if we have only inter-word spacing in the line?

(image)

Expanding whitespace, take 3

(image)

Expanding whitespace, take 4

Again, I see the same behaviour.

Conclusion: It does not matter, but `e.g.\␣` is less hassle when using lhs2tex than `e.g.\@` (which has to be escaped as `e.g.\@@`), so the winner is `e.g.\␣`!

(Unless you put it in a macro, in which case `\@` might be preferable; and it is still needed between a capital letter and a sentence-ending period.)
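For reference, a minimal LaTeX test document for the three variants could look like this (a sketch; the sample sentence is mine, not from the post):

```latex
\documentclass{article}
\begin{document}
Broken:    We can use it, e.g. with lhs2tex, quite easily.\par
Variant 1: We can use it, e.g.\ with lhs2tex, quite easily.\par
Variant 2: We can use it, e.g.\@ with lhs2tex, quite easily.
\end{document}
```

The first line gets an inter-sentence space after “e.g.”; the other two restore a normal inter-word space, by different mechanisms (`\␣` is an explicit control space, `\@` resets the space factor).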

Less parentheses

Sun, 10 Sep 2017 11:10:16 +0100

Compose Conference talk video online

Sun, 20 Aug 2017 20:50:10 +0200

Three months ago, I gave a talk at the Compose::Conference in New York about how Chris Smith and I added the ability to create networked multi-user programs to the educational Haskell programming environment CodeWorld, and finally the recording of the talk is available on YouTube (and is being discussed on reddit):

(video: https://www.youtube.com/embed/2kKvVe673MA?rel=0?ecver=2)

It was the talk where I got the most positive feedback afterwards, and I think this is partly due to how I created the presentation: instead of showing static slides, I programmed the complete visual display from scratch as an “interaction” within the CodeWorld environment, including all transitions, a working embedded game of Pong and a simulated multi-player environment with adjustable message delays. I have put the code for the presentation online.

Chris and I have written about this for ICFP'17, and thanks to open access I can actually share the paper freely with you and under a CC license. If you come to Oxford you can see me perform a shorter version of this talk again.

Communication Failure

Sun, 06 Aug 2017 11:14:05 -0400

How is coinduction the dual of induction?

Thu, 27 Jul 2017 22:05:32 -0400

Earlier today, I demonstrated how to work with coinduction in the theorem provers Isabelle, Coq and Agda, with a very simple example. This reminded me of a discussion I had in Karlsruhe with my then colleague Denis Lohner: if coinduction is the dual of induction, why do the induction principles look so different? I like what we observed there, so I’d like to share it.

The following is mostly based on my naive understanding of coinduction, drawn from what I observe in the implementation in Isabelle. I am sure that a different, more categorical presentation of datatypes (as initial resp. terminal objects in some category of algebras) makes the duality more obvious, but that does not necessarily help the working Isabelle user who wants to make sense of coinduction.

Inductive lists

I will use the usual polymorphic list data type as an example. So on the one hand, we have normal, finite inductive lists:

```isabelle
datatype 'a list = nil | cons (hd : 'a) (tl : "'a list")
```

with the well-known induction principle that many of my readers know by heart (syntax slightly un-isabellized):

P nil → (∀x xs. P xs → P (cons x xs)) → (∀xs. P xs)

Coinductive lists

In contrast, if we define our lists coinductively to get possibly infinite, Haskell-style lists, by writing

```isabelle
codatatype 'a llist = lnil | lcons (hd : 'a) (tl : "'a llist")
```

we get the following coinduction principle:

(∀xs ys. R xs ys → (xs = lnil) = (ys = lnil) ∧ (xs ≠ lnil ⟶ ys ≠ lnil ⟶ hd xs = hd ys ∧ R (tl xs) (tl ys)))
→ (∀xs ys. R xs ys → xs = ys)

This is less scary than it looks at first. It tells you: “if you give me a relation R between lists which implies that either both lists are empty or both lists are non-empty, and furthermore, if both are non-empty, that they have the same head and tails related by R, then any two lists related by R are actually equal.” If you think of an infinite list as the series of states of a computer program, then this is nothing else than a bisimulation.
So we have two proof principles, both of which make intuitive sense. But how are they related? They look very different! In one, we have a predicate P, in the other a relation R, to point out just one difference.

Relation induction

To see how they are dual to each other, we have to recognize that both these theorems are actually specializations of a more general (co)induction principle. The datatype declaration automatically creates a relator:

rel_list :: ('a → 'b → bool) → 'a list → 'b list → bool

The definition of rel_list R xs ys is that xs and ys have the same shape (i.e. length), and that the corresponding elements are pairwise related by R. You might have defined this relation yourself at some point, and if so, you probably introduced it as an inductive predicate. So it is not surprising that the following induction principle characterizes this relation:

Q nil nil → (∀x xs y ys. R x y → Q xs ys → Q (cons x xs) (cons y ys)) → (∀xs ys. rel_list R xs ys → Q xs ys)

Note how similar this lemma is in shape to the normal induction for lists above! And indeed, if we choose Q xs ys ↔ (P xs ∧ xs = ys) and R x y ↔ (x = y), then we obtain exactly that. In that sense, relation induction is a generalization of the normal induction.

Relation coinduction

The same observation can be made in the coinductive world. Here, as well, the codatatype declaration introduces a function

rel_llist :: ('a → 'b → bool) → 'a llist → 'b llist → bool

which relates lists of the same shape with related elements – only that this one also relates infinite lists, and therefore is a coinductive relation. The corresponding rule for proof by coinduction is not surprising and should remind you of bisimulation, too:

(∀xs ys. R xs ys → (xs = lnil) = (ys = lnil) ∧ [...]
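For the finite case, the relator rel_list has a direct executable model. A Python sketch (purely my illustration of the Isabelle definition, not part of the post):

```python
def rel_list(R, xs, ys):
    # xs and ys have the same shape (length), and corresponding
    # elements are pairwise related by R
    return len(xs) == len(ys) and all(R(x, y) for x, y in zip(xs, ys))

# Choosing R to be equality specializes the relator to list equality,
# mirroring how relation induction specializes to plain induction.
eq = lambda x, y: x == y
assert rel_list(eq, [1, 2, 3], [1, 2, 3])
assert not rel_list(eq, [1, 2], [1, 2, 3])          # different shapes
assert rel_list(lambda x, y: y == x + 1, [1, 2], [2, 3])
```

The coinductive rel_llist would additionally relate infinite lists, which this finite model cannot represent.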

Coinduction in Coq and Isabelle

Thu, 27 Jul 2017 16:24:03 -0400

The DeepSpec Summer School is almost over, and I have had a few good discussions. One revolved around coinduction: what is it, how does it differ from induction, and how do you actually prove something? In the course of the discussion, I came up with a very simple coinductive exercise, and solved it both in Coq and Isabelle.

The task

Define the extended natural numbers coinductively. Define the min function and the ≤ relation. Show that min(n, m) ≤ n holds.

Coq

The definitions are straightforward. Note that in Coq, we use the same command to define a coinductive data type and a coinductively defined relation:

```coq
CoInductive ENat :=
  | N : ENat
  | S : ENat -> ENat.

CoFixpoint min (n : ENat) (m : ENat) :=
  match n, m with
  | S n', S m' => S (min n' m')
  | _, _ => N
  end.

CoInductive le : ENat -> ENat -> Prop :=
  | leN : forall m, le N m
  | leS : forall n m, le n m -> le (S n) (S m).
```

The lemma is specified as

```coq
Lemma min_le: forall n m, le (min n m) n.
```

and the proof method of choice to show that some coinductive relation holds is cofix. One would wish that the following proof would work:

```coq
Lemma min_le: forall n m, le (min n m) n.
Proof.
  cofix.
  destruct n, m.
  * apply leN.
  * apply leN.
  * apply leN.
  * apply leS. apply min_le.
Qed.
```

but we get the error message

```
Error:
In environment
min_le : forall n m : ENat, le (min n m) n
Unable to unify "le N ?M170" with "le (min N N) N".
```

Effectively, as Coq is trying to figure out whether our proof is correct, i.e. type-checks, it stumbled on the equation min N N = N, and like a kid scared of coinduction, it did not dare to “run” the min function. The reason it does not just “run” a CoFixpoint is that doing so too daringly might simply not terminate. So, as Adam explains in a chapter of his book, Coq reduces a cofixpoint only when it is the scrutinee of a match statement. So we need to get a match statement in place. We can do so with a helper function:

```coq
Definition evalN (n : ENat) :=
  match n with
  | N => N
  | S n => S n
  end.
```
```coq
Lemma evalN_eq : forall n, evalN n = n.
Proof. intros. destruct n; reflexivity. Qed.
```

This function does not really do anything besides nudging Coq to actually evaluate its argument to a constructor (N or S _). We can use it in the proof to guide Coq, and the following goes through:

```coq
Lemma min_le: forall n m, le (min n m) n.
Proof.
  cofix.
  destruct n, m; rewrite <- evalN_eq with (n := min _ _).
  * apply leN.
  * apply leN.
  * apply leN.
  * apply leS. apply min_le.
Qed.
```

Isabelle

In Isabelle, definitions and types are very different things, so we use different commands to define ENat and le:

```isabelle
theory ENat imports Main begin

codatatype ENat = N | S ENat

primcorec min where
   "min n m = (case n of
       N ⇒ N
     | S n' ⇒ (case m of
         N ⇒ N
       | S m' ⇒ S (min n' m')))"

coinductive le where
  leN: "le N m"
| leS: "le n m ⟹ le (S n) (S m)"
```

There are actually many ways of defining min; I chose the one most similar to the one above. For more details, see the corec tutorial.

Now to the proof:

```isabelle
lemma min_le: "le (min n m) n"
proof (coinduction arbitrary: n m)
  case le
  show ?case
  proof (cases n)
    case N
    then show ?thesis by simp
  next
    case (S n')
    then show ?thesis
    proof (cases m)
      case N
      then show ?thesis by simp
    next
      case (S m')
      with ‹n = _› show ?thesis
        unfolding min.code[where n = n and m = m]
        by auto
    qed
  qed
qed
```

The coinduction proof method produces this goal:

```
proof (state)
goal (1 subgoal):
 1. ⋀n m. (∃m'. min n m = N ∧ n = m') ∨
          (∃n' m'. min n m = S n' ∧ n = S m' ∧
                   ((∃n m. n' = min n m ∧ m' = n) ∨ le n' m'))
```

I chose to spell the proof out in the Isar proof language, where [...]
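Not from the post: a little Python model of the same coinductive setup, with ENat values as thunks so that an infinite value can be represented. All names (N, S, emin, le, fin, inf) are mine, and le is checked only up to a fuel bound:

```python
# An ENat is either N (here: None) or a thunk returning its predecessor.
N = None

def emin(n, m):
    # corecursive min: only produces a successor when both arguments do
    if n is N or m is N:
        return N
    return lambda: emin(n(), m())

def le(n, m, fuel=100):
    # bounded check of the coinductive relation le: peel successors
    # in lock-step until one side is N or the fuel runs out
    while fuel:
        if n is N:
            return True
        if m is N:
            return False
        n, m, fuel = n(), m(), fuel - 1
    return True  # no counterexample found within the fuel bound

def fin(k):
    # embed an ordinary natural number
    return N if k == 0 else (lambda: fin(k - 1))

inf = lambda: inf  # omega: its own predecessor, an infinite ENat

assert le(emin(fin(3), fin(5)), fin(3))   # min(3, 5) <= 3
assert le(emin(inf, fin(2)), inf)         # min(inf, 2) <= inf
assert le(emin(inf, inf), inf, fuel=50)   # holds up to the bound
assert not le(fin(3), fin(1))
```

The fuel bound is where the model falls short of the real thing: the Coq and Isabelle proofs above establish min(n, m) ≤ n for all extended naturals, including the infinite one, with no bound.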

The Micro Two Body Problem

Thu, 06 Jul 2017 16:27:46 +0100

Inspired by recent PhD comic “Academic Travel” and not-so-recent xkcd comic “Movie Narrative Charts”, I created the following graphics, which visualizes the travels of an academic couple over the course of 10 months (place names anonymized).

(image)

Two bodies traveling the world

The perils of live demonstrations

Fri, 23 Jun 2017 16:54:36 -0700

Yesterday, I was giving a talk at the South SF Bay Haskell User Group about how implementing lock-step simulation is trivial in Haskell and how Chris Smith and I are using this to make CodeWorld even more attractive to students. I had given the talk before, at Compose::Conference in New York City earlier this year, so I felt well prepared. On the flight to the West Coast I slightly extended the slides, and as I was too cheap to buy in-flight WiFi, I tested them only locally.

So I arrived at the offices of Target1 in Sunnyvale, got on the WiFi, uploaded my slides, which are in fact one large interactive CodeWorld program, and tried to run it. But I got a type error…

Turns out that the API of CodeWorld was changed just the day before:

```
commit 054c811b494746ec7304c3d495675046727ab114
Author: Chris Smith
Date:   Wed Jun 21 23:53:53 2017 +0000

    Change dilated to take one parameter.

    Function is nearly unused, so I'm not concerned about breakage.
    This new version better aligns with standard educational usage,
    in which "dilation" means uniform scaling.  Taken as a separate
    operation, it commutes with rotation, and preserves similarity
    of shapes, neither of which is true of scaling in general.
```

Ok, that was quick to fix, and the CodeWorld server started to compile my code, and compiled, and aborted. It turned out that my program, presumably the largest CodeWorld interaction out there, hit the time limit of the compiler.

Luckily, Chris Smith just arrived at the venue, and he emergency-bumped the compiler time limit. The program compiled and I could start my presentation.

Unfortunately, the biggest blunder was still awaiting me. I came to the slide where two instances of Pong are played over a simulated network, and my point was that the two instances are perfectly in sync. Unfortunately, they were not. I guess it did support my point that lock-step simulation can easily go wrong, but it really left me out in the rain there, as I could not explain it – I had not modified this code since New York, and there it had worked flawlessly2. In the end, I could save face a bit by running the real Pong game against an attendee over the network, and no desynchronisation could be observed there.

Today I dug into it, and it took me a while, but it turned out that the problem was not in CodeWorld, or in the lock-step simulation code discussed in our paper about it, but in the code in my presentation that simulated the delayed network messages: in some instances it would deliver the UI events in a different order to the two simulated players, and hence cause them to do something different. Phew.
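The failure mode is easy to reproduce in miniature. Here is a Python sketch (not the CodeWorld code) of a deterministic lock-step state that desyncs as soon as two clients consume the same events in different orders:

```python
def apply_events(events):
    # each client folds the event stream into its state deterministically;
    # the update is order-sensitive, as game state updates generally are
    state = 0
    for e in events:
        state = state * 31 + e
    return state

in_order  = [1, 2, 3]
reordered = [2, 1, 3]  # same events, delivered in a different order

assert apply_events(in_order) == apply_events(in_order)   # clients in sync
assert apply_events(in_order) != apply_events(reordered)  # desynchronised
```

Lock-step simulation only guarantees identical states if every client sees an identical event sequence, which is why the bug lived in the message-delay simulation rather than in the simulation core.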

1. Yes, the retail giant. Turns out that they have a small but enthusiastic Haskell-using group in their IT department.

2. I hope the video is going to be online soon, then you can check for yourself.

Farewell, green cap

Fri, 05 May 2017 18:13:40 -0400

For the last two years, I was known among swing dancers for my green flat cap:

(image)

Monti, a better model than me

This cap was very special: It was a gift from a good friend who sewed it by hand from what used to be a table cloth of my deceased granny, and it has traveled with me to many corners of the world.

Just like last week, when I was in Paris, where I attended the Charleston class of Romuald and Laura on Saturday (April 29). The following Tuesday I went to a Swing Social and wanted to put on the hat, and noticed that it was gone. The next day I bugged the manager and the caretaker of the venue of the class (Salles Sainte-Roche), and it seems that the hat was still there that morning, in Salle Kurtz1, but when I went there it was gone.

(image)

The last picture with the hat

1. How fitting, given that my granny’s maiden name is Kurz.

ghc-proofs rules more now

Thu, 27 Apr 2017 23:11:38 -0400