Fri, 02 Feb 2018 14:01:11 -0500
One of the great strengths of strongly typed functional programming is that it allows type-driven development. When I have some non-trivial function to write, I first write its type signature, and then writing the implementation is often very obvious. Once more, I am feeling silly: in fact, it often is completely mechanical. Consider the following function:

```haskell
foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a,b))
```

This is somewhat like the bind for a combination of the error monad and the reader monad, and remembers the intermediate result, but that doesn’t really matter now. What matters is that once I wrote that type signature, I feel silly having to also write the code, because there isn’t really anything interesting about that. Instead, I’d like to tell the compiler to just do it for me! I want to be able to write

```haskell
foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a,b))
foo = justDoIt
```

And now I can! Assuming I am using GHC HEAD (or eventually GHC 8.6), I can run cabal install ghc-justdoit, and then the following code actually works:

```haskell
{-# OPTIONS_GHC -fplugin=GHC.JustDoIt.Plugin #-}
import GHC.JustDoIt

foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a,b))
foo = justDoIt
```

What is this justDoIt?

```
*GHC.LJT GHC.JustDoIt> :browse GHC.JustDoIt
class JustDoIt a
justDoIt :: JustDoIt a => a
(…) :: JustDoIt a => a
```

Note that there are no instances for the JustDoIt class -- they are created, on the fly, by the GHC plugin GHC.JustDoIt.Plugin. During type-checking, it looks at these JustDoIt t constraints and tries to construct a term of type t. It is based on Dyckhoff’s LJT proof search in intuitionistic propositional calculus, which I have implemented to work directly on GHC’s types and terms (and I find it pretty slick). Those who like Unicode can write (…) instead.

What is supported right now?
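For comparison, here is the implementation one would write by hand for foo above, a sketch of the kind of term an LJT-style proof search finds (the plugin synthesizes a Core term; this is merely the obvious source-level equivalent):

```haskell
-- Hand-written version of foo: purely mechanical. Thread the environment r,
-- sequence the two Either computations, and remember the intermediate result.
foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a, b))
foo f g = \r -> case f r of
  Left e  -> Left e
  Right a -> case g a r of
    Left e  -> Left e
    Right b -> Right (a, b)
```

There is no interesting choice anywhere in this code, which is exactly why it feels silly to write it by hand.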
Because I am working directly in GHC’s representation, it is pretty easy to support user-defined data types and newtypes. So it works just as well for

```haskell
data Result a b = Failure a | Success b

newtype ErrRead r e a = ErrRead { unErrRead :: r -> Result e a }

foo2 :: ErrRead r e a -> (a -> ErrRead r e b) -> ErrRead r e (a,b)
foo2 = (…)
```

It doesn’t infer coercions or type arguments or any of that fancy stuff, and carefully steps around anything that looks like it might be recursive.

How do I know that it creates a sensible implementation?

You can check the generated Core using -ddump-simpl, of course. But it is much more convenient to use inspection-testing to test such things, as I am doing in the Demo file, which you can skim to see a few more examples of justDoIt in action. I very much enjoyed reaping the benefits of the work I put into inspection-testing, as this is so much more convenient than manually checking the output.

Is this for real? Should I use it?

Of course you are welcome to play around with it, and it will not launch any missiles, but at the moment, I consider this a prototype that I created for two purposes:

- To demonstrate that you can use type checker plugins for program synthesis. Depending on what you need, this might allow you to provide a smoother user experience than the alternatives, which are: preprocessors, Template Haskell, generic programming together with type-level computation (e.g. generic-lens), and GHC Core-to-Core plugins. In order to make this viable, I slightly changed the API for type checker plugins, which are now free to produce arbitrary Core terms as they solve constraints.
- To advertise the idea of taking type-driven computation to its logical conclusion and free users from having to implement functions that they have already specified sufficiently precisely by their type.

What needs to happen for this to become real?
A bunch of things:

- The LJT implementation is somewhat neat, but I probably did not implement backtracking properly, and there might be more bugs.
- The implementation is very much unoptimized.
- For this to be practically useful, the user needs to be able to use it with confidence. In particular, the user should be [...]

Tue, 05 Dec 2017 09:17:43 -0500
Last week, I wrote a small nifty tool called bisect-binary, which semi-automates answering the question “To what extent can I fill this file up with zeroes and still have it working?” I wrote it in Haskell, and part of the Haskell code, in the Intervals.hs module, is a data structure for “subsets of a file”, represented as a sorted list of intervals:

```haskell
data Interval = I { from :: Offset, to :: Offset }
newtype Intervals = Intervals [Interval]
```

The code is the kind of Haskell code that I like to write: a small local recursive function, a few guards for case analysis, and I am done:

```haskell
intersect :: Intervals -> Intervals -> Intervals
intersect (Intervals is1) (Intervals is2) = Intervals $ go is1 is2
  where
    go _ [] = []
    go [] _ = []
    go (i1:is1) (i2:is2)
        -- reorder for symmetry
        | to i1 < to i2 = go (i2:is2) (i1:is1)
        -- disjoint
        | from i1 >= to i2 = go (i1:is1) is2
        -- subset
        | to i1 == to i2 = I f' (to i2) : go is1 is2
        -- overlapping
        | otherwise = I f' (to i2) : go (i1 { from = to i2 } : is1) is2
      where f' = max (from i1) (from i2)
```

But clearly, the code is already complicated enough that it is easy to make a mistake. I could have put in some QuickCheck properties to test the code, but I was in a proving mood...

Now available: Formal Verification for Haskell

Ten months ago I complained that there was no good way to verify Haskell code (and created the nifty hack ghc-proofs). But things have changed since then, as a group at UPenn (mostly Antal Spector-Zabusky, Stephanie Weirich and myself) has created hs-to-coq: a translator from Haskell to the theorem prover Coq. We have used hs-to-coq on various examples, as described in our CPP'18 paper, but it is high time to use it for real. The easiest way to use hs-to-coq at the moment is to clone the repository, copy one of the example directories (e.g. examples/successors), place the Haskell file to be verified there and put the right module name into the Makefile.
I also commented out parts of the Haskell file that would drag in non-base dependencies.

Massaging the translation

Often, hs-to-coq translates Haskell code without a hitch, but sometimes a bit of help is needed. In this case, I had to specify three so-called edits:

- The Haskell code uses Intervals both as a name for a type and for a value (the constructor). This is fine in Haskell, which has separate value and type namespaces, but not for Coq. The line

  ```
  rename value Intervals.Intervals = ival
  ```

  changes the constructor name to ival.

- I use the Int64 type in the Haskell code. The Coq version of Haskell’s base library that comes with hs-to-coq does not support that yet, so I change that via

  ```
  rename type GHC.Int.Int64 = GHC.Num.Int
  ```

  to the normal Int type, which itself is mapped to Coq’s Z type. This is not a perfect fit, and my verification would not catch problems that arise due to the boundedness of Int64. Since none of my code does arithmetic, only comparisons, I am fine with that.

- The biggest hurdle is the recursion of the local go functions. Coq requires all recursive functions to be obviously (i.e. structurally) terminating, and the go above is not. For example, in the first case, the arguments to go are simply swapped. It is very much not obvious why this is not an infinite loop.

I can specify a termination measure, i.e. a function that takes the arguments xs and ys and returns a “size” of type nat that decreases in every call: add the lengths of xs and ys, multiply by two, and add one if the first interval in xs ends before the first interval in ys. If the problematic function were a top-level function, I could tell hs-to-coq about this termination measure and it would use this information to define the function using Program Fixpoint. Unfortunately, go is a local function, so this mechanism is not available to us.
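The measure just described can be written out in Haskell for illustration (the real termination argument lives on the Coq side; this is only a sketch of the idea, with Offset simplified to Int):

```haskell
-- Sketch of the termination measure for go: twice the total number of
-- intervals, plus one if the first interval of xs ends before the first
-- interval of ys. The "reorder for symmetry" case merely toggles the +1
-- bit (so the measure drops by one), and every other case of go discards
-- at least one interval (so the measure drops by at least one).
data Interval = I { from :: Int, to :: Int }

measure :: [Interval] -> [Interval] -> Int
measure xs ys = 2 * (length xs + length ys) + firstEndsFirst
  where
    firstEndsFirst = case (xs, ys) of
      (i1:_, i2:_) | to i1 < to i2 -> 1
      _                            -> 0
```

Checking each branch of go against this function is a pleasant finite exercise, which is exactly what makes it a usable measure for Program Fixpoint.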
If I care more about the verification than about preserving the exact Haskell code, I could easily change the Haskell code to make go a top-level function, but in this case I did not want to c[...]Sun, 10 Sep 2017 11:10:16 +0100
Yesterday, at the Haskell Implementors Workshop 2017 in Oxford, I gave a lightning talk titled “syntactic musings”, where I presented three possibly useful syntactic features that one might want to add to a language like Haskell. The talk caused quite some heated discussions, and since the Internet likes heated discussions, I will happily share these ideas with you.

Context aka. Sections

This is probably the most relevant of the three proposals. Consider a bunch of related functions, say analyseExpr and analyseAlt, like these:

```haskell
analyseExpr :: Expr -> Expr
analyseExpr (Var v) = change v
analyseExpr (App e1 e2) = App (analyseExpr e1) (analyseExpr e2)
analyseExpr (Lam v e) = Lam v (analyseExpr e)
analyseExpr (Case scrut alts) = Case (analyseExpr scrut) (analyseAlt <$> alts)

analyseAlt :: Alt -> Alt
analyseAlt (dc, pats, e) = (dc, pats, analyseExpr e)
```

You have written them, but now you notice that you need to make them configurable, e.g. to do different things in the Var case. You thus add a parameter to all these functions, and hence an argument to every call:

```haskell
type Flag = Bool

analyseExpr :: Flag -> Expr -> Expr
analyseExpr flag (Var v) = if flag then change1 v else change2 v
analyseExpr flag (App e1 e2) = App (analyseExpr flag e1) (analyseExpr flag e2)
analyseExpr flag (Lam v e) = Lam v (analyseExpr (not flag) e)
analyseExpr flag (Case scrut alts) = Case (analyseExpr flag scrut) (analyseAlt flag <$> alts)

analyseAlt :: Flag -> Alt -> Alt
analyseAlt flag (dc, pats, e) = (dc, pats, analyseExpr flag e)
```

I find this code problematic. The intention was: “flag is a parameter that an external caller can use to change the behaviour of this code, but when reading and reasoning about this code, flag should be considered constant.” But this intention is neither easily visible nor enforced. And in fact, in the above code, flag does “change”, as analyseExpr passes something else in the Lam case.
The idiom is indistinguishable from the environment idiom, where a locally changing environment (such as “variables in scope”) is passed around. So we are facing exactly the same problem as when reasoning about a loop in an imperative program with mutable variables. And we (pure functional programmers) should know better: we cherish immutability! We want to bind our variables once and have them scope over everything we need to scope over!

The solution I’d like to see in Haskell is common in other languages (Gallina, Idris, Agda, Isar), and this is what it would look like here:

```haskell
type Flag = Bool

section (flag :: Flag) where
  analyseExpr :: Expr -> Expr
  analyseExpr (Var v) = if flag then change1 v else change2 v
  analyseExpr (App e1 e2) = App (analyseExpr e1) (analyseExpr e2)
  analyseExpr (Lam v e) = Lam v (analyseExpr e)
  analyseExpr (Case scrut alts) = Case (analyseExpr scrut) (analyseAlt <$> alts)

  analyseAlt :: Alt -> Alt
  analyseAlt (dc, pats, e) = (dc, pats, analyseExpr e)
```

Now the intention is clear: within a clearly marked block, flag is fixed, and when reasoning about this code I do not have to worry that it might change. Either all variables will be passed to change1, or all to change2. An important distinction! Therefore, inside the section, the type of analyseExpr does not mention Flag, whereas outside it its type is Flag -> Expr -> Expr. This is a bit unusual, but not completely: you see precisely the same effect in a class declaration, where the type signatures of the methods do not mention the class constraint, but outside the declaration they do.

Note that idioms like implicit parameters or the Reader monad do not give the guarantee that the parameter is (locally) constant.

More details can be found in the GHC proposal that I prepared, and I invite you to raise concerns or voice support there.
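Until such a section construct exists, the closest approximation in today's Haskell is to bind flag exactly once in a single top-level function and define the whole family of analyses in its where-block. This is a hypothetical sketch (with made-up change1/change2 behaviour and a trimmed-down Expr) rather than anything from the proposal:

```haskell
-- Workaround sketch: one top-level binding fixes flag once; inside the
-- where-block, flag is visibly constant, much as it would be in a section.
type Flag = Bool
data Expr = Var String | App Expr Expr deriving (Eq, Show)

analyses :: Flag -> (Expr -> Expr)
analyses flag = analyseExpr
  where
    -- change1/change2 are hypothetical stand-ins for the configurable part
    analyseExpr (Var v)     = if flag then Var (v ++ "1") else Var (v ++ "2")
    analyseExpr (App e1 e2) = App (analyseExpr e1) (analyseExpr e2)
```

The downside, compared to a real section, is the syntactic noise of packing and unpacking the function family when there is more than one exported function.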
Curiously, this problem must have bothered me for longer than I remember: I discovered that, seven years ago, I wrote a Template Haskell based implementation of this idea in the seal-module package!

Less parentheses 1: Bulleted argument li[...]

Sun, 20 Aug 2017 20:50:10 +0200
Three months ago, I gave a talk at the Compose::Conference in New York about how Chris Smith and I added the ability to create networked multi-user programs to the educational Haskell programming environment CodeWorld, and finally the recording of the talk is available on YouTube (and is being discussed on reddit):
It was the talk where I got the most positive feedback afterwards, and I think this is partly due to how I created the presentation: instead of showing static slides, I programmed the complete visual display from scratch as an “interaction” within the CodeWorld environment, including all transitions, a working embedded game of Pong and a simulated multi-player environment with adjustable message delays. I have put the code for the presentation online.
Chris and I have written about this for ICFP'17, and thanks to open access I can actually share the paper freely with you and under a CC license. If you come to Oxford you can see me perform a shorter version of this talk again.
Sun, 06 Aug 2017 11:14:05 -0400
I am still far from being a professor, but I recently got a glimpse of what awaits you in that role…

From: Sebastian R. <…@gmail.com>
To: joachim@cis.upenn.edu
Subject: re: Errors

I've spotted a basic error in your course on Haskell (https://www.seas.upenn.edu/~cis194/fall16/). Before I proceed, it's cool if you're not receptive to errors being indicated; I've come across a number of professors who would rather take offense than admit we're all human and thus capable of making mistakes... My goal is to find a resource that might be useful well into the future, and a good indicator of that is how responsive the author is to change. In your introduction note you have written:

> In contrast to a classical intro into Haskell, we do not start with numbers, booleans, tuples, lists and strings, but we start with pictures. These are of course library-defined (hence the input CodeWorld) and not part of “the language”. But that does not make them less interesting, and in fact, even the basic boolean type is library defined – it just happens to be the standard library.

Howeverm there is no input CodeWorld in the code above. Have you been made aware of this error earlier?

Regards, ...

Nice. I like when people learn from my lectures. The introduction is a bit weird, but ok, maybe this guy had some bad experiences. Strangely, I don’t see a mistake in the material, so I respond:

From: Joachim Breitner joachim@cis.upenn.edu
To: Sebastian R. <…@gmail.com>
Subject: Re: Errors

Dear Sebastian,

thanks for pointing out errors. But the first piece of code under “Basic Haskell” starts with

```haskell
{-# LANGUAGE OverloadedStrings #-}
import CodeWorld
```

so I am not sure what you are referring to. Note that these are lecture notes, so you have to imagine a lecturer editing code live on stage along with it. If you only have the notes, you might have to infer a few things.

Regards,
Joachim

A while later, I receive this response:

From: Sebastian R.
<…@gmail.com>
To: Joachim Breitner joachim@cis.upenn.edu
Subject: Re: Errors

Greetings, Joachim.

Kindly open the lecture slides and search for "input CodeWorld" to find the error; it is not in the code, but in the paragraph that implicitly refers back to the code. You might note that I quoted this precisely from the lectures... and so I repeat myself... this came from your lectures; they're not my words!

> In contrast to a classical intro into Haskell, we do not start with numbers, booleans, tuples, lists and strings, but we start with pictures. These are of course library-defined (hence the input CodeWorld) and not part of “the language”. But that does not make them less interesting, and in fact, even the basic boolean type is library defined – it just happens to be the standard library.

This time around, I've highlighted the issue. I hope that made it easier for you to spot... Nonetheless, I got my answer. Don't reply if you're going to fight tooth and nail about such a basic fix; it's simply a waste of both of our time. I'd rather learn from somewhere else...

On Tue, Aug 1, 2017 at 11:19 PM, Joachim Breitner joachim@cis.upenn.edu wrote: …

I am a bit reminded of Sean Spicer … “they’re not my words!” … but clearly I am missing something. And indeed I am: in the code snippet, I wrote – correctly – import CodeWorld, but in the text I had input CodeWorld. I probably did write LaTeX before writing the lecture notes. Well, glad to have that sorted out. I fixed the mistake and wrote back:

From: Joachim Breitner joachim@cis.upenn.edu
To: Sebastian R. <…@gmail.com>
Betreff: Re: Errors

Dear Sebastian,

nobody is fighting, and I see the mistake now: the problem is not that the line is not in the code, the problem is that there is a typo in the line and I wrote “input” instead of “import”.
Thanks for the report, although you did turn it into quite a riddle… a simple “you wrote input when it should have been import” would have been a better user[...]

Thu, 27 Jul 2017 22:05:32 -0400
Earlier today, I demonstrated how to work with coinduction in the theorem provers Isabelle, Coq and Agda, with a very simple example. This reminded me of a discussion I had in Karlsruhe with my then colleague Denis Lohner: if coinduction is the dual of induction, why do the induction principles look so different? I like what we observed there, so I’d like to share this.

The following is mostly based on my naive understanding of coinduction, based on what I observe in the implementation in Isabelle. I am sure that a different, more categorial presentation of datatypes (as initial resp. terminal objects in some category of algebras) makes the duality more obvious, but that does not necessarily help the working Isabelle user who wants to make sense of coinduction.

Inductive lists

I will use the usual polymorphic list data type as an example. So on the one hand, we have normal, finite inductive lists:

```
datatype 'a list = nil | cons (hd : 'a) (tl : "'a list")
```

with the well-known induction principle that many of my readers know by heart (syntax slightly un-isabellized):

```
P nil → (∀x xs. P xs → P (cons x xs)) → ∀ xs. P xs
```

Coinductive lists

In contrast, if we define our lists coinductively to get possibly infinite, Haskell-style lists, by writing

```
codatatype 'a llist = lnil | lcons (hd : 'a) (tl : "'a llist")
```

we get the following coinduction principle:

```
(∀ xs ys. R xs ys → (xs = lnil) = (ys = lnil) ∧ (xs ≠ lnil ⟶ ys ≠ lnil ⟶ hd xs = hd ys ∧ R (tl xs) (tl ys)))
→ (∀ xs ys. R xs ys → xs = ys)
```

This is less scary than it looks at first. It tells you: “if you give me a relation R between lists which implies that either both lists are empty or both lists are nonempty, and furthermore, if both are non-empty, that they have the same head and tails related by R, then any two lists related by R are actually equal.” If you think of the infinite list as a series of states of a computer program, then this is nothing else than a bisimulation.
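As a small aside for Haskell readers: the "same shape, pairwise related elements" relation that drives these principles is, on finite lists, just the obvious inductive definition (a sketch; on genuinely infinite lists one would need the coinductive, productive reading instead, which a terminating Haskell function cannot decide):

```haskell
-- Inductive "same shape, related elements" relation on finite lists.
-- This is the Haskell counterpart of Isabelle's rel_list; the coinductive
-- rel_llist is the same clauses read productively on possibly infinite lists.
relList :: (a -> b -> Bool) -> [a] -> [b] -> Bool
relList _ []     []     = True
relList r (x:xs) (y:ys) = r x y && relList r xs ys
relList _ _      _      = False
```

Instantiating r with equality recovers ordinary list equality, which mirrors the specialization step discussed next.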
So we have two proof principles, both of which make intuitive sense. But how are they related? They look very different! In one, we have a predicate P, in the other a relation R, to point out just one difference.

Relation induction

To see how they are dual to each other, we have to recognize that both these theorems are actually specializations of a more general (co)induction principle. The datatype declaration automatically creates a relator:

```
rel_list :: ('a → 'b → bool) → 'a list → 'b list → bool
```

The definition of rel_list R xs ys is that xs and ys have the same shape (i.e. length), and that the corresponding elements are pairwise related by R. You might have defined this relation yourself at some time, and if so, you probably introduced it as an inductive predicate. So it is not surprising that the following induction principle characterizes this relation:

```
Q nil nil → (∀x xs y ys. R x y → Q xs ys → Q (cons x xs) (cons y ys)) → (∀ xs ys. rel_list R xs ys → Q xs ys)
```

Note how similar this lemma is in shape to the normal induction for lists above! And indeed, if we choose Q xs ys ↔ (P xs ∧ xs = ys) and R x y ↔ (x = y), then we obtain exactly that. In that sense, the relation induction is a generalization of the normal induction.

Relation coinduction

The same observation can be made in the coinductive world. Here, as well, the codatatype declaration introduces a function

```
rel_llist :: ('a → 'b → bool) → 'a llist → 'b llist → bool
```

which relates lists of the same shape with related elements – only that this one also relates infinite lists, and therefore is a coinductive relation. The corresponding rule for proof by coinduction is not surprising and should remind you of bisimulation, too:

```
(∀xs ys. R xs ys → (xs = lnil) = (ys = lnil) ∧ (xs ≠ lnil ⟶ ys ≠ lnil ⟶ Q (hd xs) (hd ys) ∧ R (tl xs) (tl ys)))
→ (∀ [...]
```

Fri, 23 Jun 2017 16:54:36 -0700
Yesterday, I was giving a talk at the South SF Bay Haskell User Group about how implementing lock-step simulation is trivial in Haskell and how Chris Smith and I are using this to make CodeWorld even more attractive to students. I gave the talk before, at Compose::Conference in New York City earlier this year, so I felt well prepared. On the flight to the West Coast I slightly extended the slides, and as I was too cheap to buy in-flight WiFi, I tested them only locally.
So I arrived at the offices of Target^{1} in Sunnyvale, got on the WiFi, uploaded my slides, which are in fact one large interactive CodeWorld program, and tried to run it. But I got a type error…
Turns out that the API of CodeWorld was changed just the day before:
```
commit 054c811b494746ec7304c3d495675046727ab114
Author: Chris Smith
Date:   Wed Jun 21 23:53:53 2017 +0000

    Change dilated to take one parameter.

    Function is nearly unused, so I'm not concerned about breakage.
    This new version better aligns with standard educational usage,
    in which "dilation" means uniform scaling. Taken as a separate
    operation, it commutes with rotation, and preserves similarity
    of shapes, neither of which is true of scaling in general.
```
Ok, that was quick to fix, and the CodeWorld server started to compile my code, and compiled, and aborted. It turned out that my program, presumably the largest CodeWorld interaction out there, hit the time limit of the compiler.
Luckily, Chris Smith just arrived at the venue, and he emergency-bumped the compiler time limit. The program compiled and I could start my presentation.
Unfortunately, the biggest blunder was still awaiting me. I came to the slide where two instances of pong are played over a simulated network, and my point was that the two instances are perfectly in sync. Unfortunately, they were not. I guess it did support my point that lock-step simulation can easily go wrong, but it really left me out in the rain there, and I could not explain it – I had not modified this code since New York, and there it worked flawlessly^{2}. In the end, I could save face a bit by running the real pong game against an attendee over the network, and no desynchronisation could be observed there.
Today I dug into it, and it took me a while, but it turned out that the problem was not in CodeWorld, or in the lock-step simulation code discussed in our paper about it, but in the code in my presentation that simulated the delayed network messages; in some instances it would deliver the UI events in a different order to the two simulated players, and hence cause them to do something different. Phew.
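The failure mode is easy to reproduce in miniature. Lock-step simulation only works if every client folds the same sequence of events into the shared state; as soon as the state update is not commutative, delivering the same events in a different order makes the clients diverge. A hypothetical sketch (not the actual presentation code):

```haskell
-- Minimal model of a lock-step client: the state is just an Int position,
-- and the update is "last writer wins", which does not commute.
data Event = MoveTo Int deriving Show

step :: Int -> Event -> Int
step _ (MoveTo x) = x   -- order-sensitive: the last MoveTo determines the state

runClient :: [Event] -> Int
runClient = foldl step 0

-- Two clients receiving [MoveTo 1, MoveTo 2] and [MoveTo 2, MoveTo 1]
-- end up in states 2 and 1 respectively: perfectly deterministic each,
-- yet permanently out of sync with each other.
```

A delay simulator therefore must only delay delivery, never reorder it (or the event sequence must be totally ordered by timestamps before it is applied).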
Thu, 27 Apr 2017 23:11:38 -0400
A few weeks ago I blogged about an experiment of mine, where I proved equalities of Haskell programs by (ab)using the GHC simplifier. For more details, please see that post, or the video of my talk at the Zürich Haskell User Group, but one reason why this approach has any chance of being useful is the compiler’s support for rewrite rules. Rewrite rules are program equations that the programmer specifies in the source file, and which the compiler then applies, from left to right, whenever some intermediate code matches the left-hand side of the equation. One such rule, for example, is

```haskell
{-# RULES "foldr/nil" forall k z. foldr k z [] = z #-}
```

taken right out of the standard library.

In my blog post I went through the algebraic laws that a small library of mine, successors, should fulfill, and sure enough, once I got to more interesting proofs, they would not go through just like that. At that point I had to add additional rules to the file I was editing, which helped the compiler to finish the proofs. Some of these rules were simple, like

```haskell
{-# RULES "mapFB/id" forall c. mapFB c (\x -> x) = c #-}
{-# RULES "foldr/nil" forall k n. GHC.Base.foldr k n [] = n #-}
{-# RULES "foldr/undo" forall xs. GHC.Base.foldr (:) [] xs = xs #-}
```

and some were more intricate, like

```haskell
{-# RULES "foldr/mapFB" forall c f g n1 n2 xs.
    GHC.Base.foldr (mapFB c f) n1 (GHC.Base.foldr (mapFB (:) g) n2 xs)
      = GHC.Base.foldr (mapFB c (f.g)) (GHC.Base.foldr (mapFB c f) n1 n2) xs
  #-}
```

But there is something fishy going on here: the foldr/nil rule is identical to a rule in the standard library! I should not have to add that to my file as I am proving things. So I knew that the GHC plugin, which I wrote to do these proofs, was doing something wrong, but I did not investigate for a while. I returned to this problem recently, and with the efficient and quick help of Simon Peyton Jones, I learned what I was doing wrong.1 After fixing it, I could remove all the simple rules from the files with my proofs.
And to my surprise, I could remove the intricate rule as well! So with this bug fixed, ghc-proofs is able to prove all the Functor, Applicative and Monad laws of the Succs functor without any additional rewrite rules, as you can see in the example file! (I still have to strategically place seqs in a few places.)

That’s great, isn’t it! Yeah, sure. But having to introduce the rules at that point provided a very good narrative in my talk, so when I give a similar talk next week in Paris (actually, twice, first at the university and then at the Paris Haskell meetup), I will have to come up with a different example that calls for additional rewrite rules.

In related news: since the last blog post, ghc-proofs learned to interpret proof specifications like

```haskell
applicative_law4 :: Succs (a -> b) -> a -> Proof
applicative_law4 u y = u <*> pure y
                   === pure ($ y) <*> u
```

where it previously only understood

```haskell
applicative_law4 = (\ u (y::a) -> u <*> (pure y :: Succs a))
               === (\ u (y::a) -> pure ($ y) <*> u)
```

I am not sure if this should be uploaded to Hackage, but I invite you to play around with the GitHub version of ghc-proofs.

In short: I did not initialize the simplifier with the right InScopeSet, so RULES about functions defined in the current module were not always applied, and I did not feed the eps_rules to the simplifier, which contains all the rules found in imported packages, such as base.↩

Fri, 21 Apr 2017 11:30:27 -0400
How hard is it to write a compiler for Haskell Core? Not too hard, actually!

I wish we had a formally verified compiler for Haskell, or at least for GHC’s intermediate language Core. Now formalizing that part of GHC itself seems to be far out of reach, with the many phases the code goes through (Core to STG to CMM to Assembly or LLVM), optimizations happening at all of these phases, and the many complicated details of the highly tuned GHC runtime (pointer tagging, support for concurrency and garbage collection).

Introducing Veggies

So to make that goal of a formally verified compiler more feasible, I set out and implemented code generation from GHC’s intermediate language Core to LLVM IR, with simplicity as the main design driving factor. You can find the result in the GitHub repository of veggies (the name derives from “verifiable GHC”). If you clone that and run ./boot.sh some-directory, you will find that you can use the program some-directory/bin/veggies just like you would use ghc. It comes with the full base library, so your favorite variant of HelloWorld might just compile and run.

As of now, the code generation handles all the Core constructs (which is easy when you simply ignore all the types). It supports a good number of primitive operations, including pointers and arrays – I implement these as needed – and has support for FFI calls into C.

Why you don't want to use Veggies

Since the code generator was written with simplicity in mind, performance of the resulting code is abysmal: everything is boxed, i.e. represented as a pointer to some heap-allocated data, including “unboxed” integer values and “unboxed” tuples. This is very uniform and simplifies the code, but it is also slow, and because there is no garbage collection (and probably never will be for this project), it will fill up your memory quickly. Also, the code currently only supports 64-bit architectures, and this is hard-coded in many places. There is no support for concurrency.
Why it might be interesting to you nevertheless

So if it is not really usable to run programs with, should you care about it? Probably not, but maybe you do for one of these reasons:

- You always wondered how a compiler for Haskell actually works, and reading through a little over a thousand lines of code is less daunting than reading through the 34k lines of code that is GHC’s backend.
- You have wacky ideas about code generation for Haskell that you want to experiment with.
- You have wacky ideas about Haskell that require special support in the backend, and want to prototype that.
- You want to see how I use the GHC API to provide a ghc-like experience. (I copied GHC’s Main.hs and inserted a few hooks, an approach I copied from GHCJS.)
- You want to learn about running Haskell programs efficiently, and starting from veggies, you can implement all the tricks of the trade yourself and enjoy observing the speed-ups you get.
- You want to compile Haskell code to some weird platform that is supported by LLVM, but where you for some reason cannot run GHC’s runtime. (Because there are no threads and no garbage collection, the code generated by veggies does not require a runtime system.)
- You want to formally verify Haskell code generation. Note that the code generator targets the same AST for LLVM IR that the vellvm2 project uses, so eventually, veggies can become a verified arrow in the top right corner of the map of the DeepSpec project.

So feel free to play around with veggies, and report any issues you have on the GitHub repository.[...]

Mon, 06 Feb 2017 19:38:57 -0500
Last week, while working on CodeWorld, via a sequence of yak shavings, I ended up creating a nicely small library that provides Control.Applicative.Succs, a new applicative functor. And because I am trying to keep my Haskell karma good1, I wanted to actually prove that my code fulfills the Applicative and Monad laws. This led me to writing long comments into my code, filled with lines like this, for the second Applicative law:

```
pure (.) <*> Succs u us <*> Succs v vs <*> Succs w ws
= Succs (.) [] <*> Succs u us <*> Succs v vs <*> Succs w ws
= Succs (u .) (map (.) us) <*> Succs v vs <*> Succs w ws
= Succs (u . v) (map ($v) (map (.) us) ++ map (u .) vs) <*> Succs w ws
= Succs (u . v) (map (($v).(.)) us ++ map (u .) vs) <*> Succs w ws
= Succs ((u . v) w) (map ($w) (map (($v).(.)) us ++ map (u .) vs) ++ map (u.v) ws)
= Succs ((u . v) w) (map (($w).($v).(.)) us ++ map (($w).(u.)) vs ++ map (u.v) ws)
= Succs (u (v w)) (map (\u -> u (v w)) us ++ map (\v -> u (v w)) vs ++ map (\w -> u (v w)) ws)
= Succs (u (v w)) (map ($(v w)) us ++ map u (map ($w) vs ++ map v ws))
= Succs u us <*> Succs (v w) (map ($w) vs ++ map v ws)
= Succs u us <*> (Succs v vs <*> Succs w ws)
```

Honk if you have done something like this before! I proved all the laws, but I was very unhappy. I have a PhD on something about Haskell and theorem proving. I have worked with Isabelle, Agda and Coq. Both Haskell and theorem proving are decades old. And yet, here I sit, and tediously write manual proofs by hand. Is this really the best we can do? Of course, I could have taken my code, rewritten it in, say, Agda, and proved it correct there. But (right now) I don’t care about Agda code. I care about my Haskell code! I don’t want to write it twice, worry about copying mistakes and mismatches in semantics, and have external proofs to maintain. Instead, I want to prove where I code, and have the proofs checked together with my code! Then it dawned on me that this is, to some extent, possible.
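For reference, the instances whose laws the calculation above establishes can be read off from its steps (a sketch; the authoritative definitions live in the successors package):

```haskell
-- Sketch of the Succs functor: a value together with a list of "successor"
-- values. The <*> clause matches the steps in the law calculation above:
-- Succs u us <*> Succs v vs = Succs (u v) (map ($ v) us ++ map u vs)
data Succs a = Succs a [a] deriving (Eq, Show)

instance Functor Succs where
  fmap f (Succs x xs) = Succs (f x) (map f xs)

instance Applicative Succs where
  pure x = Succs x []
  Succs f fs <*> Succs x xs = Succs (f x) (map ($ x) fs ++ map f xs)
```

With the instances in hand, the laws can at least be spot-checked on concrete values before embarking on the symbolic proof.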
The Haskell compiler comes with a sophisticated program transformation machinery, which is meant to simplify and optimize code. But it can also be used to prove Haskell expressions to be equivalent! The idea is simple: Take two expressions, run both through the compiler’s simplifier, and check if the results are the same. If they are, then the expressions are, as far as the compiler is concerned, equivalent. A handful of hours later, I was able to write proof tasks like

app_law_2 = (\ a b (c::Succs a) -> pure (.) <*> a <*> b <*> c)
        === (\ a b c -> a <*> (b <*> c))

and others into my source file, and the compiler would tell me happily:

[1 of 1] Compiling Successors ( Successors.hs, Successors.o )
GHC.Proof: Proving getCurrent_proof1 …
GHC.Proof: Proving getCurrent_proof2 …
GHC.Proof: Proving getCurrent_proof3 …
GHC.Proof: Proving ap_star …
GHC.Proof: Proving getSuccs_proof1 …
GHC.Proof: Proving getSuccs_proof2 …
GHC.Proof: Proving getSuccs_proof3 …
GHC.Proof: Proving app_law_1 …
GHC.Proof: Proving app_law_2 …
GHC.Proof: Proving app_law_3 …
GHC.Proof: Proving app_law_4 …
GHC.Proof: Proving monad_law_1 …
GHC.Proof: Proving monad_law_2 …
GHC.Proof: Proving monad_law_3 …
GHC.Proof: Proving return_pure …
GHC.Proof proved 15 equalities

This is how I want to prove stuff about my code! Do you also want to prove stuff about your code? I packaged this up as a GHC plugin in the Haskell library ghc-proofs (not yet on Hackage). The README of the repository has a bit more detail on how to use this plugin, how it works, what its limitations are and where this is heading. This is still only a small step, but finally there is a step towards low-threshold program equivalence proofs in Haskell. Or rather recover my karma after such abominations as gh[...]Fri, 20 Jan 2017 13:03:17 -0500
More than five years ago I blogged about the “configuration problem” and a proposed solution for Haskell, which turned into some Template Haskell hacks in the seal-module package.
With the new GHC proposal process in place, I am suddenly much more inclined to write up my weird wishes for the Haskell language in proposal form, to make them more coherent, get feedback, and maybe (maybe) actually get them implemented. But even if the proposal is rejected, it is still a nice forum to discuss these ideas.
So I turned my Template Haskell hack into a proposed new syntactic feature. The idea is shamelessly stolen from Isabelle, including some of the keywords, and would allow you to write
context fixes progName in
foo :: Maybe Int -> Either String Int
foo Nothing = Left $ progName ++ ": no number given"
foo (Just i) = bar i
bar :: Int -> Either String Int
bar 0 = Left $ progName ++ ": zero no good"
bar n = Right $ n + 1
instead of
foo :: String -> Maybe Int -> Either String Int
foo progName Nothing = Left $ progName ++ ": no number given"
foo progName (Just i) = bar progName i
bar :: String -> Int -> Either String Int
bar progName 0 = Left $ progName ++ ": zero no good"
bar progName n = Right $ n + 1
when you want to have an “almost constant” parameter.
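In the meantime, one way to approximate the effect of the proposed feature in today’s GHC is the ImplicitParams extension. The following is my own sketch of that workaround, not part of the proposal; the implicit-parameter constraint still has to be repeated in every signature, which is exactly the noise the proposal would remove.

```haskell
{-# LANGUAGE ImplicitParams #-}

-- Approximating the "almost constant" parameter with an implicit
-- parameter. This sketch is a workaround, not the proposed feature:
-- the (?progName :: String) constraint must appear in each signature.
foo :: (?progName :: String) => Maybe Int -> Either String Int
foo Nothing  = Left $ ?progName ++ ": no number given"
foo (Just i) = bar i

bar :: (?progName :: String) => Int -> Either String Int
bar 0 = Left $ ?progName ++ ": zero no good"
bar n = Right $ n + 1
```

A caller binds the parameter once, e.g. `let ?progName = "demo" in foo (Just 5)`, which is roughly what `context fixes progName in` would do implicitly for a whole block of definitions.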
I am happy to get feedback at the GitHub pull request.
Wed, 26 Oct 2016 00:00:00 -0400
My plan for this week’s lecture of the CIS 194 Haskell course at the University of Pennsylvania is to dwell a bit on the concept of Functor, Applicative and Monad, and to highlight the value of the Applicative abstraction. I quite like the example that I came up with, so I want to share it here. In the interest of long-term archival and stand-alone presentation, I include all the material in this post.^{1}

Imports

In case you want to follow along, start with these imports:

import Data.Char
import Data.Maybe
import Data.List
import System.Environment
import System.IO
import System.Exit

The parser

The starting point for this exercise is a fairly standard parser-combinator monad, which happens to be the result of the students’ homework from last week:

newtype Parser a = P (String -> Maybe (a, String))

runParser :: Parser t -> String -> Maybe (t, String)
runParser (P p) = p

parse :: Parser a -> String -> Maybe a
parse p input = case runParser p input of
    Just (result, "") -> Just result
    _ -> Nothing -- handles both no result and leftover input

noParserP :: Parser a
noParserP = P (\_ -> Nothing)

pureParserP :: a -> Parser a
pureParserP x = P (\input -> Just (x,input))

instance Functor Parser where
    fmap f p = P $ \input -> do
        (x, rest) <- runParser p input
        return (f x, rest)

instance Applicative Parser where
    pure = pureParserP
    p1 <*> p2 = P $ \input -> do
        (f, rest1) <- runParser p1 input
        (x, rest2) <- runParser p2 rest1
        return (f x, rest2)

instance Monad Parser where
    return = pure
    p1 >>= k = P $ \input -> do
        (x, rest1) <- runParser p1 input
        runParser (k x) rest1

anyCharP :: Parser Char
anyCharP = P $ \input -> case input of
    (c:rest) -> Just (c, rest)
    [] -> Nothing

charP :: Char -> Parser ()
charP c = do
    c' <- anyCharP
    if c == c' then return () else noParserP

anyCharButP :: Char -> Parser Char
anyCharButP c = do
    c' <- anyCharP
    if c /= c' then return c' else noParserP

letterOrDigitP :: Parser Char
letterOrDigitP = do
    c <- anyCharP
    if isAlphaNum c then return c else noParserP
orElseP :: Parser a -> Parser a -> Parser a
orElseP p1 p2 = P $ \input -> case runParser p1 input of
    Just r -> Just r
    Nothing -> runParser p2 input

manyP :: Parser a -> Parser [a]
manyP p = (pure (:) <*> p <*> manyP p) `orElseP` pure []

many1P :: Parser a -> Parser [a]
many1P p = pure (:) <*> p <*> manyP p

sepByP :: Parser a -> Parser () -> Parser [a]
sepByP p1 p2 = (pure (:) <*> p1 <*> (manyP (p2 *> p1))) `orElseP` pure []

A parser using this library for, for example, CSV files could take this form:

parseCSVP :: Parser [[String]]
parseCSVP = manyP parseLine
  where
    parseLine = parseCell `sepByP` charP ',' <* charP '\n'
    parseCell = do
        charP '"'
        content <- manyP (anyCharButP '"')
        charP '"'
        return content

We want EBNF

Often when we write a parser for a file format, we might also want to have a formal specification of the format. A common form for such a specification is EBNF. This might look as follows, for a CSV file:

cell = '"', {not-quote}, '"';
line = (cell, {',', cell} | ''), newline;
csv = {line};

It is straightforward to create a Haskell data type to represent an EBNF syntax description. Here is a simple EBNF library (data type and pretty-printer) for your convenience:

data RHS
  = Terminal String
  | NonTerminal String
  | Choice RHS RHS
  | Sequence RHS RHS
  | Optional RHS
  | Repetition RHS
  deriving (Show, Eq)

ppRHS :: RHS -> String
ppRHS = go 0
  where
    go _ (Terminal s) = surround "'" "'" $ concatMap quote s
    go _ (NonTerminal s) = s
    go a (Choice x1 x2) = p a 1 $ go 1 x1 ++ " | " ++ go [...]Mon, 05 Sep 2016 14:09:41 -0400
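To double-check the behaviour of the CSV parser described above, here is my condensed, runnable restatement of the definitions from that post (same names; letterOrDigitP and many1P omitted, and parseCell written applicatively):

```haskell
-- Condensed restatement of the parser-combinator library above,
-- so the CSV example can be run end to end.
newtype Parser a = P (String -> Maybe (a, String))

runParser :: Parser a -> String -> Maybe (a, String)
runParser (P p) = p

parse :: Parser a -> String -> Maybe a
parse p input = case runParser p input of
    Just (result, "") -> Just result
    _                 -> Nothing  -- no result, or leftover input

noParserP :: Parser a
noParserP = P (\_ -> Nothing)

instance Functor Parser where
    fmap f p = P $ \input -> do
        (x, rest) <- runParser p input
        return (f x, rest)

instance Applicative Parser where
    pure x = P (\input -> Just (x, input))
    p1 <*> p2 = P $ \input -> do
        (f, rest1) <- runParser p1 input
        (x, rest2) <- runParser p2 rest1
        return (f x, rest2)

instance Monad Parser where
    p1 >>= k = P $ \input -> do
        (x, rest1) <- runParser p1 input
        runParser (k x) rest1

anyCharP :: Parser Char
anyCharP = P $ \input -> case input of
    (c:rest) -> Just (c, rest)
    []       -> Nothing

charP :: Char -> Parser ()
charP c = do
    c' <- anyCharP
    if c == c' then return () else noParserP

anyCharButP :: Char -> Parser Char
anyCharButP c = do
    c' <- anyCharP
    if c /= c' then return c' else noParserP

orElseP :: Parser a -> Parser a -> Parser a
orElseP p1 p2 = P $ \input -> case runParser p1 input of
    Just r  -> Just r
    Nothing -> runParser p2 input

manyP :: Parser a -> Parser [a]
manyP p = (pure (:) <*> p <*> manyP p) `orElseP` pure []

sepByP :: Parser a -> Parser () -> Parser [a]
sepByP p1 p2 = (pure (:) <*> p1 <*> manyP (p2 *> p1)) `orElseP` pure []

parseCSVP :: Parser [[String]]
parseCSVP = manyP parseLine
  where
    parseLine = parseCell `sepByP` charP ',' <* charP '\n'
    parseCell = charP '"' *> manyP (anyCharButP '"') <* charP '"'
```

For example, `parse parseCSVP "\"ab\",\"cd\"\n\"e\"\n"` yields two rows, the first with two cells, just as the EBNF in the post specifies.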
The Haskell minicourse at the University of Pennsylvania, also known as CIS-194, has always had a reach beyond the students of Penn, at least since Brent Yorgey gave the course in 2013. He wrote extensive lecture notes and eventually put the material on GitHub.
This year, it is my turn to give the course. I could not resist making some changes, at least to the first few weeks: Instead of starting with a locally installed compiler, doing exercises that revolve mostly around arithmetic and lists, I send my students to CodeWorld, which is a web programming environment created by Chris Smith^{1}.
This greatly lowers the initial hurdle of having to set up the local toolchain, and is inclusive towards those who have had little exposure to the command line before. Not that I do not expect my students to handle that, but it does not hurt to move that towards later in the course.
But more importantly: CodeWorld comes with a nicely designed simple API to create vector graphics, to animate these graphics and even create interactive programs. This means that instead of having to come up with yet another set of exercises revolving around lists and numbers, I can have the students create Haskell programs that are visual. I believe that this is more motivating and stimulating, and will nudge the students to spend more time programming and thus practicing.
(Embedded here: an interactive CodeWorld program, https://code.world/run.html?mode=haskell&dhash=D7Y0vPUj4D2tE2iKoYnvZrg) In fact, the goal is that in their third homework assignment, the students will implement a fully functional, interactive Sokoban game. And all that before talking about the built-in lists or tuples, just with higher order functions and custom datatypes. (Click on the picture above, which is part of the second week’s homework. You can use the arrow keys to move the figure around and press the escape key to reset the game. Boxes cannot be moved yet -- that will be part of homework 3.)
If this sounds interesting to you, and you always wanted to learn Haskell from scratch, feel free to tag along. The lecture notes should be elaborate enough to learn from that, and with the homework problems, you should be able to tell whether you have solved it yourself. Just do not publish your solutions before the due date. Let me know if you have any comments about the course so far.
Eventually, I will move to local compilation, use of the interpreter and text-based IO and then start using more of the material of previous iterations of the course, which were held by Richard Eisenberg in 2014 and by Noam Zilberstein in 2015.
Chris has been very helpful in making sure CodeWorld works in a way that suits my course, thanks for that!↩
Tue, 30 Aug 2016 09:35:18 -0400
Chris Done’s automatic Haskell formatter hindent is released in a new version, and getting quite a bit of deserved attention. He is polling the Haskell programmers on whether two or four spaces are the right indentation. But that is just cosmetics… I am in principle very much in favor of automatic formatting, and I hope that a tool like hindent will eventually be better at formatting code than a human. But it currently is not there yet. Code is literature meant to be read, and good code goes to great lengths to be easily readable, and formatting can carry semantic information. The Haskell syntax was (at least I get that impression) designed to allow the authors to write nicely looking, easy to understand code. One important tool here is vertical alignment of corresponding concepts on different lines. Compare

maze :: Integer -> Integer -> Integer
maze x y
  | abs x > 4  || abs y > 4  = 0
  | abs x == 4 || abs y == 4 = 1
  | x ==  2 && y <=  0       = 1
  | x ==  3 && y <=  0       = 3
  | x >= -2 && y ==  0       = 4
  | otherwise                = 2

with

maze :: Integer -> Integer -> Integer
maze x y
  | abs x > 4 || abs y > 4 = 0
  | abs x == 4 || abs y == 4 = 1
  | x == 2 && y <= 0 = 1
  | x == 3 && y <= 0 = 3
  | x >= -2 && y == 0 = 4
  | otherwise = 2

The former is a quick to grasp specification, the latter (the output of hindent at the moment) is a desert of numbers and operators. I see two ways forward: Tools like hindent get improved to the point that they are able to detect such patterns, and indent it properly (which would be great, but very tricky, and probably never complete), or we give the user a way to indicate intentional alignment in a non-obtrusive way that gets detected and preserved by the tool. What could such ways be? For guards, it could simply detect that within one function definition, there are multiple | on the same column, and keep them aligned.
More generally, one could take the approach lhs2TeX takes (which, IMHO, with careful input, a proportional font and with the great polytable LaTeX backend, produces the most pleasing code listings). There, two spaces or more indicate an alignment point, and if two such alignment points are in the same column, their alignment is preserved – even if there are lines in between! With the latter approach, the code up there would be written

maze :: Integer -> Integer -> Integer
maze x y
  | abs x > 4  || abs y > 4  = 0
  | abs x == 4 || abs y == 4 = 1
  | x ==  2 && y <=  0       = 1
  | x ==  3 && y <=  0       = 3
  | x >= -2 && y ==  0       = 4
  | otherwise                = 2

And now the intended alignment is explicit. (This post is cross-posted on reddit.) Update (2016-09-05): Shortly after this post, the Haskell formatter brittany was released, which supports vertical alignment. Yay![...]Tue, 05 Jul 2016 19:40:11 +0200
There is this long-running workshop series Haskell in Leipzig, which is a meeting of all kinds of Haskell-interested folks (beginners, experts, developers, scientists), and for this year’s instance, HaL 2016, I have the honour of being the program committee chair.
The original deadline passed last week, and after looking through the submissions it became clear that although the quality was good, the quantity was still lacking. I therefore extended the submission deadline by two weeks, until July 15th.
So if you have something worth talking about, please do submit a proposal^{1} and come to Leipzig!
Why should you submit here? Because it gives you a platform to talk about your ideas to an audience that usually does not consist just of your fellow speakers, as is often the case with purely academic workshops, but of “real” listeners of various kinds. And it is a fun event.
And why should you come to Leipzig? Because of all the interesting talks and tutorials! Of course I cannot say a lot here yet, besides that our invited speaker Alejandro Russo from Chalmers and Gothenburg University will present his work on information-flow control in Haskell (i.e., SecLib, LIO, MAC, HLIO).
And if you want to save me from sleepless nights, submit the first version a few days before the deadline…↩