Xmonad and KDE on Kubuntu Feisty

I've not been spending nearly as much time as I'd like to on #haskell recently, so I almost managed to completely miss the xmonad phenomenon. Luckily for me, dons was good enough to write about it, so I haven't missed out completely.

Last year, I spent a few months running wmii - which was fantastic. Since then I've been playing around with Gnome and KDE, which have advantages like konqueror (as a file manager - I prefer FF as a browser), amarok and knetworkmanager. I have been kinda missing the dynamic window management model though - so obviously I wanted to try out xmonad.




"Thinking in types" or "The mind of a programmer"

In a previous post, pozorvlak and I wondered about the differences between the thought processes that go into writing good static code, and those that go into good dynamic code. We figured that there wasn't a lot out there to help dynamic programmers get the hang of static style thinking, so what follows is a simple little toy example, solved in what I think is probably a fairly static typey style.

This is not a beginner's haskell tutorial - I'm going to assume that you know about lists, higher order functions and fold in particular. This is for people who understand functional programming, but are frustrated by strongly statically typed functional programming. Actually, if there's anyone other than pozorvlak reading this who's in that position, I'd appreciate a comment - I've no idea how many of you exist, or if this is the sort of thing that helps.

If you've been coding in the same paradigm (scripts, OO, whatever) for a long time, then I imagine this thought process will seem like an awful lot of effort for very little result. I don't think it's a lot of effort though - if I hadn't made the decision to self-analyse while I was writing this little function, I'd have done it in a few seconds without thinking about it. I think that all programmers do a comparable amount of work whenever they write a little function or procedure like this, but it's all subconscious, and we rarely talk or even think about it.
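As a taste of the sort of thing I mean (a toy of my own choosing, not the example from this post): start from the type you want, let foldr's type dictate the shapes of the pieces, and the body almost writes itself.

```haskell
-- We want: count :: (a -> Bool) -> [a] -> Int
-- foldr :: (a -> b -> b) -> b -> [a] -> b, so with b = Int the step
-- function must have type a -> Int -> Int, and the base case is an Int.
-- Only once those shapes are fixed do we think about behaviour.
count :: (a -> Bool) -> [a] -> Int
count p = foldr step 0
  where step x acc = if p x then 1 + acc else acc

main :: IO ()
main = print (count even [1 .. 10 :: Int])
```

The point isn't the function itself - it's that the step function's type was pinned down before any thought about what it should do.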

I'd be interested to see how other people solve similarly simple problems. I'm not interested in the solution itself, I'm interested in the thoughts that lead you to that solution. How do the rest of you do it?

Development With Types

Lots of folk seem to agree that one of the nice things about having a blog is that you can use it as a public whiteboard, to help order your thoughts on any given subject. I don't post all that often myself, but this isn't because my thoughts are already ordered. It's because I've long been in the habit of using a different technology for ordering my thoughts - and when it comes to the sort of geeky code philosophy that might be most at home here, that technology is often pozorvlak.

Every so often, he'll encourage me to blog something from one of our email conversations and invite comment - and I should do so far more often than I do. I don't think he'd object to my describing him, for the purposes of this post, as a "recovering perl hacker". That is to say that he's fairly happy with the sensibilities of the perl hacker, and using them to produce Stuff. He's also aware that there are other ways of doing things, and wants to understand them. One of these ways is embodied by the Haskell language and community. The relationship between the language, the community and the development style is a frequent topic of our emails. I'll open, then, by quoting a question he asked in a recent email:

...this led me to a question that I've been wondering about for a while: various Haskell bods (Cale, Duncan, YAHT, etc) claim that Haskell code, once it compiles, is typically bug-free. YAHT says "most of the code I've written is not buggy, as most of the common sources of bugs in other languages simply don't exist in Haskell."

This is not at all my experience. I haven't measured it properly or anything, but it feels like my Haskell code is approximately as buggy as my Perl code, with the added downsides that it takes a lot longer to get to the stage where you can test it, and a lot longer to find the bugs once you can :-( [Must get round to trying Hat.] But things like off-by-one errors, infinite loops, and good old "not understanding the problem properly and writing the wrong thing" are all possible in Haskell, as I've discovered to my cost.

There are two ways I can see to interpret their statements:

  1. They're talking about things like memory leaks, null pointer bugs, bugs caused by looping over array indices, etc, that don't exist in Perl either, but do exist in C.

  2. They're using l33t guarantee-oriented sk1llz to prove correctness of their code using the type system.

Do you know which one it is?

Now, I certainly think there's a fair bit of (1) that goes on (for example, using the Maybe monad to solve the strings problem), but I'm interested here in what pozorvlak perceives as (2). In short, I don't think anyone's seriously going round proving all their code, using the type system or otherwise - but I do think he may have spotted a difference in the typical approach to a problem between average perl and haskell advocates.
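To make (1) concrete, here's a minimal sketch of the Maybe point (the phone book and names are invented for illustration): Prelude's lookup returns a Maybe, so the type system forces every caller to handle the missing case up front - the case a null pointer would silently defer to runtime.

```haskell
-- lookup :: Eq a => a -> [(a, b)] -> Maybe b
-- The Maybe in the result type means the compiler rejects any caller
-- that forgets the "not found" case - no null pointer to dereference.
phoneBook :: [(String, String)]
phoneBook = [("alice", "555-0100"), ("bob", "555-0199")]

describe :: String -> String
describe name = case lookup name phoneBook of
    Nothing     -> name ++ ": not listed"
    Just number -> name ++ ": " ++ number

main :: IO ()
main = mapM_ (putStrLn . describe) ["alice", "eve"]
```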

What follows is pretty well taken straight out of the reply I emailed back. In it, I sort of accidentally started using the word "Duncan" to mean "a mythical perfect haskell programmer". I hope he doesn't object.

Do you remember that Dijkstra rant we both read some time ago?

It included such gems as:

The nice thing of this simple change of vocabulary is that it has such a profound effect: while, before, a program with only one bug used to be "almost correct", afterwards a program with an error is just "wrong" (because in error).


Right from the beginning, and all through the course, we stress that the programmer's task is not just to write down a program, but that his main task is to give a formal proof that the program he proposes meets the equally formal functional specification. ... we see to it that the programming language in question has not been implemented on campus so that students are protected from the temptation to test their programs.

I think this might be an indication that Dijkstra writes assembly the way that Duncan writes haskell - and vice versa... I'm not sure - but I think there might be something in the notion.

Let's think about the two... In fact - I guess I'll start by thinking about Dijkstra, and wind back around to haskell in a second... When we read that rant for the first time (and exchanged emails about it), the impression I eventually came away with was that Dijkstra had an interesting idea, so far as it went. He would probably be able to produce a generation of students who could each take a formal specification of a problem (or a similarly clear exam-question style description of a problem), and produce correct code which satisfies it. This is a useful skill, but is only part of the job of the programmer, who also has to come up with that specification in the first place.

You mention three classes of bugs (off by one errors, infinite loops, and not really understanding the problem) in your email - it's the third of these that I wasn't convinced that Dijkstra's methods would deal with for us.

I am fairly convinced that the first two can be eliminated this way. I have an excellent book on my desk - a hangover from the Oxford course - called "Programming: The Derivation of Algorithms" by Anne Kaldewaij. It shows how to take a certain sort of specification and, almost entirely mechanically, derive the programme from it. There are points of choice - of creativity - along the way, but they're limited to high level things. Low level details like off-by-one errors are eliminated by the system, so long as you're able to follow it (it can't guard against typos). Termination proofs are part of the thing too, so you're not going to get infinite loops either.

I never actually write procedural code this way - it takes far too long. But I can see that a mechanical method exists which will eliminate those kinds of silly mistakes, and I guess I nod towards it when I write complex things (which isn't very often).

I'm willing to take it on faith that Dijkstra could produce coders who could code in a way similar to that - which eliminated those sorts of bugs - and he could probably have them turn out code at a fairly professional rate. I think many of these sorts of problems can be eliminated by certain sorts of high level language. I think the memory errors you mentioned, and the strings problem I mentioned, are also in this class of errors. The difference between the solutions to those problems that we mentioned and the solution that Dijkstra proposes is that ours is packaged up - the proof of the java garbage collector is in the mind of some Sun engineer somewhere - and java programmers don't have to care. There doesn't happen to be a packaged up proof/method for the off by one error you came across, so the error caught you out. The advantage that Dijkstra's students would have is that they'd be versed in that style from the beginning - they'd be able to use the garbage collector the same as you, but when they found a gap in the system (the space for an off-by-one bug), they'd be able to fill it in, and avoid the bug.

So, for that class of what I'll call "silly mistake" bugs, I think that haskell's probably better than perl, which is better than C, which is better than assembly. None is perfect, because none of them produces proofs automatically, but someone versed in that particular skill can fill in those gaps.

The interesting class of bugs is the "not understanding the problem" class. Having a specification eliminates this problem, so we can think of it as "producing a specification" or "describing the problem" or something like that. I know that Dijkstra has written big reliable code, so it seems to me that he has a solution to this problem, even if he doesn't bang on about it so much as he does about eliminating the "silly mistakes".

So, back to the point in hand - haskell and Duncan's programming style. I suspect that both Duncan and Dijkstra may have hit upon a similar solution to the specification problem, namely, to explore the problem in a different intellectual space from the runtime-inspired one that imperative programmers (apart from Dijkstra) tend to use. A portion of that space may well be type-system inspired for Duncan. It may be quite a large portion, but I don't think that's all of it. I suspect that when he's writing a programme, he thinks about what he wants it to guarantee first (as distinct from what he wants it to do - as you spotted), then tries to express a portion of that as a series of function types... The programme isn't even in his head yet. The specification (probably part-formed at this stage) is in his head, and the silhouette of the specification on the type system is on the screen. Then, the interesting bit - the agile methods style cycle of prototyping, testing and refining the spec - can happen using the type system and Duncan's mind. He's practised at knowing what can and can't be easily expressed in the type system, so he doesn't waste time trying to express things that aren't easily expressible - but there's enough of a silhouette for him to get a tangible grip on the programme. Dijkstra, of course, doesn't need this - he's a maths god, and can do this stage all in his head.

Once the types of his functions are all written down, I'd guess that the spec is clear enough to him that he can spend all his effort on eliminating the "silly mistake" type bugs. He still hasn't written any code, but he already knows what the shadow of the code looks like, so he's free to concentrate on the detail as he refines the specification in his head to code on the screen. If he's chosen his types carefully, many silly mistakes will be eliminated by the type system. And since he's been thinking about the relationship between the spec and the types he's written down, it'll be more obvious to him what guarantees he needs that the type system can't give him - so he'll be watching for those mistakes as he writes the function bodies, and possibly coming up with other abstractions to help guard against them on the way.
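As a hedged miniature of what that might look like on screen (the problem and all the names here are mine, projected onto the mythical Duncan): the signatures go down first as a silhouette of the spec - they typecheck with every body stubbed as undefined - and only then do the bodies get refined in.

```haskell
import Data.Char (isAlpha, toLower)
import Data.List (group, sort)

-- Step 1: silhouette the spec as types. These compile with every body
-- stubbed as 'undefined', before any real code exists.
tokenize  :: String -> [String]
frequency :: [String] -> [(String, Int)]
report    :: String -> [(String, Int)]

-- Step 2: refine each stub into a body that the types already constrain.
tokenize  = words . map (\c -> if isAlpha c then toLower c else ' ')
frequency = map (\ws -> (head ws, length ws)) . group . sort
report    = frequency . tokenize

main :: IO ()
main = print (report "the cat and the hat")
```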

So, I guess an advantage of this kind of method is that it separates the problem of understanding the problem from the problem of silly mistakes in implementation... Tackle the first wholeheartedly, then you're free to give your full attention to as much of the second as can't already be done mechanically.

Does that make any sense? It's kind of a stream of thought, and I'm probably projecting a whole load of random junk onto Duncan's image - but it seems plausible to me that such a theoretical Duncan might exist.

Perhaps I/we should send him a copy of this, and ask how close it comes to what he does ;)

Techno trousers!

I'm amazed I haven't been following this for some time. I mean, how could I fail to notice someone building working strength-enhancing robot exo-suits? But no - the first I heard about it (about this team - I knew vaguely about some US folk having battery issues) was when I saw the BBC video yesterday.

It's a whole body integrated suit, with a battery life of about 2 hours and 40 minutes. It uses skin patches to listen for nerve impulses, and follows your intended movements. It doesn't look all that bulky, and it seems pretty strong (the prof says in the video it can make you "like 5 times stronger"). The researchers who came up with it are these guys, and they have an ominously named spin-off company to market the thing :)

Looks to me like a much cooler alternative to walking with a stick for the arthritic.

Editor agnosticism in haskell examples

pozorvlak recently discovered a bit of a gotcha in writing haskell examples:

If I'm in emacs, and I write this:
tick = do n <- get; put (n+1); return n;

on three lines, it comes out like this:
tick = do n <- get
          put (n+1)
          return n

This is because emacs has a particularly smart indent mode - it notices the "do", and realises that I want the next line to line up with the first word after the "do" on the previous line.

Vim doesn't do this. If I write it (using almost exactly the same keypresses) in vim, I get this:

tick = do n <- get
        put (n+1)
        return n

That's because vim doesn't understand the "do". It just knows that I've hit the tab key, so I want it more indented than it was before. Note that there are no literal tabs in there (which I think we're probably all agreed would be bad) - it's just that there aren't the right number of spaces. The "put" doesn't line up with the "n <-", so haskell thinks that the "n <-" was the last statement in the do block. Which is illegal, so it won't compile.

This could be particularly frustrating for a beginner (who might be using notepad for all we know), trying to copy a worked example from a web page.

If anyone's out there writing docs for haskell libraries, or tutorials, or whatever, I would like to suggest the following form:

tick = do 
        n <- get
        put (n+1)
        return n

Now it doesn't matter how much you indent your do block - it's obvious to all concerned that the "n <-" should line up with the "put" - so it'll compile first time - whichever text editor you type it in :)

Of course, it may be that all the experienced haskellers out there are already doing this - but I didn't know about it, so I figured it couldn't hurt to mention it again.
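For completeness, there's an even more editor-proof form: explicit braces and semicolons switch the layout rule off entirely, so no amount of re-indentation can break the block. (I'm using a plain IO example here rather than the State-monad tick, to keep it self-contained.)

```haskell
-- Explicit braces disable layout inside the do block, so this compiles
-- however your editor chooses to indent the continuation lines:
greet :: IO ()
greet = do { putStrLn "Hello"
           ; putStrLn "World" }

main :: IO ()
main = greet
```

It's uglier, which is presumably why nobody writes it day to day - but for copy-paste examples, ugly and correct beats pretty and broken.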

Quick musing...

We all know and love Lazy Evaluation[1]. Among other things, it allows us to do cool stuff like optimize away benchmarks. It occurred to me today though, that the problem with laziness in life in general is that sometimes you get hit with 12 deadlines at once and wish you'd previously been a bit more eager. So I wonder if anything yet exists which might be called "opportunistic evaluation"?

Such a thing could be thought of as a next logical step for giving lazy evaluation systems a kind of social conscience. A system running a single (potentially) lazy programme might be given a regular update of what resources are up for grabs. When the system's doing nothing else (particularly in a very parallel system if there are lots of free thread slots), it could go hyper-eager, speculatively calculating values which might be required depending on the future flow of the programme. When the system's very busy it could fall back to maximum laziness (reaping the harvest of the eager work that went before) until the spike has passed.

I suppose this is sort of doable (or simulable) if you have many, many small unix progs, piped together (for laziness) and appropriately niced... But even that (increasingly unlikely) scenario doesn't allow for speculative evaluation while (for example) waiting for IO.

At the moment, I'm thinking that the quickest route to such a system would involve some OS integration (either hacked into linux, or possibly built in to some sort of research OS), and some of the tricks of the partial evaluation crowd. But you might know better :) Is there another route? Has someone else already built it? I'd appreciate any links left in the comments :)
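One hedged pointer: GHC's runtime already has something in this spirit. par from GHC.Conc creates a spark - an advisory hint that a value may be worth evaluating on a spare capability - and sparks that find no free capacity are simply dropped, leaving the value lazy. It's not the resource-aware scheme sketched above, but it has the same opportunistic flavour:

```haskell
import GHC.Conc (par)

-- 'par' sparks its first argument: the RTS *may* evaluate it on an
-- idle capability, or may discard the spark if everything is busy,
-- in which case the value is evaluated lazily on demand as usual.
expensive :: Integer
expensive = sum [1 .. 100000]

main :: IO ()
main = do
    let x = expensive
    x `par` putStrLn "getting on with other work"
    print x
```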

[1] I'd also like to be able to hyperlink to two places at once - perhaps your browser would give you a little context menu when you click on the multi-link or something.

How to start a flame war...

A few posts back, I mentioned that I was going to try switching from vim to emacs again (editor switching being something I do on a regular basis for a variety of reasons), so as to have a go at a new collaborative editing gadget.

Well, I haven't managed that yet, but I have finally got the latest gnu emacs cvs snapshot running at college, and discovered another cool thing I'd previously been unaware of:

preview-latex mode, in combination with AUCTeX, is making mathematical document generation so much easier for me now than when I was using vim - it's unbelievable. Recently, I've been generally of the opinion that the two editors were pretty well as functional as one another (at least if you allow vim to delegate elisp-like stuff to the shell, and hence your scripting language of choice), but I'd not seen anything like this in vim... Is there actually a feature difference in these editors after all?

Well, not really - vim can't do the funky previews, but it does do a whole load of other cool stuff that I hadn't noticed, and wasn't using. It looks scarily like emacs and vim are actually constructively competing with each other - which is nice :)

The moral for me, I guess, is that I should take even more time to read about cool things that might enhance my productivity - because they're appearing at a rate of knots ;)

I woke up this morning to discover that my name had been mentioned in the #haskell channel while I was asleep - I think of myself as a lurker in there, so this surprised me. Checking my scrollback, it was dons asking the lambdabot when it had last seen me...

13:58 < gds> dons: You "@seen"ed ?
13:58 < dons> gds, ah let me think
13:58 < gds> (morning all ;) )
13:58 < dons> gds, I think it was to ask if you wanted to add your blog to ?

So, yay :) Hello planet.haskell :)

Static typing...

Just a quick note about my last post... This fits somewhere into the great static vs dynamic typing war, and doesn't aim to be comprehensive - it's just one piece of ammo for one side.

Writing this took a lot longer than I expected. At the time, it felt like I was fighting haskell's type system - which is something I've heard a lot of people complain about. I'm interested in the issues around the usefulness of typing, so I paid attention to my thoughts for later reference.

One of the tools I was using to write that little snippet was the lambdabot - it provides a number of useful functions like generating point free code from arbitrary functions, and performing type inference and code evaluation. It's amazing how productive IRC can be as a development environment :)

Back to the point - I was fighting the type system. The error I kept getting was:
    Couldn't match `[[Char]] -> [[Char]]' against `IO String'
      Expected type: FilePath -> [[Char]] -> [[Char]]
      Inferred type: FilePath -> IO String
    In the expression: ((hGetContents =<<)) . (flip openFile ReadMode)
    In the first argument of `map', namely
        `(((hGetContents =<<)) . (flip openFile ReadMode))'

This didn't feel very helpful to me - and it'll feel even less helpful to you, since you don't have the code that generated that error. You just have the working code I eventually produced when I figured out what was going on.

I haven't given you the code that produced the error yet, because with the correct code to hand, the error in the broken stuff is obvious. And the point I want to make is that it's a real error - I'd misunderstood what one of the functions I wanted to use did. Not a complex, poorly documented function, but a pair of well written, basic functions that I'm very familiar with. It was basically a thinko, I guess - but one that took me a while to spot. Luckily for me, the type checker spotted it before I even tried to run the code. I suspect it's arguable whether a dynamic system with a bit of testing would have produced a more readable error message, but in this instance, I suspect not.

I guess I'll tell you the error now. In the working code, I do a foldr using the function liftM2 (++) - the error was caused by my thinking that (=<<) (++) meant the same thing. Of course, I didn't know about the function liftM2 at the time (and how I discovered it is an excellent advert for the powers of the lambdabot, but is also, unfortunately, beyond the scope at the moment) - but that was my error. I thought that (=<<) (++) did what liftM2 (++) actually does.

Why did I think such a silly thing in the first place? Well, I'd been prototyping parts of my function using the lambdabot. I'd done the following in IRC:
14:55        gds  : > foldr ((=<<) (++)) "" [return "Hello ", return "World"]
14:55  lambdabot  :  "Hello World"

Fantastic! It does exactly what I want :)

Actually, it wasn't doing exactly what I wanted - I'd failed to specify that the list should be a list of IO String. The "return" statements and the literal strings allowed the interpreter to infer that they should be values of type M String for some monad M, but in this instance they could be in any monad at all. The monad it found, and which my erroneous function could deal with, was the Reader monad.
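To put the two candidates side by side (this is my after-the-fact reconstruction, not the original lambdabot session): with no annotation, GHC is free to infer the reader monad for the buggy fold, while liftM2 (++) genuinely works in any monad, IO included.

```haskell
import Control.Monad (liftM2)

-- This only typechecks because the list elements are inferred to live
-- in the reader monad ((->) String) - which is why the lambdabot
-- "test" appeared to pass:
readerVersion :: String
readerVersion = foldr ((=<<) (++)) "" [return "Hello ", return "World"]

-- The intended combinator lifts (++) into *any* monad, IO included:
ioVersion :: IO String
ioVersion = foldr (liftM2 (++)) (return "") [return "Hello ", return "World"]

main :: IO ()
main = do
    putStrLn readerVersion
    putStrLn =<< ioVersion
```

Try giving readerVersion the type IO String instead, and you get essentially the "Couldn't match" error quoted above.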

In my safe little testing environment, the bug could be worked around by a dynamic system choosing to evaluate the code using the Reader Monad. But once the code was deployed, it would be forced into using the IO Monad. If my code hadn't been statically checked, I believe it would have died at runtime with an error approximating to "IO does not equal Reader". I think that this is about as useful as the error message I did get - except it could well have been too late.

So, all that time I thought I was fighting the type system, it turns out I was actually fixing a real runtime bug. How about that?

Like I said up at the top, this isn't comprehensive or balanced. This is a single anecdote out of the uncountable number out there (some unknowable percentage of which are for each side), so you have to weight it according to your own prejudices and utility functions and see if it affects your existing working conclusion.

For me, though - it was an experience, not an anecdote. One in which a static typechecker actively stopped me from delivering (in this case to an audience rather than a client) broken code.

Haskell for reformed perlmongers

Do you miss being able to type while (<>) { ... } ? If so, just slap the following in your local handyfunctions library:

import System
import IO
import Monad

perlGetContents :: IO String
perlGetContents = do 
    args <- getArgs
    if (args == []) then getContents
        else foldr (liftM2 (++)) (return "") $ 
             map ((hGetContents =<<) . flip openFile ReadMode) args

Now you can do things like:
echo "main = putStr =<< perlGetContents" > cat.hs


UPDATE: Added a linebreak for all those narrow people out there ;)