Thursday, October 16, 2014

Extending the behavior of XMonad Layouts

Introduction

This article is about three things I'm very interested in. I've been a fan of using real programming languages for configuration files for a long time, but haven't written about that recently. I've been using tiling window managers - now in their dynamic version - for a long time as well, and have written about that. Finally, I've been a fan of Haskell for a while, and have written a number of articles about using it.

XMonad is a dynamic, tiling window manager written in Haskell that uses a Haskell module as a configuration file. This has the usual advantages of doing so - you can put values in variables rather than repeating them, and construct new values with Haskell expressions, etc.

One of the features of XMonad is a Layout, which controls how windows are tiled on the screen. The core of XMonad provides some basic - but very useful - Layouts, and there are extensions to do things like creating tabbed stacks of windows, nesting Layouts, etc.

Since Layouts control how windows are arranged, they are critical components, and changing them is how you change your window manager's behavior. I'm going to look at extending the behavior of one of the core Layouts - Tall - in a number of ways.

And a credit. The code here was inspired by Devin Mullins, who provided information and code samples while helping me with my XMonad configuration.

LayoutClass

A Layout needs to be an instance of the LayoutClass type class. As such, a Layout needs to do three things: run the layout, handle Messages from the window manager, and optionally provide a description. You can find details on that in the API documentation.

Different description

description is "a human-readable string used for selecting Layouts." Some tools display them for selection, others use descriptions to select Layouts programmatically, say from a list of strings in the configuration. These different uses give rise to different needs, so we'll start by just changing it. This would allow us to have two different Tall layouts, and tell the difference between them.

First, we need to declare our data type:

data MyTall a = MyTall (Tall a) deriving (Show, Read)

Now we need to make this an instance of LayoutClass. Running the layout will be forwarded to the wrapped Tall via the runLayout method, and Messages will be forwarded the same way via the pureMessage method. I'll get into the details of those later.

instance LayoutClass MyTall a where
  runLayout (W.Workspace id (MyTall tall) ms) r =
    fmap (second (fmap MyTall)) $ runLayout (W.Workspace id tall ms) r

  pureMessage (MyTall tall) m = fmap MyTall $ pureMessage tall m

And the critical part is to change the description:

  description _ = "MyTall"

So now we can create two different Tall Layouts with different names: one is a regular Tall layout, and the other a MyTall layout. Exactly how you do that will depend on your XMonad config file, but you would just add a Tall layout and wrap it in a MyTall like one of these examples:

MyTall (Tall ...)
MyTall $ Tall ...
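
For example, in a config whose layoutHook follows the stock shape (the nmaster, delta and ratio values below are just the usual defaults, not something this article prescribes), the wrapper slots in alongside the original:

myLayoutHook = tiled ||| MyTall tiled ||| Full
  where
    tiled   = Tall nmaster delta ratio
    nmaster = 1       -- windows in the master pane
    delta   = 3/100   -- resize step
    ratio   = 1/2     -- proportion of the screen used by the master pane

The first and second entries tile identically; they differ only in their description, which is exactly the point.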

Instead of just displaying the name, we could display information from the Layout. One of the features of Tall is a master pane, which holds a programmable number of client windows - typically the one or two you're working on now, with other windows dynamically sized in a second pane. The wrapped Tall has the format Tall n delta frac, where n is the number of clients in the master pane. We can put that count in the description like so:

  description (MyTall (Tall n _ _)) = "Tall " ++ show n

We can still have two Tall Layouts with different names, but now the second one is distinguished by having the number of client windows in the master pane displayed.

More Messages

Commands for a Layout are described by the Message type class. Tall has two messages, one to change the size of the master pane, and one to change the number of windows in it. I tend to use either one or two windows in the master pane, and would like the ability to toggle between those two states.

Toggling the master pane

So we'll create a new Message, ToggleMaster to toggle the number of clients in the master pane:

data ToggleMaster = ToggleMaster deriving Typeable
instance Message ToggleMaster

Now, we need to change the existing pureMessage method to handle this Message. Let's dissect the current version first:

pureMessage (MyTall tall) m = fmap MyTall $ pureMessage tall m

pureMessage gets a MyTall Layout and a SomeMessage, m. It returns a Maybe (layout a). Forwarding the message is easy - we just call pureMessage on the wrapped Tall, extracted by pattern matching in the function. The returned Maybe Tall needs to be rewrapped to a Maybe MyTall. fmap MyTall does that for us.

To handle the Message ourselves, we need to get the actual message from the SomeMessage, which fromMessage will do for us. If that returns Just ToggleMaster, then we want to handle this Message. Otherwise, it will return Nothing, and we pass the message as before. So far we have:

  pureMessage (MyTall tall) m =
    case fromMessage m of
      Nothing -> fmap MyTall $ pureMessage tall m
      Just ToggleMaster -> undefined

To handle the ToggleMaster message, we need to return a MyTall Tall where Tall has the new number of client windows we want in the master pane:

  pureMessage (MyTall tall@(Tall n delta frac)) m =
    case fromMessage m of
      Nothing -> fmap MyTall $ pureMessage tall m
      Just ToggleMaster -> Just . MyTall $ Tall new delta frac
        where new = if n /= 1 then 1 else 2

This uses pattern matching to get the values in the Tall. When we get a ToggleMaster Message, we compute the new value with if n /= 1 then 1 else 2. While I usually toggle between 1 and 2 clients, this handles all other cases by going back to 1 as well. To finish, we create a new Tall and wrap it with Just . MyTall.

Binding ToggleMaster

We can now bind that in our configuration with:

    , ((modm              , xK_slash), sendMessage ToggleMaster)

This uses Mod-slash to toggle the master window, which seems to work well with Mod-comma and Mod-period, the defaults for incrementing and decrementing the number of clients in the master pane.

Target Toggles

If you also used three client windows regularly, you might want a separate toggle for that. We're going to do that in two different ways.

Extending ToggleMaster

First, we can simply give ToggleMaster an argument and a name change to match:

data ToggleMasterN = ToggleMasterN !Int deriving Typeable
instance Message ToggleMasterN

And the changed bindings, including using backslash for the 3-way split:

    , ((modm              , xK_slash), sendMessage $ ToggleMasterN 2)

    -- Toggle the master window split 3-way.
    , ((modm              , xK_backslash), sendMessage $ ToggleMasterN 3)

And then we change the handling in pureMessage to use that argument instead of 2:

      Just (ToggleMasterN i) -> Just . MyTall $ Tall new delta frac
        where new = if n /= 1 then 1 else i

Splits as well.

But suppose we wanted a command that always set the master split to a given count, no matter what it currently was? Let's call it SetMasterN, and the code to handle it is pretty simple:

      Just (SetMasterN new) -> Just . MyTall $ Tall new delta frac

I'll omit the bindings. The new Message is similar to ToggleMasterN:

data SetMasterN = SetMasterN !Int deriving Typeable
instance Message SetMasterN

So all we have to do is get the code for SetMasterN to be run in pureMessage. We're going to refactor pureMessage a bit to do that:

  pureMessage (MyTall tall@(Tall n delta frac)) m =
    msum [fmap MyTall $ pureMessage tall m,
          fmap toggle  (fromMessage m),
          fmap set     (fromMessage m)]  
    where toggle ToggleMaster  = MyTall $ Tall new delta frac
                                 where new = if n /= 1 then 1 else 2
          set (SetMasterN new) = MyTall $ Tall new delta frac

Each element of the list passed to msum handles a different set of messages. The first element passes all of them to Tall, which returns Nothing for any message it doesn't handle. Each element after that passes the appropriate messages to the function that handles them, or is Nothing. msum then returns the first Just value in the list (or Nothing if there are none).
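
A quick illustration of that behaviour with Maybe in GHCi (msum lives in Control.Monad, so that import may also be needed in the layout module):

ghci> import Control.Monad (msum)
ghci> msum [Nothing, Just 2, Just 3]
Just 2
ghci> msum [Nothing, Nothing] :: Maybe Int
Nothing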

handleMessage

As a final note, you can access the X monad values if you use handleMessage instead of pureMessage. This lets you access the XConf and XState values via ask and get.

Its type is

handleMessage :: layout a -> SomeMessage -> X (Maybe (layout a))

so it requires another level of fmap calls to work with the values in the Layout. Details can be found in the API documentation.
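
A minimal sketch of what pure forwarding looks like in that setting (this isn't from the original article, just the same wrapping trick lifted into X):

  -- Forwarding handleMessage for MyTall: one fmap for the X monad,
  -- a second for the Maybe inside it, then rewrap with MyTall.
  handleMessage (MyTall tall) m = fmap (fmap MyTall) (handleMessage tall m)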

Extending the Layout

The last set of functions in a LayoutClass are the ones that generate the rectangles that windows wind up in. At this point, you're really past extending the Layout, and are writing a new one. But I'm going to look at a simple case anyway.

The oddity of 0

One of the odder behaviors of the default Tall Layout is that putting all the windows in the master pane and putting none of the windows in the master pane gets the same layout. Both wind up with all windows stretching the width of the screen. It's just that in one, they're in the master pane, and in the other they're in the other pane.

The difference between them is that you can keep increasing the master pane count until you hit maxBound, but once you decrease it to 0, it won't go any lower. So you can use IncMasterN minBound to get to 0 from any value. Once you've got SetMasterN or ToggleMaster, you can use those instead, and use IncMasterN maxBound to get to the layout that was at 0. Which means 0 can be used for something else.
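
If you want those jumps on keys, bindings along these lines would do it (the key choices here are mine, purely for illustration):

    -- Jump straight to the 0-master arrangement.
    , ((modm              , xK_0), sendMessage $ IncMasterN minBound)
    -- And back to a single client in the master pane, using SetMasterN from earlier.
    , ((modm .|. shiftMask, xK_0), sendMessage $ SetMasterN 1)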

A full screen mode

Given the above, we can extend Tall so that putting 0 windows in the master pane makes the first window a full screen window. While I think that this makes as much sense as the current behavior, it makes life a bit difficult if the only message you have is IncMasterN.

The layout functions are passed a Rectangle, and a Stack of windows, and should return a list of tuples of (window, Rectangle). The simplest of the layout functions is pureLayout, which does just that. The value we want to return to get a full screen window is a list with a single tuple consisting of the first window and the initial rectangle:

{-# LANGUAGE TupleSections #-}  -- the (, r) section below needs this extension

import XMonad.StackSet as W

pureLayout _ r s = map (, r) . take 1 $ W.integrate s

The function W.integrate returns a list of the windows in the Stack. While this pureLayout should never be called with an empty list of windows - there's a method specifically for handling that - using map and take instead of [(head $ W.integrate s, r)] ensures that we don't generate an exception should that happen. Instead, we return an empty list, which is the default behavior.

Forwarding the non-zero cases

We want the above code to run when the master pane client count is 0, and otherwise we'll let Tall handle it. That's done like so:

pureLayout (MyTall tall@(Tall n _ _)) r s =
  if n /= 0 then pureLayout tall r s
  else map (, r) . take 1 $ W.integrate s

Forwarding simply unwraps the Tall Layout and passes that and the non-Layout arguments along to the Tall method.

And the rest

pureLayout is the simplest of the layout functions. If your layout function isn't quite so pure, you can use doLayout. doLayout returns a value in the X monad, so you can access the XState and XConf values. The value it returns is a tuple consisting of the list returned by pureLayout and a Maybe Layout of the same type that was passed in. The Maybe Layout allows the Layout to be modified, à la the Message handling examples.

While those two are normally sufficient, there's also emptyLayout and runLayout. Both of these return the same type as doLayout. The default implementation of runLayout calls emptyLayout if there are no windows and doLayout otherwise. You should only need these if you want special handling for the case where there are no windows. Details can be found in the API documentation.
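
For reference, the relevant signatures (as they appear in the xmonad core at the time of writing - check the API documentation for your version) are roughly:

doLayout    :: layout a -> Rectangle -> Stack a -> X ([(a, Rectangle)], Maybe (layout a))
emptyLayout :: layout a -> Rectangle -> X ([(a, Rectangle)], Maybe (layout a))
runLayout   :: Workspace WorkspaceId (layout a) a -> Rectangle -> X ([(a, Rectangle)], Maybe (layout a))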

Monday, May 12, 2014

Web apps that write like console apps

My history with the web

When the web first showed up, I was delighted. Here was a tool I could use to release cross-platform apps by releasing one app. Since I was working for a multi-platform software vendor, this was great - we had people with Macs, Windows machines, and most of the available Unix workstations. Now I could write an app once and they could all use it.
So I started automating some of the things we hadn't done before because we couldn't reach the entire audience or afford to alienate those we couldn't reach. Write an HTML page or two, the code to process the input, and then write out the results, and we're done. All fun, easy and productive.
Then something evil happened. Web templates. Suddenly, it wasn't about writing code any more. It was about writing templates, then writing code fragments to plug values into the holes. Worse yet, most template systems broke the better web text authoring tools, at least until those tools were taught about that template language. They had the same effect on web text processing tools. Writing for the web was no longer fun, easy or productive. So I stopped.
And every time I've looked at web application tools since, it seems there's been another level of complications added to paper over the problems with template systems. Routes. Adapters. Messy config files. A simple app might have more text in config files than in code. And this is seriously considered a good thing?

Application types

Console applications aren't necessarily easy to write. But the logic at least flows through them in a straightforward way. You evaluate expressions, and some of those trigger user interactions. With a web template, you're never sure when the fragments that plug things in will get evaluated. Unless, of course, the web template system makes guarantees about that. Most don't. This makes the code fragile. Again, not fun, easy or productive.
Of course, a typical graphical desktop application has many of the same problems. You provide a user interface, and then connect code fragments to it that interface elements cause to run. It's a bit more predictable than the web interface, because the fragments are controlled by UI elements instead of plugging into a template. But it's still more painful than a console application.

MFlow

I recently ran into MFlow, and for the first time in a long time found myself wanting to write a web application. MFlow makes web applications write like console applications - you evaluate expressions that trigger user interactions. Except they happen in the browser, not a terminal window.
MFlow leverages Haskell's do syntax to make the web interactions happen at the right time. In particular, what's been called the "programmable semicolon" nature of that syntax.

Example application

To show how this works, I wanted to use a simple application, so I chose a very simple game. It's known by a number of names, but the rules are easy. You start with a pile of matches, and alternate turns taking either one or two matches. The player that takes the last match wins. Which means a game - its complete state - can be represented by a single integer.

Not production quality

Note that this code is not production quality code. I've left out any kind of error checking that would obscure the code, haven't done anything to make it pretty, and in general kept it as short and simple as possible. I have tried to keep it idiomatic, though.

The game

The code below extends the Game state to have Lost/Won/Illegal indicators, the latter used when someone makes an illegal move. The functions update a game with a move, find the computer's next move, provide an English description of a move, and of course tie those together to handle everything that happens between the human player making a move and being prompted for their next move. All completely independent of any actual interface code.

module Game (Game (..), move, prompt) where

data Game = Illegal | Won | Lost | Game Int deriving (Show)
type Move = Int

-- Create a prompt for the current game and message.
prompt :: String -> Int -> String
prompt m l = m ++ "There are " ++ show l ++ " matches. How many do you take? "

-- Given a game and a move, provide a description for the move.
describeMove :: Game -> Move -> String
describeMove g m =
    case g of
        Won      -> "You won. "
        Lost     -> "I took " ++ show m ++ " and you lost. "
        Illegal  -> "You can only take 1 or 2 matches. Taking the last match wins the game. "
        (Game _) -> "I took " ++ show m ++ ". "

-- Given a game and a move, return the Game resulting from the Move.
makeMove :: Game -> Move -> Game
makeMove g@(Game l) m | m /= 1 && m /= 2 = Illegal
                      | m >= l           = Won
                      | otherwise        = Game $ l - m

-- Given a Game, find the best move for it
findMove :: Game -> Move
findMove (Game l) | l <= 2       = l
                  | rem l 3 /= 0 = rem l 3
                  | otherwise    = 1

-- Given a game and player move, calculate computer move and
-- return (message, new game)
move :: Game -> Move -> (Game, String)
move g m = case makeMove g m of
               Illegal -> (g, describeMove Illegal 1)
               Won     -> (Won, describeMove Won 1)
               Lost    -> (undefined, "Can't happen! ")
               g'      -> let m' = findMove g' in
                              case makeMove g' m' of
                                  Won  -> (Lost, describeMove Lost m')
                                  Lost -> (undefined, "Can't happen! ")
                                  g''  -> (g'', describeMove g'' m')
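
A quick sanity check in GHCi with the module above loaded - each result follows directly from the definitions:

ghci> move (Game 8) 2
(Game 5,"I took 1. ")
ghci> move (Game 3) 1
(Lost,"I took 2 and you lost. ")
ghci> move (Game 8) 3
(Game 8,"You can only take 1 or 2 matches. Taking the last match wins the game. ")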

Console interface

This being a very simple game, the interface is also simple. Just loop printing how many matches are left in the game and then get a move from the user. The code is below, and runnable at the FP Complete School of Haskell:

module Main where

import Game

-- loop that actually plays the game.
play :: Game -> String -> IO String
play g@(Game l) m = do
    putStrLn $ prompt m l
    x <- fmap read getLine
    case move g x of
        g'@(Game _, _) -> uncurry play g'
        (_, m')        -> return m'

-- Main entry: play the game and announce the results
main :: IO ()
main = do
       m <- play (Game 8) "Hello. "
       putStrLn m

The play function has the obvious structure: we print (with putStrLn) the prompt for this move. Then we read a line from the user (with getLine) and use read to convert it to an integer. We run the move function from the Game module to get a message and new game after applying the user and computer moves. If that's still of the form Game n, then the game isn't over, so we loop and do it again. Otherwise, we return the message to main. main is likewise straightforward: run the play function with a greeting to get a string describing the results, then print the results.

Web interface

If you write web apps - especially if you do it in Haskell - you might consider writing up this web app in your favorite framework. If you do it in Haskell, feel free to use my Game module! I haven't done it, because I probably couldn't escape claims of biasing the results. And besides, I'm lazy. If you do this, please provide us with a link to or a copy of your code so others can look at it!
Ok, the web application uses the same basic structure. The code is below, and runnable in the FP Complete School of Haskell.

module Main where

import MFlow.Wai.Blaze.Html.All

import Game

-- loop that actually plays the game.
play :: Game -> String -> View Html IO String
play g@(Game l) m =
    (toHtml (prompt m l) ++> br ++> getInt Nothing <! [("autofocus", "1")])
    `wcallback` \x ->
        case move g x of
            g'@(Game _, _) -> uncurry play g'
            (_, m') -> return m'
-- Main entry: play the game and announce the results
main :: IO ()
main = runNavigation "" . step $ do
    m <- page $ play (Game 20) "Hello. "
    page $ toHtml m ++> wlink () << " Another game?"

The play function looks a lot like play in the console version. The code to output HTML is a bit more complicated, because - well, we've got to produce a lot more text. Wrap the prompt in HTML, provide a br tag. Instead of using getLine and read to get an integer, we use getInt and apply an autofocus attribute to the resulting tag. As the final bit of IO, we use wcallback to extract the integer rather than just extracting it directly, as that will erase the previous contents of the page. On the other hand, the rest of the function - not involving any user interaction - is identical between the two versions.
main is similar. We need a short expression to deal with running on the Web instead of in a console before the do code. The play function result is passed to a page so it runs in a web page. Likewise, the message gets translated to HTML, and we tack on a link to start over before handing that to page to display.

Comparison

While the actual display code is a bit more complicated - we are dealing with a remote display that needs things wrapped in markup - the basic structure is still the same. play prompts the user, reads the result, and then loops or exits. main just invokes play and then displays its result, though the Web version has a link to play again added to it.

Confession

While the code is pretty much idiomatic Haskell as is, I have made one change from what I'd write in order to enhance the similarity. The do in the console version desugars nicely, and I'd probably have used that version main = play (Game 8) "Hello. " >>= putStrLn if I weren't doing the comparison. The web version could be desugared, but would require an explicit lambda or an ugly transformation to point-free style, so I'd leave it as is.

Programmable semicolons?

I mentioned "programmable semicolons". That's one of the characterizations of the do syntax for Haskell. Yes, there are no semicolons in this code. Like most modern languages, Haskell makes them optional at the end of a line.
For the console code, semicolons behave pretty much as you'd expect them to. For the web code, there's a pleasant surprise in store for you. You probably tried entering an illegal move - something not 1 or 2 - at some point, and noticed that you were told the rules of the game, and then prompted again. Both versions behave the same way, and that behavior is explicit in the code.
Did you try entering values that weren't valid integers? Say, "hello"? If not, do so in both now. The console version exits with an obtuse error message. I did tell you that the code wasn't production quality. The web version tells you the value isn't valid, and prompts you again. Part of that is getInt - it will fail to validate if the value you input doesn't read as an Int type. The other is the "programmable semicolon" behavior: if some expression fails, that step of the display gets run again instead of propagating the failure.

Summary

I think this short demonstration illustrates nicely that MFlow allows for writing web applications with the same architecture as console applications, where that is appropriate. While setting up the display takes more work, I don't believe that can be fixed with anything that uses HTML for the display description.
On the other hand, dealing with user input is easier than with a console, because all you get from a console is a string of text, whereas the MFlow framework can process it to a known type, and handles invalid input for you. In fact, if you just use getTextBox in a context where the inferred type is an instance of Read, it will work for any type. Some care must be taken if the inferred type is Maybe a, though, which getInt (and friends) avoids.
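
For example, a sketch of that idea - assuming getTextBox takes the same optional default that getInt does above (I'm inferring the signature from the getInt usage, not quoting the MFlow docs) - where asking for a Double only changes the type:

-- Hypothetical fragment; getTextBox's exact signature is assumed, not quoted.
askRatio :: View Html IO Double
askRatio = toHtml "Ratio? " ++> getTextBox Nothing
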
The bottom line is that it tends to balance out, and writing web apps is once again, easy, fun and productive.

Thursday, May 1, 2014

How pythonic is Haskell?

Introduction

I've recently switched to Haskell from Python, and I think it might be interesting to look at how far from being Pythonic Haskell actually is - how much of a change did I actually make?

Why Haskell?

The reason for changing: having spent half a decade working on large - well, medium-sized these days - concurrent programs, which produce the nastiest bugs I've run into because they are seldom reproducible, I found that the primary source of those bugs was errors in dealing with shared data objects. Getting the locking wrong, or simply failing to lock something that was shared, is often the cause. A number of solutions to the first exist, but the latter seemed to be a language issue. Haskell provided the best solution I found - that a data object may be mutated is part of its type, so failure to properly deal with shared data in a concurrent environment can be caught by the compiler. If that type system sounds interesting, you might want to read this article that covers its features.

The Zen of Python

It's generally agreed upon that Tim Peters (with help from others) captured the philosophy behind Python and its libraries in a document called The Zen of Python. You can read this by running the command python -m this. So I'm going to go through each item, and see how well Haskell adheres to that element of the Python philosophy.

Scored on a scale of 1 (complete fail) to 11 (better than Python), with Python scoring a 10. That the scale goes to 11 tells you exactly how serious I am.

Beautiful is better than ugly.

They say "beauty is in the eye of the beholder, but ugly goes clean to the bone". So I want to delay this one until I've talked a bit more about Haskell to provide a basis for comparison.

Explicit is better than implicit: 9

I'm tempted to rate this as better than Python since mutability is made explicit, but the do statement - with its implicit parameters wrapping statements - detracts a bit. They are just syntactic sugar, and can be translated back to the explicit form in your head, so it's not a major problem. But they are everywhere!
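
For example, this do block and its hand-desugared equivalent are the same program; the sugar just hides the wrapping:

greet :: IO ()
greet = do
  name <- getLine
  putStrLn ("Hello, " ++ name)

-- the same thing with the sugar removed
greet' :: IO ()
greet' = getLine >>= \name -> putStrLn ("Hello, " ++ name)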

Simple is better than complex: 10

Haskell programmers prefer pure code because it is simple. Being lazy is much simpler than yield.

Complex is better than complicated: 11

Monads are very simple things that have very complex effects on the language and programs. They replace a number of things that are complicated in Python.

Flat is better than nested: 11

Haskell's module system is very similar to Python's. At the language level, Haskell provides the let and where constructs, which are a recurring request for Python, because they make managing the top-level namespace easier.

Sparse is better than dense: 10

Split decisions here. While the language encourages short functions - which leads to sparseness - it also encourages long sequences of combinators or filters, which can lead to dense code in the function.

Readability counts.

Also deferred to the next section.

Special cases aren't special enough to break the rules: 11

Haskell has fewer special cases than any other language I've run into. Maybe I just need to look harder?

Although practicality beats purity: 6

See the previous note. They didn't even special case numeric conversions! Meaning you either have to convert integers to a fractional type to involve them in a division, or write functions that are context sensitive to what they return. Ok, the latter isn't hard, but not enough of the builtins do it.
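
For example, dividing two Ints requires spelling the conversion out:

average :: [Int] -> Double
average xs = fromIntegral (sum xs) / fromIntegral (length xs)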

Errors should never pass silently: 9

The language doesn't eat errors, and generally makes doing so harder than in Python. However, there are multiple ways of handling errors, leading to some confusion.

Unless explicitly silenced: 10

Yes, you can explicitly silence errors.

In the face of ambiguity, refuse the temptation to guess: 11

The type system pretty much disallows ambiguity. This carries through to much of the rest of the language. There are even language extensions that allow the programmer to explicitly declare some cases as "not ambiguous."

There should be one -- and preferably only one -- obvious way to do it: 2

Many functions and operators have multiple names, just to start with. It's not at all uncommon for combinator libraries to have multiple similar combinators, allowing the exact same process to be specified in a combinatorial number of ways.

Although that way may not be obvious at first unless you're Dutch. NA

Not being Dutch, I can't properly evaluate this one.

Now is better than never: 7

There are a number of areas that still need enterprise-quality libraries.

Although never is often better than right now: 7

The Haskell Platform - Haskell's answer to "Batteries Included" - shows signs of some things being done right now that might better have been done never.

If the implementation is hard to explain, it's a bad idea: 9

Most implementations are easy to explain as long as you keep in mind that a monad is a monoid in the category of endofunctors.

In other words, Haskell programs make heavy use of monads (the language even has special syntax for them), which aren't available in commonly used languages. It's not unusual to wind up with stacks of them, so there are libraries for working with those stacks. Many implementations are easy to explain once you get those. See the simple vs. complex koan.

If the implementation is easy to explain, it may be a good idea. 9

See above.

Namespaces are one honking great idea -- let's do more of those! 6

There are no objects. While namespaces nest in let and where clauses, you don't have methods. If you want to use two data types that have what would be a method name in common, you either don't use it, or arrange to refer to it by a different (qualified, for people who know Haskell) name.
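
For example, Data.Map and Data.Set both export insert (along with empty, member, and so on), so the standard idiom is qualified imports:

import qualified Data.Map as Map
import qualified Data.Set as Set

seen :: Set.Set Int
seen = Set.insert 3 Set.empty

counts :: Map.Map String Int
counts = Map.insert "matches" 8 Map.empty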

Beautiful is better than ugly.

And of course, "Readability counts."

The first time I saw a Haskell program, my reaction was "That's uglier than Perl". It turns out that Haskell lets programmers define operators. And they do.

While this might be a disaster with other languages, the functional nature of Haskell means that a common programming paradigm is a function that takes two functions as arguments and returns a function that combines them in some way - a combinator. It's much easier to read - or write - such things as expressions than as actual function invocations.

Even Python libraries recognize this. Things like XPath and regular expressions are generally done with strings holding expressions that are fed to functions to be parsed rather than being broken out into functions that are then combined by other functions. Haskell allows such things to be expressed in Haskell, and hence compiled and checked - including for type - by the compiler.

Of course, this means that when you learn a new library, you may have to learn a new set of operators as well as functions. And since some people want to write code left-to-right and others right-to-left, it's not at all uncommon to have three versions of a function: one that applies the left-hand operand first, one that applies the right-hand operand first, and a function name for those who want that.

The end result is code that can be beautiful and very readable - once you know the operators involved. But it in no way resembles the "executable pseudo-code" that Python has been described as.

So, I'm going to give "beautiful is better than ugly" an 8, because it encourages beautiful code nearly as well as Python does, but doesn't do much to discourage ugly code.

On the other hand, "readability counts" gets a 3, because operators tend to provide much less information than function names.

Summary

Adding them up, we get that Haskell scores an 8.2. That actually feels about right to me. Of course, you're free to disagree. In particular, some of the koans are subject to interpretation, and you may or may not agree with my interpretation, much less my scoring of Haskell on them.

If you disagree strongly, please, publish your own scoring. Or even your scoring of a third language!

And the other way

Haskell has a motto instead of a Zen - "Avoid success at all costs." Python pretty much fails at that completely.

Saturday, December 28, 2013

An arduino anemometer

Why make an anemometer?

There are lots of anemometers on the market, with a lot of different features. What they don't have is a configurable display. By coupling a good wind sensor with an Arduino board and a cheap display, the display can be configured to do things that simply aren't possible with any of the off-the-shelf systems. See my rc blog for more details about this.

Hardware Interface

The interface to the Inspeed sensor is simple - you put voltage on one of the two lines in the sensor, and it'll close the circuit and give you a pulse back each rotation. That's documented as 2.5 mph for one rotation per second. Some of their newer sensors have higher pulse rates - and possibly different interfaces - so consider your application when choosing one.
With that interface, I tied the wind sensor pulse line to an interrupt whose handler just counted pulses:
// The wind sensor interrupt data
volatile static uint16_t sensor_count = 0 ;

// And handler
static void
sensor_tick() {
  sensor_count += 1 ;
}
A system timer was then used to generate a second interrupt that recorded and reset the current count and did some bookkeeping:
volatile static uint8_t current_sensor ;
volatile static uint8_t readings[SPEED_LENGTH] ;
volatile static uint16_t next_speed = 0 ;
volatile static uint8_t wrapped = 0 ;

void
recorder_tick() {

  current_sensor = sensor_count ;   // Warning - could lose a tick here...
  sensor_count = 0 ;
  readings[next_speed++] = current_sensor ;
  if (next_speed >= SPEED_LENGTH) {
    wrapped = 1 ;
    next_speed = 0 ;
  }
}
The main loop of the program just reads the reading array to calculate the various values displayed. Most of it is uninteresting, but will be included in the code segment at the end.

Display

The goal was to display an easy to see indicator for max and average wind speed. The one interesting part of that is how the color is calculated. We want different colors for different wind conditions, as shown by this table:

Average↓ Gust→    Low      Medium    High
Low               Green    Cyan      Yellow
Medium            NP       Blue      Magenta
High              NP       NP        Red
If it's not obvious, the NP ones are cases where the gust speed is lower than the average, which is impossible.
In this day and age, when system memory is measured in gigabytes and programs easily top hundreds of megabytes, a straightforward else-if chain with some nested conditions to select color values is the obvious way to do things. If we have to repeat a few conditions, well - they don't take enough memory to matter.
Except we're running on an Arduino mini, where memory space is measured in K, and every byte might count. So I have an excuse to go old school, and figure out how to squeeze the color calculation code into the least amount of space.
Here's the code to set the color on the LCD:
  // Pick a color...
  memset(colors + 1, 0, 3) ;
  colors[color_of(avg) + 1] = 255 ;
  colors[color_of(top) + 1] = 255 ;
  do_command(&lcd, 4, colors) ;
The first byte of the colors array is the set color command for the LCD. The other three are the values to use for red, green and blue - in that order. So for both the average and top wind speed, we're going to set either red, green or blue to on instead of off. Given that, the color of a wind speed just needs a cunning plan [1]:
// Select a color with a cunning plan
static uint8_t
color_of(uint8_t speed) {
  if (speed >= RED_SPEED) return 0 ;
  if (speed >= BLUE_SPEED) return 2 ;
  return 1 ;
}
So if the two values are the same, we get either a red (no flying now), blue (maybe the larger craft) or green (why aren't you outside flying?) display. If the gusts are noticeably faster, we'll get cyan, yellow or possibly magenta instead.

Conclusion

This actually works quite well in practice - that green is really obvious, and the others are closely enough related that I can remember which is which without too much trouble.
Here's the complete Sketch:
#include <TimerOne.h>
#include <SoftwareSerial.h>
#include <string.h>

// Values that control display
#define BLUE_SPEED  8   // Minimum speed for blue display
#define RED_SPEED   21  // Minimum speed for red display

/*
 * Values that control recording. Note that DELAY_SECONDS and
 * DELAY_SECONDS * TREND_BUCKETS must both go evenly into RECORD_TIME.
 */
#define RECORD_TIME 900 // Length of the recording
#define DELAY_SECONDS   1   // Sample time length. Max of 8.
#define TREND_BUCKETS   3   // # of buckets to use for trend checking.

// Physical constants
#define SENSOR_INT  1   // Meaning pin #3
#define SERIAL_OUT  2   // Serial line to the display
#define LINE_LENGTH 16  // Length of output line on LCD

// Display (TREND_* maps to first custom char for that trend).
#define TREND_UNKNOWN   0
#define TREND_DOWN  2
#define TREND_EVEN  4
#define TREND_UP    6

// 1 PULSE/SEC = 2.5 MPH
#define PULSES_TO_SPEED(count) (((count) * 5 + DELAY_SECONDS) / (DELAY_SECONDS * 2))

// The wind sensor interrupt data
volatile static uint16_t sensor_count = 0 ;

// And handler
static void
sensor_tick() {
  sensor_count += 1 ;
}

// The recording instrument data
#define SPEED_LENGTH (RECORD_TIME / DELAY_SECONDS)

// Length of trend buckets in the record
#define BUCKET_LENGTH   (SPEED_LENGTH / TREND_BUCKETS)

volatile static uint8_t current_sensor ;
volatile static uint8_t readings[SPEED_LENGTH] ;
volatile static uint16_t next_speed = 0 ;
volatile static uint8_t wrapped = 0 ;

void
recorder_tick() {

  current_sensor = sensor_count ;   // Warning - could lose a tick here...
  sensor_count = 0 ;
  readings[next_speed++] = current_sensor ;
  if (next_speed >= SPEED_LENGTH) {
    wrapped = 1 ;
    next_speed = 0 ;
  }
}

// Select a color with a cunning plan
static uint8_t
color_of(uint8_t speed) {
  if (speed >= RED_SPEED) return 0 ;
  if (speed >= BLUE_SPEED) return 2 ;
  return 1 ;
}

SoftwareSerial lcd = SoftwareSerial(0, SERIAL_OUT) ;

void
do_command(SoftwareSerial *lcd, uint8_t len, const char *data) {
  lcd->write(0xFE) ;
  while (len--)
    lcd->write(*data++) ;
  delay(10) ;
}


void
setup() {
#ifdef MAKE_CHARS
  char chars[][11] = {
    {0xC1, 1, 0, B01001, B00101, B00011, B11111, B11111, B00011, B00101, B01001},
    {0xC1, 1, 1, B10010, B10100, B11000, B11111, B11111, B11000, B10100, B10010},
    {0xC1, 1, 2, 4, 2, 1, 0, 0, 0, 0, 0},
    {0xC1, 1, 3, 0, 0, 0, B10001, B01001, B00101, B00011, B11111},
    {0xC1, 1, 4, 0, 0, 0, B11111, B11111, 0, 0, 0},
    {0xC1, 1, 5, B01000, B00100, B00010, B11111, B11111, B00010, B00100, B01000},
    {0xC1, 1, 6, 0, 0, 0, 0, 0, 1, 2, 4},
    {0xC1, 1, 7, B11111, B00011, B00101, B01001, B10001, 0, 0, 0},
    } ;
#endif

  pinMode(3, INPUT_PULLUP) ;
  attachInterrupt(SENSOR_INT, sensor_tick, RISING) ;
  Timer1.initialize(1000000 * DELAY_SECONDS) ;
  Timer1.attachInterrupt(recorder_tick) ;

  lcd.begin(57600) ;
  delay(10) ;

#ifdef MAKE_CHARS
  do_command(&lcd, 33, "\x40 Meyer Heliport Wind Conditions ") ;
  for (uint8_t i = 0; i < 8; i += 1)
    do_command(&lcd, 11, chars[i]) ;
#endif
  do_command(&lcd, 1, "\x4B") ;     // Turn off underline cursor
  do_command(&lcd, 1, "\x54") ;     // blinking cursor
  do_command(&lcd, 1, "\x52") ;     // and autoscroll
  do_command(&lcd, 2, "\xC0\01") ;  // Get arrows
}

void
loop() {
  int8_t trend ;
  int16_t end, start ;
  uint8_t cur, top, now, avg, bucket_count ;
  uint16_t i, bucket_counter ;
  uint32_t sum, bucket_sum ;
  int32_t buckets[TREND_BUCKETS] ;
  char bhold[LINE_LENGTH + 1], colors[4] = {0xD0} ;

  end = next_speed ;
  cur = current_sensor ;
  start = wrapped ? end : 0 ;
  trend = TREND_UNKNOWN ;
  if (!wrapped && end == 0) {   // Haven't gotten a sample yet.
    avg = top = cur  ;
  } else {          // Do some stats...
    i = start ;
    bucket_sum = sum = bucket_counter = bucket_count = top = 0 ;
    do {
      now = readings[i] ;
      if (now > top)
        top = now ;
      sum += now ;
      bucket_sum += now ;
      bucket_counter += 1 ;
      if (bucket_counter == BUCKET_LENGTH) {
        buckets[bucket_count++] = bucket_sum ;
        bucket_counter = bucket_sum = 0 ;
      }
      i += 1 ;
      if (i >= SPEED_LENGTH)
        i = 0 ;
    } while (i != end) ;
    avg = sum / (wrapped ? SPEED_LENGTH : end) ;

    if (bucket_count == TREND_BUCKETS) {
      for (i = 0; i < TREND_BUCKETS - 1; i += 1) {
        if (abs(buckets[i] - buckets[i+1]) > max(buckets[i], buckets[i+1]) / 8) {
          trend = TREND_EVEN ;
          break ;
        } else if (buckets[i] < buckets[i + 1]) {
          if (trend == TREND_UNKNOWN)
            trend = TREND_UP ;
          else if (trend == TREND_DOWN) {
            trend = TREND_UNKNOWN ;
            break ;
          }
        } else if (buckets[i] > buckets[i + 1]) {
          if (trend == TREND_UNKNOWN)
            trend = TREND_DOWN ;
          else if (trend == TREND_UP) {
            trend = TREND_UNKNOWN ;
            break ;
          } else if (trend == TREND_UNKNOWN) {
            trend = TREND_EVEN ;
          }
        }
      }
    }
  }

  avg = PULSES_TO_SPEED(avg) ;
  top = PULSES_TO_SPEED(top) ;
  cur = PULSES_TO_SPEED(cur) ;

  // Now, display the data
  do_command(&lcd, 1, "\x58") ; // clear screen
  do_command(&lcd, 1, "\x48") ; // go home

  // Pick a color...
  memset(colors + 1, 0, 3) ;
  colors[color_of(avg) + 1] = 255 ;
  colors[color_of(top) + 1] = 255 ;
  do_command(&lcd, 4, colors) ;

  lcd.println("Cur Avg Max Tnd") ;
  snprintf(bhold, LINE_LENGTH + 1, "%3d %3d %3d  ", cur, avg, top) ;
  lcd.print(bhold) ;
  lcd.write(trend) ;
  lcd.write(trend + 1) ;
  delay(1000) ;
}

  1. If you don't think it's a cunning plan, you're probably not a fan of Blackadder

Friday, November 15, 2013

Threads vs. processes for parallel processing

Introduction

Concurrency is a hot topic these days, largely because multi-core systems are the norm rather than the exception, and have been for a while. The thing is, multi-core isn't as much of a win for concurrent operations as it is for parallel processing. While the difference is subtle, it's also crucial for deciding what tools you want to use.

Some definitions

So, what are concurrent operations, parallel processing, and the tools I'm talking about? Or more importantly, given that they are used in a number of different ways in the industry, how am I using them?

Parallel processing

Parallel processing means carrying out multiple computations at the same time. Waiting for external devices is not carrying out a computation, so doing computing while IO is going on isn't parallel processing - at least not in this document.

Concurrent operations

Concurrent operations means doing multiple things at the same time. Unlike parallel processing, concurrent operations includes doing IO. In fact, doing nothing but IO can qualify. Parallel processing is a form of concurrent operations, but I'm going to exclude parallel processing from the definition of concurrent operations in this document as a matter of convenience.

Thread of execution

A thread of execution is a series of instructions that are to be executed by the CPU in order. The operating system may interrupt the thread and allow other threads of execution to operate between instructions in a thread. This really isn't a term used in the industry, but I wanted a term that covered both threads and processes. I'm going to use it to avoid saying threads and processes or thread or process all the time.

Process

A process is a thread of execution plus an address space. The address space is not shared with other processes. Unix was a revolutionary step forward for processes. Before Unix, processes tended to be expensive and painful to create. Users usually got one process, and running a program would involve the command processor (which was running in the user's process) loading the executable into the user's address space and then calling the executable as a subroutine. It was possible to do concurrent operations in a process, and even have multiple threads of execution active, but it involved executable images specifically built for the purpose, since loading a standard executable would destroy the one that was running. By making processes easy and cheap to create from standard binaries, Unix allowed the command processor to run in its own process, meaning that concurrent operations could be done by simply having the command processor not wait for a program to finish before prompting the user for the next command.

Thread

A thread is a popular - and very effective - tool for doing concurrent operations. A thread is just a thread of execution that shares its address space with other threads. Threads work very well for concurrent IO, since you can just code the IO normally, and then sync with the main process when it's done. For parallel processing, not so well. I'm going to discuss those problems next.

The problems with threads

Using threads creates a couple of problems that using processes avoids.

Shared everything

Actually, there's only one problem with threads, but it creates a number of issues. The problem is that all your data is shared implicitly. The immediate effect of this is that you have to figure out which of your data objects might be modified in different threads, and do something to make sure that doesn't happen concurrently.

In my experience, the biggest source of concurrency problems in threaded programs is programmers who didn't realize that some object they were modifying could be modified by another thread. I have seen the claim - from people I trust and respect - that a properly designed programming language (meaning not C++ or Java) and programming practices can avoid these problems. I haven't used that language, and find it hard to believe that masses of programmers will all follow proper practices. Maybe if a single entity controlled the language and everyone learned from their examples - which may well be the case for this language.

Elsewhere, functional programming languages that make most data immutable are catching on as a solution, allowing either the language runtime or type system to catch such errors.

Once you've identified those places, you've got to arrange to provide the appropriate interlocks. Language constructs that do this automatically are another hot research topic.

Fragility

If your program does something completely unexpected, then you may well have no idea what state it's in. The question becomes - how do you deal with this situation? If you are using threads for concurrency, then you have to worry about what the rogue thread may have done to data being processed by another thread, even if they were logically unconnected - one of the consequences of implicitly sharing everything. To be reasonably sure that everything that might have been corrupted is discarded, you have to discard all the threads in the process.

If you are using processes for concurrency, discarding that process provides the same level of assurance, but that's only one execution thread's worth of work.

So using threads makes your programs fragile.

No distributed processing

If you're dealing with any significant amount of data, you're going to want to have multiple machines working on it. If your application is built using processes for concurrent operations and sockets for data sharing, this is trivial. All you have to do is add some kind of rendezvous mechanism.

If you're using threads, on the other hand - well, you pretty much have to start over to get any communications between the execution threads at all.

Sharing data between processes

Given that processes don't share an address space, the obvious issue is how do you get data between them. I've actually had proponents of threads claim that you couldn't share data between processes without serializing it (though to be fair, they were probably trolling). While that is one solution - and pretty much required for large problems - there are alternatives, each with their own downside.

The important thing to realize is that any non-trivial program will have multiple types of information to be shared, each of which will have its own requirements. Using threads provides a single solution you have to use, even if it doesn't fit the requirements for the data at hand. If you're used to that model, you'll need to get past it and think about how you're actually using your data.

Explicit sharing

You can always share memory explicitly between processes. This provides the same level of communications as threads, without the nasty implicit sharing of everything. The downside is that I know of no language that provides language-level support for ensuring that everything you need to share winds up in the shared memory segment. This means you are in practice limited to machine primitives composed with arrays and records. In my experience this hasn't been a problem, as most of the cases where you need this level of communications use such data.

Serial communications

You can also serialize the data and write it outside of your process. This is required if you want to distribute the processing of your data to more than one machine. The downside is that - well, compared to shared memory, it's expensive. You have to serialize and deserialize the data, and copy it between the processes in serialized form. If you have lots of communications going on (not the same thing as lots of data - passing one byte back and forth a million times is as bad as passing a megabyte back and forth once), this won't work very well. Unless, of course, you have more data than will fit in the address space, in which case you pretty much have to do things this way.

Disk communications

An interesting variant on the idea of serial communications is storing the data on disk. This has the advantage of preventing data loss.

You store each chunk of data in a file in a queue directory. Each worker process scans the queue directory, trying to get an exclusive lock on each file until it succeeds in doing so. It processes the data therein, writing the output to a locked file in a second queue directory. When it's done, it can unlock the output file and then remove the input file. There's no chance of data loss, though you could wind up with duplicated data. Adding a work directory between the two queues reduces that problem, but then requires code to clean up the work directory at startup, moving input files back to the input queue and removing output files.

Inheritance

If you have a data structure small enough to fit in memory that all the processes need read access to, and any writes are not shared between processes, inheriting the data by creating it in the parent and forking to pass it to children works very well.

Configuration data for the application often fits this mold. Having all the configuration data handled by the parent - rather than having it in distinct files and the parent passing along the file name - means the parent can verify things between processes, like that the output of one process is correctly connected to the input of the next. Better yet, it can provide intelligent defaults for some values, and share configuration information between processes, which has resulted in reducing configuration information by 99% in at least one case.

The downside is that the communications is one-directional. If you need to communicate a value back to the parent, you'll have to find another method to do so.

Exit status

This is another one-directional channel, and is very narrow - basically a single byte. While it doesn't seem very useful, it's used all the time in Unix systems. In every conditional statement in a shell script, the condition being tested is the exit status of a child process (unless it's a builtin, anyway). Any application that does process-based concurrency needs to deal with these, if for no other reason than to check that the child process actually finished properly.

A real-world example

I was involved in refactoring a credit card payment processing system from a single monolithic, threaded program into a distributed application.

Being distributed was part of the requirements, as one of the goals of the refactoring was to decrypt the financially sensitive data on machines that weren't exposed to the network. So serial communications was used between hosts on the periphery of the network and the internal hosts with the crypto hardware.

The configuration information on each host was created by the parent process and inherited by the workers.

Each machine had a state table for all open connections to it. This was explicitly shared between the processes, so that the parent could correctly deal with child process failing, so that 1) no transaction was reported as completed until it was actually completed, and 2) no transaction was processed twice. Further, it was shared via a file mmaped to disk, so that these same guarantees could be made if the entire machine failed.

Miscibility

OK, so why not just mix and match threading and processes? Well, there's one last little issue with everything being implicitly shared. In order to fork safely, you need to know things about the state of all the locks - both external and internal - in your program. Getting that information becomes much more difficult if you have locks being used in multiple threads - and when do you use threads without some form of locking?

Not every case where a lock is held by a thread that doesn't exist in the child - and hence will never be released there - is a disaster, but a lot of them are. If an internal lock is held by another thread when you fork and you need it, you're hosed, because you'll never be able to get it. External locks you may or may not be able to get, even if you shouldn't be able to when the parent has them, and freeing them may or may not cause them to be freed for everyone, even if the parent is still using them.

Mixing threads and forks arbitrarily can be done, but - well, it's not at all clear it's less trouble than simply staying with threads for everything. On the other hand, if you limit each process to one or the other - parents never create threads but use forks to create workers, which use threads if they need them, but never fork - then you'll be OK.

Except, of course, if you're building something like a CI system, where what the workers do is fork off processes to build things. In that case, workers should avoid using threads.