Friday, February 25, 2011

Adding a VCS to zsh's vcs_info

Modern shells have features that let you add dynamically changing information to the shell's prompt. Using these, it's fairly easy to add information from your version control software about your current working copy - if you're in one - to your prompt. Just google for prompt shell vcs and you'll probably get hits on a couple of different pages with instructions (different instructions, naturally) for doing so.

While this is fine if you work with one VCS all - or even most - of the time, it doesn't work very well for a consultant, who generally has to work with whatever VCS each client is already using, and so changes VCSs on a regular basis. While modern DVCSs alleviate that to some degree by being able to interoperate with other VCSs, they generally don't cover them all. I have working copies from svn, perforce, git, mercurial, fossil, and even a few still using CVS on my local machine.

Fortunately, zsh has a solution for this. Among its contributed modules is vcs_info, which includes a command that figures out if you're in the workspace of some VCS, and extracts the info from that VCS into a fixed set of variables you can use to set your prompt. Again, setting this up for a supported VCS is fairly easy, and if you googled for prompt zsh vcs earlier you probably found a couple of pages with instructions (still different, though) on this.

Fossil - which is what I'm using most now - wasn't among the (actually rather impressive) list of supported VCSs. Nor did google turn up instructions on adding one. Having gathered that information from reading the source, I hope to save those following in my footsteps by documenting the steps - at least for the current vcs_info implementation (in zsh 4.3.11).

Personally, I prefer using zsh's RPROMPT feature for such information, which places it on the right side of the screen instead of the left. This keeps the normal prompt uncluttered and short. Here's a screen capture of vcs_info at work, showing the changes to the RPROMPT:

Following along (on the right), the first line just has a directory name. The command opens a working copy in that directory, and the RPROMPT changes to add the vcs text |fossil:trunk@ab7d5 in green. That tells me I'm in a fossil working copy, on the trunk with change set ab7d5 checked out. I create a new file and use the fossil add command to add it to the repository. While the RPROMPT has the same text, the vcs info changes color to red, telling me that I've got uncommitted changes in the workspace. Those are traffic-light color codes - green means Go on and do destructive things here, whereas red means Stop and save the changes before doing destructive things here.

Checking that change in changes the vcs part of the RPROMPT color back to green, but with a different change checked out after the @-sign. I now create another change, and check it in on a new branch. The RPROMPT obligingly changes the vcs text to |fossil:test@8f9d3, showing the new branch and change set. I update back to the trunk - and the RPROMPT follows - and merge that change to show off the last feature. After I've started the merge, the vcs part of RPROMPT becomes |fossil@merging:trunk@108f1 - and in red. This tells me that the uncommitted changes are there because they're from a merge command. I commit those, and the RPROMPT changes back to green and loses the merging indicator.

Detecting the VCS

For vcs_info to work with your VCS, you need to provide two shell scripts. One looks up the directory tree to see if it can find the root directory of a working copy checked out from that VCS. Fortunately, there are vcs_info tools to make this easy, so that the script for that is rather short. Its name is VCS_INFO_detect_vcs, where vcs is the command to run the VCS in question. So for fossil, it has to be named VCS_INFO_detect_fossil, and contains:
## vim:ft=zsh
## fossil support by: Mike Meyer <mwm@mired.org>
## Distributed under the same BSD-ish license as zsh itself.

setopt localoptions NO_shwordsplit

# Fossil has no flavours to report (see below).
[[ $1 == '--flavours' ]] && return 1

# Fail if the fossil command isn't available.
VCS_INFO_check_com ${vcs_comm[cmd]} || return 1
# A directory is only the root of a working copy if it holds _FOSSIL_.
vcs_comm[detect_need_file]=_FOSSIL_
VCS_INFO_bydir_detect . || return 1

return 0

After the comment header, the script checks to see if it's being asked about flavours, which fossil doesn't support, so it exits with a 1. Flavours are used by DVCSs that can talk to different kinds of servers, to indicate which flavour of server this working copy is cloned from. If fossil had such flavours, it would print out a space-separated list of them and then exit with a 0. In the body, it would then overwrite vcs_comm[overwrite_name] with the flavour name, which it presumably figures out by sniffing through the on-disk VCS data. You can see this at work in either VCS_INFO_detect_git or VCS_INFO_detect_hg.
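If fossil ever grows flavours, the script would look roughly like this instead (a hypothetical sketch modeled on VCS_INFO_detect_git; the flavour name myflavour and its marker file are made up):

## Sketch only - fossil has no flavours; modeled on VCS_INFO_detect_git.
## 'myflavour' and 'myflavour-marker' are invented for illustration.
[[ $1 == '--flavours' ]] && { print myflavour; return 0 }

VCS_INFO_check_com ${vcs_comm[cmd]} || return 1
vcs_comm[detect_need_file]=_FOSSIL_
VCS_INFO_bydir_detect . || return 1

# Sniff the working copy VCS_INFO_bydir_detect found to pick the flavour.
[[ -e "${vcs_comm[basedir]}/myflavour-marker" ]] && \
    vcs_comm[overwrite_name]='myflavour'
return 0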

The next three lines of the fossil script do the actual work. The array $vcs_comm is used to pass around information about the VCS. In particular, ${vcs_comm[cmd]} contains the command name, and VCS_INFO_check_com will try to find that command. If it fails, the script fails as a result.

VCS_INFO_bydir_detect does most of the work. It walks up the directory tree starting at the current working directory, looking for a directory whose name matches its argument (since most VCSs store their metadata in a directory at the root of the working copy). If vcs_comm[detect_need_file] is set, that directory must also contain a file by that name. Since fossil creates only the single file _FOSSIL_ at the root of the working copy, rather than a metadata directory, the script passes . as the directory and _FOSSIL_ as the required file. VCS_INFO_bydir_detect fills in yet more of vcs_comm and exits with success if it finds a match; otherwise it exits with failure, and our script does likewise.

The last line is our exiting with success, indicating that the current directory is in a fossil working directory, and that vcs_comm has been filled out.

Getting the VCS information

The other half of adding support for a VCS is the VCS_INFO_get_data_vcs script. For fossil, this file is VCS_INFO_get_data_fossil, and can be found in its entirety in the mired-in-code repository.

The crucial step in VCS_INFO_get_data_fossil is the invocation of VCS_INFO_formats near the end. This is used to pass the information from the VCS to vcs_info so it can set the shell variable used in your prompt correctly:
VCS_INFO_formats "$action" "${fsbranch}" \
    "${fsinfo[local_root]}" '' "$changed" "${fsrev}" \
    "${fsinfo[repository]}"
return 0
Those arguments are, in order (their mapping to prompt escapes is sketched after the list):
  1. Any action currently going on (i.e., the merging text in the RPROMPT).
  2. The value to use for the branch name (which actually includes the revision information - more on that later).
  3. The root directory for this repository.
  4. Whether or not there are staged changes (this is a git thing, not used by most VCSs, and empty here).
  5. Whether or not there are any uncommitted changes (which triggers the change to red in my RPROMPT).
  6. The revision number of the checked out revision.
  7. Whatever miscellaneous information is felt to be appropriate here. For fossil, I use the repository file.
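For reference, here's my reading of how those arguments line up with the prompt format escapes documented in zshcontrib (worth double-checking against your zsh version):

# VCS_INFO_formats argument -> vcs_info format escape
#   1 action   -> %a            5 changed  -> turns %u on
#   2 branch   -> %b            6 revision -> %i
#   3 root     -> %R (%r is its basename)
#   4 staged   -> turns %c on   7 misc     -> %m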
The job of the VCS_INFO_get_data_vcs script is to gather that data. Fossil provides all the data vcs_info needs via the status command, so the script starts by running that and saving the results in an array:
# fsinfo is an associative array (declared earlier in the full script).
${vcs_comm[cmd]} status | \
     while IFS=: read a b
     do fsinfo[${a//-/_}]="${b## #}"; done
fshash=${fsinfo[checkout]%% *}
changed=${(Mk)fsinfo:#(ADDED|EDITED|DELETED|UPDATED)*}
merging=${(Mk)fsinfo:#*_BY_MERGE*}
if [ -n "$merging" ]; then
   action="merging"
fi
The output of the status command is fed to a while loop that loads the values into the fsinfo array, turning - into _ in the keys and stripping leading spaces from the values. The script then pulls out some values that need massaging: the hash has the date stripped off the end (but keeps the entire 40-digit hash around for the user!), whether or not there are changed files is determined by looking for keys matching ADDED|EDITED|DELETED|UPDATED, and the action - whether or not a merge is in progress - is determined by looking for *_BY_MERGE, which causes the script to set action to merging if present.
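For reference, the status output being parsed looks roughly like this (an illustrative sample - the hashes are fake, and the exact fields vary between fossil versions):

repository:   /home/mwm/repos/project.fossil
local-root:   /home/mwm/src/project/
checkout:     ab7d5c40b1... 2011-02-25 14:33:12 UTC
parent:       108f13e6a2... 2011-02-24 09:12:41 UTC
tags:         trunk
EDITED     src/main.c
ADDED      doc/notes.txt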

The next step is to build the revision string. The fossil code adds the zstyle variable fsrevformat to let the user control the revision string format in one place - where by control, I mean decide how many digits they want to see. The revision data shows up in a number of places, but it will always be formatted as per fsrevformat, unless the user overrides that somehow:
# Build the revision display
zstyle -s ":vcs_info:${vcs}:${usercontext}:${rrn}" \
    fsrevformat revformat || revformat="%.10h"

hook_com=( "hash" "${fshash}" )

if VCS_INFO_hook 'set-fsrev-format' "${revformat}"
then
    zformat -f fsrev "${revformat}" \
        "h:${hook_com[hash]}"
else
    fsrev=${hook_com[rev-replace]}
fi

hook_com=()
The first part of this fetches the value of the fsrevformat variable for the current context (you'll need to read the zsh documentation for an explanation of contexts - I'll just say it lets variables change values depending on what's going on), falling back to a default of %.10h, which means use the first 10 digits of the hash.

The hook_com array is used to communicate values to user-provided hooks, if present. For setting the revision value here, hook_com gets the full hash value under the key hash. The script runs the hook for setting this value - named set-fsrev-format - if one exists; if the hook supplies a replacement, that's used as-is, and otherwise the script formats the value itself with the zformat command. Finally, the script clears the hook_com array for later use.
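To make the hook half of that concrete, here's a sketch of a user-side hook, based on my reading of the zshcontrib hooks documentation (the hook name shorten-fsrev is made up):

# In ~/.zshrc: register a hook at the set-fsrev-format point...
zstyle ':vcs_info:fossil+set-fsrev-format:*' hooks shorten-fsrev

# ...and define it. hook_com[hash] holds the full 40-digit hash.
# Setting the rev-replace entry and a non-zero ret tells vcs_info
# to use that value instead of running zformat itself.
function +vi-shorten-fsrev() {
    hook_com[rev-replace]="${hook_com[hash][1,4]}"
    ret=1
}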

The last step follows much the same pattern: setting up the branch information:
# Now build the branch display
zstyle -s ":vcs_info:${vcs}:${usercontext}:${rrn}" \
    branchformat fsbranch || fsbranch="%b:%r"

hook_com=( branch "${fsinfo[tags]%%, *}" \
           revision "${fsrev}" )

if VCS_INFO_hook 'set-branch-format' "${fsbranch}"
then
    zformat -f fsbranch "${fsbranch}" \
        "b:${hook_com[branch]}" \
        "r:${hook_com[revision]}"
else
    fsbranch=${hook_com[branch-replace]}
fi

hook_com=()
The logic here is identical to the logic for setting the revision format:
  1. Get the appropriate variable - in this case branchformat, which I believe should be used by all the vcs_info facilities. The default value of %b:%r is documented in the zshcontrib manual page.
  2. Set up hook_com with the branch and revision information. Note that it gets the revision number as previously formatted as well as the branch name.
  3. Check for a hook, and either run it or use zformat to format the branch text.
  4. Clear out hook_com to avoid confusing anyone that might follow us.
After this is done, we just run the VCS_INFO_formats function with the data we've collected filled into the right spots.

My zsh configuration

If you just use this, you won't get the results I displayed above. For that, you need the following set in your running zsh:

# Now reset the prompt to get colors
colors

# Turn on and configure the version control system information
autoload -Uz vcs_info
precmd () { vcs_info }
zstyle ':vcs_info:*' get-revision true
zstyle ':vcs_info:*' check-for-changes true
zstyle ':vcs_info:*' formats '%u%c|%s:%b'
zstyle ':vcs_info:*' actionformats '%c%u|%s@%a:%b'
zstyle ':vcs_info:*' branchformat '%b@%r'
zstyle ':vcs_info:*' unstagedstr "%{$fg_no_bold[red]%}"
zstyle ':vcs_info:*' stagedstr "%{$fg_no_bold[yellow]%}"
zstyle ':vcs_info:*' enable fossil hg svn git cvs # p4 off, but must be last.

# vcs-specific formatting...
zstyle ':vcs_info:hg*:*' hgrevformat "%r"
zstyle ':vcs_info:fossil:*' fsrevformat '%.5h'
# Silly git doesn't honor branchformat
zstyle ':vcs_info:git*:*' formats '%c%u|%s@%a:%b@%.5i'
zstyle ':vcs_info:git*:*' actionformats '%c%u|%s@%a:%b@%.5i'

# now use the blasted colors!
setopt PROMPT_SUBST
RPROMPT='%{$fg_no_bold[magenta]%}%~%{$fg_no_bold[green]%}${vcs_info_msg_0_}%{$reset_color%}'
This turns on colors by name, then configures the general vcs_info appearance. There's a little VCS-specific tweaking for hg and fossil (their revision format variables) and for git (which ignores branchformat for some reason). Finally, it sets RPROMPT to use all this.
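For concreteness, with those settings the right-hand prompt from the screen capture renders (ignoring color) as something like this - first normally, then mid-merge; the directory name is made up:

~/src/project|fossil:trunk@ab7d5
~/src/project|fossil@merging:trunk@108f1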

Saturday, February 19, 2011

Fossil - a sweet spot in the VCS space

Over the course of my career, I've dealt with clients using a variety of different VCS systems, starting with RCS, and including at least one that was developed by the client for in-house use. I recently had reason to try the generally little-known fossil DVCS, and was pleasantly surprised.

The Sweet Spot

Fossil, unlike other DVCSs I'm familiar with, doesn't include the repository in the working directory. Instead, the repository can be on any locally accessible disk - much like a server VCS system run without the server. You can then check out multiple workspaces - in different directories, of course - from the same repository, again like a server VCS. This has a couple of advantages.

First, consider the following scenario. I'm working on the next version of a client's product, busy adding the cutting-edge features that keep it in constant demand. A "how did we release it with that" bug (introduced by some other developer) shows up, and I'm asked to fix it yesterday. After getting the fix tested and committed to the appropriate release branch, I'll want to evaluate it and possibly merge it to the development branch I've been working on. We can ignore the hard part of dealing with the bug - debugging our processes to figure out how we managed to release a product with such a nasty bug in the first place.

In general, I have two choices in how to work on this bug. I can either commit any outstanding changes in my current workspace and then switch it to the release branch, or I can use a different workspace. Doing the commit is painful unless I happen to be close to one anyway - sufficiently so that more than one VCS (including recent snapshots of fossil) has tools for saving and restoring uncommitted changes while moving between branches, or even for moving them between branches. With a server VCS, creating a second workspace is easy - I just create the second workspace and check out the release branch. In practice, I probably already have such a workspace set up, because I'll have been dealing with the release code all along. Merging the fix back into development is trivial - I just issue the appropriate merge command.
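For fossil, the save-and-restore tools mentioned above are the stash commands that showed up in recent snapshots. A sketch, with made-up branch names - check fossil help stash for the exact syntax in your version:

fossil stash save -m 'wip: cutting-edge feature'   # set uncommitted work aside
fossil update release                              # switch this workspace to the release branch
# ... fix, test, and commit ...
fossil update trunk                                # back to development
fossil stash pop                                   # restore the set-aside work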

With most DVCSs, creating a second workspace involves creating a second repository. The real difference between a DVCS and a server VCS will come when I decide to merge the fix from the release branch. I'll have to pull the fix into my development repository before I can do the merge. Exactly how hard this is will depend on the VCS configuration. If the branches are actually branches in a central repository, then I can just update and merge. If the branches are represented by different repositories, then I'll have to pull from the release repository - or my release repository - instead of just updating before I can do the merge. Either way, it's a slightly more complicated process, making it take just a little longer and be just a little bit more likely to go wrong.

With fossil, I can do things either way: I can create a new repository for the release branch and open a workspace from that, or (if the branch is represented by a fossil branch) I can check out the release branch from the repository I'm already using in a new workspace. By not tying repositories to workspaces, fossil has the flexibility of both server VCSs and DVCSs.
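A minimal sketch of that second approach, with made-up paths and branch name:

# One repository, a second workspace for the release branch.
mkdir -p ~/work/project-release
cd ~/work/project-release
fossil open ~/repos/project.fossil release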

Another place fossil can act like either a server VCS or a DVCS is in copying changes to/from a remote URL. If set to autosync mode, a commit to a local repo will automatically push to the default remote repo, and an update from the local repo will do a pull from the remote repo before doing the update. In essence, fossil acts just like a server VCS if you turn autosync mode on. This is a sufficiently popular mode of operation that other DVCSs can do it too, though they usually require a little external help.
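Turning that on is a single setting - something like this, if I remember the syntax correctly (see fossil help settings):

fossil settings autosync on   # commits now push, and updates pull first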

A final place fossil hits near the sweet spot is workspace pollution. Being a long-time Perforce user, I find VCS systems that leave lots of VCS metadata files in my workspace a pain. I constantly find myself running some generic unix command for dealing with trees, seeing those files pop up in the results, cursing, and then either running the VCS-specific command for that purpose, or re-running the unix command with an appropriate filter added to remove them. Perforce wins this one - it doesn't put metadata files in my workspace. Fossil is a close second, having only one.

Philosophy


One attractive thing - to me, anyway - about fossil is the philosophy. The developers believe that development history is in the past, and hence should be unchangeable. Most server VCSs do this as well (though the knowledgeable can edit the on-disk representation of history in an emergency), but DVCS developers seem to think that creating a linear development history in the repository is important functionality, even if the result is a lie. Fossil doesn't support this (again, modulo hand editing the on-disk repository), which I find attractive, though others may not.

The target audience for fossil is the small workgroup. It isn't designed for handling a distributed project with thousands of developers and multiple layers of evaluators looking at patches and either rejecting them or committing and passing them upstream to eventually reach the release engineer. While it seems to have all the features required to work in such a project, that is not its target, and I'm not really qualified to decide whether it would work as such or not.

Other wins


Installation


What prodded me to look at fossil in the first place was needing a DVCS that ran on a relatively old platform. After a day trying to get one of a more popular DVCS's requirements - a third party library that comes with, or as an optional package on, most modern Unix systems - to build, I decided to see if fossil could be used instead. While it didn't build out of the box - it needed that same third party library - turning off a config option for features I didn't need easily solved that.

Even better, compared to other DVCSs, fossil is a simple install. There's just one file - the fossil binary - to install. Or remove if you decide you don't like it, or update if you want to install a newer version.

Server options

Like most DVCSs, fossil has a command to start a server so it can be used for ad-hoc push/pull/cloning over the network. Unlike most of the others, the fossil binary can be used in most common server setups with little or no extra work. It can run natively out of inetd, passing it either a repository or a directory of repositories (each ending in .fossil) as an argument, to serve either that repository or the repositories in that directory.
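The inetd.conf entry is a single line along these lines (the port, user, and path here are illustrative - check the fossil server documentation for specifics):

12345 stream tcp nowait.1000 root /usr/bin/fossil /usr/bin/fossil http /home/fossil/repos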

Fossil can also run as a CGI script, requiring a two or three line fossil script - yes, it starts with #!/usr/bin/fossil or the equivalent - that points to the repository or a directory of repositories as per the inetd invocation.
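The whole script is about this - the repository path is made up, and a directory: line pointing at a directory of repositories works as per the inetd invocation:

#!/usr/bin/fossil
repository: /home/fossil/repos/project.fossil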


While these options might not be suitable for a large project, they are perfectly adequate for the small workgroup that is fossil's target audience.

A complete project management solution

DVCSs that do VCS operations over HTTP often provide ways to get human-readable information out of the repository. After all, this is a thing people commonly want to do with source repositories. So it's not surprising that fossil does that. But the fossil server also does authentication, and allows logged-in users to perform pretty much any operation that can be performed at the command line. This is the recommended GUI for those who want one.
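There's not even any setup needed to use it locally; if I remember the command correctly, a single invocation serves the repository behind the current working copy and opens it in your browser:

fossil ui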

Further, the fossil server provides the facilities you expect to find on a repository hosting service: a wiki, a blog, and an integrated issue tracking system, plus the ability to customize all of those things. Even better, having all of these in a distributed VCS means they are all distributed as well. You can edit wiki pages or modify tickets locally, and then push them out to a remote repository.

While none of these facilities has as many features as a dedicated external application, all of them are perfectly adequate for the small groups that this project targets. Further, installation and configuration - when compared to, say, setting up mercurial and trac - are nearly trivial.

Almost all of my clients would have been able to use the fossil tools, and in many cases they would have wound up with a better solution than we were using at the time.

Downsides


Like all things, fossil isn't perfect. It's relatively new, and still under development. So like many such open source projects, the documentation is a bit on the thin side, and what there is is a bit disorganized.

Some features - especially those mostly useful on larger projects, like support for subrepositories or code review integration - aren't available, or at least not yet.

Releases seem to be a bit haphazard - the user base appears to still be mostly developers, and the ease of installation makes tracking sources an easy way to stay current.

The lack of a rebase facility - among other things - may alienate some users, but this is getting into a matter of taste - exactly what features do you want in your VCS?

It's not clear that all of the features in fossil should be bundled in one program - modularity is generally a desirable property. On the other hand, wikis and tickets are generally version-controlled, and using the same VCS for them as for source has a certain attraction to it.

Finally, googling for help with fossil tends to produce a lot of hits about bones rather than VCS software.

Summary


If you're running a project large enough that changing SCMs is itself a project, then fossil probably isn't for you. On the other hand, if you're working on a small project that doesn't have a wiki or issue tracking system and could use one, and you have a system that you can host binaries on (or you can talk the provider into installing the fossil package for use in your CGI scripts), then fossil is well worth taking a look at: setting up the fossil server is easier than setting up just the VCS server for most VCSs, and it gets you a wiki and issue tracking system in the same step.

Sunday, February 13, 2011

Did I just shut down the oldest site on the web?


Yesterday morning I turned off a site I believe had a legitimate claim of being the oldest active site on the web. By "site", I don't mean the URL, or the software that it was running, or the hardware it was running on, but the actual site files - the HTML and related data that define what the site looks like when you visit it. Let me explain...

I first set up my personal web site in late 1992 or early 1993. At that time, the web was small enough that there were still people maintaining - manually, no less - lists of every site on the web. A fast connection for your home computer was a 9600 baud modem, though a few lucky people had 14,400 baud modems. DSL existed, but wasn't generally available - anyone with something faster than a phone line was more likely to have ISDN or Frame Relay, costing hundreds of dollars a month.

So the site www.phone.net launched, served by a port of NCSA httpd running on an Amiga 3000 at the end of a 9600 baud SLIP line. Since then, it's been hosted at an ISP twice while I moved, the server software has changed three times - including a high-performance (for the time) Amiga-specific server I wrote for the purpose - it's been through four different internet connection technologies, the hardware has been upgraded at least four times (I lost count - it's been most of two decades!), it's moved four times to three different states and both coasts, and the domain has changed to www.mired.org.

During that time, the HTML for the site has remained largely unchanged. The most significant change was in 1994, when I started consulting and the site was reorganized to allow sections for the consulting business and individual users. Later, after XML was created and a new version of HTML based on that standard (instead of SGML) appeared, the HTML on the site was tweaked to conform to XHTML rather than SGML with arbitrary, browser-dependent restrictions. In neither case did the basic site design or appearance change.

So - I just shut down an 18 year old web site. That's older than most of the URLs on the web, much less the software and hardware serving them. Anyone know of another site whose design has been neglected for that long that still remains active?

Saturday, February 12, 2011

Repository moved to Google Code.

For those of you wondering about the changed icon in the sidebar - and those of you who missed it - the blog source repository has moved from BitBucket to Google Code. No particular reason, other than it's easy and it's one less password to worry about.

If you already have a clone of the repository checked out, you can switch to the new one by editing .hg/hgrc in the root of the clone, and changing the value of the default parameter from the bitbucket url to https://mired-in-code.googlecode.com/hg/. This was cloned from the original, so a pull now should only bring in updates if you were out of sync with the repository.
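The relevant section of .hg/hgrc should wind up reading:

[paths]
default = https://mired-in-code.googlecode.com/hg/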

Friday, February 4, 2011

Programming aspects of configuring universal remotes

Let's be clear - by "remote", I mean the handheld plastic (usually) devices that come with pretty much every bit of A/V gear you can buy now (my last car stereo came with one) to control the gear from across the room using some kind of IR code. At first glance, there wouldn't seem to be much programming involved with these things, but they can be surprisingly deep. So let's look at the different types, as more and more of the skills programmers need get involved in getting them configured.

Universal remotes

All these do is replicate - or try to replicate - the buttons from existing remotes, letting you switch between the types being emulated. These are the very simplest universal remotes, and - with a few exceptions - offer nothing that feels like programming.

The exceptions are the JP1 remotes. From the outside, these are indistinguishable from other remotes in this group. Inside the battery compartment - or possibly inside the remote once you open it up - you find a JP1 header, which can be used to reprogram the remote from scratch. Beyond assigning commands to buttons, you can load new executable code for the microcontroller in the remote. That makes these the most programmable of all remotes - but they also require the most programming skill to program, with the level varying depending on how fancy you want to get.

Macro remotes

These are just universal remotes with a couple of extra buttons added that can be programmed to replay a sequence of buttons - a macro. The standard usage examples are to turn on all your devices at once, or to turn them all off. Programming them usually involves a finger dance on the buttons of the remote, with little or no feedback as to the correctness of what you're doing. While setting these up isn't quite the mental work of real programming, the blind finger dance makes setting these up properly some of the most difficult programming I've ever done, because everything else has much better tools.

Some versions will allow macros to be placed on more than just macro keys, in extreme cases reserving only a few meta buttons like the one that starts the macro programming process.

Some JP1 remotes are in this category.

Learning remotes

These add another set of buttons, which can learn IR commands from other remotes. This allows the user to add device commands that weren't in the remote's library, or even add entire devices that weren't in the library, making them much closer to "universal" than devices without this feature.

The programming techniques tend to be the same as the macro remotes. Most of them allow learning to most buttons. Some older learning remotes didn't include libraries of other remotes, but any you can find today should have such a library. Most of them also have some sort of macro facility. Again, some of the remotes in this category are JP1 remotes.

Some of these remotes support - probably unintentionally - a facility known as "mini macros". These are just multiple IR commands learned as a single command. At least one recent remote has gone so far as to make learning IR commands the macro facility, which makes programming macros more painful than most, and limits macros to just issuing IR commands, not controlling the state of the remote.

These begin to present more interesting programming problems - at least from the UI standpoint. While the better ones have soft buttons - say, beside an LCD - that let you change the button label, or even a touch screen so you can change them all, buttons are typically in short supply. So when programming them, you need to figure out how to assign functionality to existing buttons in a manner you'll remember when you are using the device.

Device-mode remotes

The defining feature of all the categories up to this point is that they are about remote buttons. To issue a command to a device, you have to have it on a button. The commands you can issue come from buttons on an original device remote.

The thing is, most A/V gear made these days has many more commands than you can get to from the remote. For example, whereas your remote may have a command to toggle power on and off, the device probably also understands commands to explicitly turn the power on and off. Likewise, where the remote has a single button to go to "the next input" on an A/V receiver or TV set, there are probably commands to go directly to a specific input. Ditto for pretty much any feature with multiple selections a device might have. Remotes typically don't need these buttons, since if you're sitting in front of the device you can see which state it goes to, and stop when you get to the right one.

If you're programming a macro, you don't have that feedback, which makes the direct commands much more useful. Programming a "power everything off" macro with just the power toggle commands risks turning things back on. Having real power off commands solves that. Writing a macro to power on the TV and DVD player and select the DVD player is much easier if you have a command to select the DVD input on the TV instead of just a command to go to the next input, which requires knowing which input is currently selected on the TV to get right.

What gives them their name is that they distinguish between device modes, which exist to hold device commands, and use modes, which are the only ones the user will normally use once the remote has been properly programmed.

The device modes will have all the commands a device understands (which means these are the only universal remotes I've encountered where I really can throw the original remotes in a drawer and forget them). Typically, a device mode will have all of the commands for the family of devices as of the time the remote was released or last updated, meaning it probably has a lot of commands that don't even make sense for your device, like switching to non-existent inputs. Given that large number of commands, you'll not be surprised to hear that all these remotes have soft buttons so they can display those commands. Device modes are also the modes that can learn new IR commands (which are, of course, associated with the device). These act as a library of commands available to the programmer when they start programming buttons for the use modes.

The only such remotes you're likely to have heard of are those in the Logitech Harmony line. The rest are noticeably more expensive, and typically sold to professional installers who will charge for programming them. The macro languages in many of them (but not the Harmony) have grown into real programming languages, with variables, conditional statements, loops, subroutines, and all the other paraphernalia beloved of programmers. The company selling them may only be willing to sell you one if you've got an employee who's attended and passed their class on programming them.

The Harmony remotes manage to provide an amazing amount of flexibility with just macros. Part of this comes because the software for configuring them knows about A/V systems: that you need a source for the signal, which may go through one or more switches, and finally to output devices for audio and/or video. So the first step in setting up a use page (which the Harmony remotes call Activities) is to tell it what devices are used, and how they are connected. This provides basic on and off macros that run when you start or change activities on the remote. You can customize each of these with further commands to any device participating in the activity. Each activity has its own set of macros, which can play device commands but not other macros. Then each button can be assigned either a device command or a macro.

A DRY Remote

My problem with the Harmony - and every other remote I've run into - is that you wind up repeating yourself for different use pages. For example, if you'd like the play button to turn off the lights for all your media players, each of those needs to be set up individually. You can't set up a macro that turns off the light and then runs the local play command, whatever that might be.

I think a don't-repeat-yourself solution to this that's usable by non-programmers is possible if you do what Harmony did and leverage the fact that all usable configurations are similar. Start by setting up standard roles for the devices in an activity - source, audio amp, video display. Now provide a global macro facility that can use either those roles or real devices (for light controllers, etc.). These macros should simply skip any devices or commands that aren't available. This does depend on the various devices that can fill a role sharing command names - at least for the common commands. But that seems to be the case for most devices.

Science, in real life and as depicted on TV

I've been lucky enough during my career to work with not one, but two, world class scientists. My work usually involved the coding required to implement their ideas. I was recently reminded of how scientists were depicted in the media when I was a child, and it's interesting to compare how they are depicted now - at least in the shows that do a good job - with what they're like in real life.

Professor Richard V. Andree - better known for his work on promoting math and computer science education after he retired - was a computer scientist, mathematician and cryptanalyst. As an undergrad, I worked with him on a number of projects in several fields, as well as taking courses from him. The courses had a variety of titles and subjects, but what he really taught - in all of them - was computer-assisted problem solving. I learned a great deal about problem solving from him, and could write about it for pages. I want to mention one lesson - summarized by the quote "It's much easier to prove something after you know it's true"  - because that resonates with something much more recent in my life.

During the past five years or so, I've become enamored of a type of TV show I call the not-a-cop show. They all have the same theme: some expert in a field other than law enforcement winds up partnered with someone in law enforcement to help them solve crimes. There were probably a half-dozen or more on at any given time during the last five years.

What I recall as the first of them - and in my opinion still the best, though no longer in production - was Numb3rs. It involved an FBI agent dragging his mathematical genius brother Charlie into cases to do, well, math. One reason it struck me was that - at least during the early seasons - they explained the math in terms a layman could understand. Even better, the math was real. In some cases, it was work I was already familiar with. In others it wasn't, but if I checked it was real. I didn't check all such cases, but every one I did check was real.

Another reason it struck me was that I could see the lessons I learned from Dr. Andree being applied by the characters in the show. For instance, once Charlie knew something was right, he stuck with it. If the answers came out wrong, he didn't start over - he went looking for the mistake in the work he had done. Charlie has clearly taken the quote from Dr. Andree above to heart.

After I graduated, I worked with Dr. Dwight Pfenning, one of the world's foremost experts in flammable fuels forensics. The work done by this group illustrates how well the not-a-cop show Bones has captured what it's like to do forensic science. Yes, there are differences - since it involves a forensic anthropologist, the Bones group usually has a body, and their problems are figuring out how to extract all the parts of it without destroying any evidence. In dealing with flammable fuels, we mostly dealt with civil and not criminal cases, and in those rare cases where there was a victim, they were maimed instead of dead. More importantly, our material evidence had probably burned or blown up. So instead of trying to preserve it, we were trying to recreate it. Fortunately, we usually had architectural plans and engineering designs for the critical pieces. While in Bones they occasionally got to try and recreate some aspect of the event, that was our normal mode of operation. Recreate whatever had burned or blown up, instrument it, then burn or blow up the recreation to recreate the results of the incident under investigation - and then describe them in detail because we had instrumented things. Where the two overlap, the show feels right. Watching the characters on Bones try and figure out how to measure something reminds me of going through that same process with our tools.

A recent episode of the not-a-cop show The Mentalist had a "scientist" in a guest role. Unlike the previous two shows, the protagonist of The Mentalist is not a scientist. He's, well, a mentalist. He uses his keen observational skills and deep understanding of human nature to unmask the villain, or get them to unmask themselves. The "scientist" depicted in this episode is typical of media depictions of scientists from my childhood, with a "the numbers will show the truth" attitude that science as a whole dropped with the uncertainty principle. Worse yet, she doesn't "do the science." She takes evidence at face value without testing it against her hypothesis, doesn't compare values between alternatives to decide if a conclusion is correct, etc.

It's surprising how right the depiction of scientists at work in the better shows feels when compared to real scientists at work. Especially when I look at how most shows got it so wrong until relatively recently.