One thing I usually forget to do when I back up a computer is back up my MySQL databases. Partly because they’re not stored in my Library (I don’t think), and partly because I forget how many I have. mysqldump only backs up one database at a time, unfortunately. What would be great is something that dumps all of the databases on the system.

Anyhow, whilst on hold to my ISP this morning, I decided to solve this problem once and for all.

The end result is a pair of Ruby scripts which you can get from github.

The first will iterate over every db on your system (when run with an appropriate username and password) and spit out a .sql file with a filename corresponding to that database. The second looks at a folder of similarly named .sql files and, for each one, drops the database with that name, re-creates it, and restores it from the .sql file.

I’m sure I could do it just fine in a bash script, but it made sense to use the tool that comes most quickly to my hands, and that means Ruby. Once you’ve got Ruby installed, the rest is easy. Clone them, patch them, fix them; they’re basic, as maintenance goes, but handy.
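
In spirit, the dump script is little more than this (a simplified sketch rather than the real thing: it assumes the MySQL command-line tools are on your PATH, and takes the username and password as arguments):

#!/usr/bin/env ruby
username, password = ARGV

# Ask MySQL for every database it knows about...
databases = `mysql -u#{username} -p#{password} --skip-column-names -e "show databases"`.split("\n")

# ...and dump each one to a .sql file named after the database.
databases.each do |db|
  next if db == "information_schema"
  system("mysqldump -u#{username} -p#{password} #{db} > #{db}.sql")
end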

Get the scripts from github.

A while back I mentioned that the iPhone App Store was a place where we could see people paying for interface alone, regardless of functionality.

This is a useful segue into Daniel Jalkut’s commentary on the nature of independent software development, and, specifically, whether small-software should be free-as-in-beer software. Jalkut makes the point, as an independent developer, that you should support the software you like, regardless of how slight it is. The example he refers to is Pukka, a nice little tool for posting to delicious from OSX. Pukka is nice because it’s always available and it’s very Mac-like in its behaviour. It’s pretty cheap at $12.95.

Jalkut takes exception to Leo Laporte’s commentary in a MacBreak Weekly podcast, where he suggests (as he tells us how much he loves Pukka) that it should be free.

Why did he suggest this? The answer, simply, is that Pukka is an interface to someone else’s functionality rather than a tool in its own right.

To wit: Pukka interfaces to delicious through the delicious API. Most of the hard work of social bookmarking has already been done by the delicious team. All Pukka does is talk to the API – it’s a menubar item, an interface, and a window that sends data to the API. Not a product on its own. Of course, if you know anything about development, you’ll know that building things that talk to APIs – on the desktop, on the web, wherever – isn’t always as easy as it sounds. Pukka’s price seems entirely reasonable for an app that does this well, especially if you use delicious as much as (say) I do.

Jalkut’s own MarsEdit (which I’m using a licensed copy of to write this) is similar. It’s a $29.95 weblog editor that interfaces with most popular blogging systems and lets me write posts on my Mac desktop. It’s not that I couldn’t write blogposts before; I can always log into WordPress to do that. No, the reason I bought it is convenience and quality. I much prefer posting from this fluid, offline interface to typing into a box in Safari, for various reasons: the quality and speed of preview, the simplicity of media integration, and the multi-blog (and API) support – I use MarsEdit to post to both WordPress and LiveJournal. If I couldn’t spare $30, I could always just blog from the existing admin screens, but I felt the product was so good I should buy it.

Sometimes, it’s hard to express to people the value of a product that does something you could already do. A product that does something new, or which is an essential tool, is much easier to justify. Many Mac owners I know didn’t hesitate to pay the €39 for TextMate, because text editing is so fundamental to our work. But $30 on a blogposting client? That one requires more thought, and isn’t such a no-brainer.

I’m not sure what the solution is. It’s a shame that it’s harder to express the value of “service” applications; I think the iPhone might be better off here, simply because the device itself is so unlike traditional clients that it makes sense to redesign interfaces to services for it. In the meantime, it’s worth remembering that a quality interface to an existing product might still be worth something, however small, and it’s for that reason that developers like Jalkut should be rewarded for their work.

I recently worked with Matt Webb on a proof-of-concept for a new interaction pattern for web applications, that we’ve nicknamed Snap. Matt demonstrated this pattern in his closing keynote at Web Directions North. Matt’s presentation, entitled “Movement”, is now online, as is a longer explanation of the Snap pattern at the Schulze & Webb blog.

Given Matt’s side of things is now online, it seemed only right that I share my side of the story.

We’re demonstrating a concept that’s previously been referred to as RSS-I – “RSS for Interaction”. This is an idea Matt mentioned in his ETech 2007 keynote, From Pixels to Plastic, and also in a presentation from Barcamp London in 2006. Here’s Cory Doctorow writing about the first mentions of the idea. Matt’s new name for this pattern is a bit catchier: Snap, which stands for “Syndicated Next Action Pattern”.

If you’ve read those links, you’ll already have a picture of the interaction pattern. If you’re lazy, and haven’t read them, here it is in a nutshell: what if RSS feeds could prompt you not only with updated and new content, but also with actions that need to be performed?

This is the kind of thing best explained with a demonstration. And so Matt asked me to build a small application – a to-do list program – to demonstrate Snap at WDN. Our application isn’t anything fancy, and it won’t replace your GTD app of choice just yet, but it does demonstrate some of the interactions that Snap affords rather neatly.

You can watch a short screencast of the application here (The application is called “Dentrassi”. For more on that, see this footnote).

In the application, a user can add todo-list items, set their priority, and “tag” them as belonging to a project. There are several listing views of the data in the application. The inbox shows all items in progress that don’t belong to a project (ie: aren’t tagged). There are list views for each tag, and also for items that have been deferred to the future. So far, so good.

All of this data is also available as Atom feeds. The Atom feeds present the same information as the website, with one neat difference: at the bottom of every item, there’s a form embedded. And in that form, you can do everything you can do to the item on the site: defer it, tag it, complete it, or trash it.
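
To give a rough idea of the shape of the thing, an entry looks something like this (an illustrative fragment rather than Dentrassi’s actual output – the URLs and field names are invented, and the hidden from_feed field will come up again in a moment):

<entry>
  <title>Buy milk</title>
  <id>tag:example.com,2008:todo/42</id>
  <updated>2008-02-12T15:29:00Z</updated>
  <content type="xhtml">
    <div xmlns="http://www.w3.org/1999/xhtml">
      <p>Buy milk (inbox)</p>
      <form method="post" action="http://localhost:3000/todos/42/complete">
        <input type="hidden" name="from_feed" value="1"/>
        <input type="submit" value="Complete"/>
      </form>
    </div>
  </content>
</entry>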

So not only can you read all the data you’d normally see on the site, you can also interact with it, without leaving your feed reader. When you complete a successful interaction, a big tick appears.

The big tick was something we stumbled upon whilst we were making Dentrassi. If you’re on the web-app side of Dentrassi, and you mark an action completed, you get a typical Rails-style “flash message” letting you know what’s happened. This was also the case in the feed, to begin with – you’d post the form, and then the web page would render inside the feedreader’s viewport. Which is OK, but not great. Then we hit upon the idea of treating requests from feedreaders and browsers differently. There’s no magic user-agent-sniffing – the RSS feeds have an extra hidden field, that’s all. When that field is set, you get a big tick (or a big cross, if you try to work on stale data). You can see in the video that Matt’s added a really simple “add another task” link to the “big tick” page in certain states, to speed up task entry. Once the big tick was in place, it started to feel like I was actually making a thing, rather than a hack.
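
In Rails terms, the controller action behind that form ends up looking something like this (a sketch only: Todo, big_tick and from_feed are hypothetical names standing in for whatever the real ones are):

def complete
  @todo = Todo.find(params[:id])
  @todo.update_attribute(:completed, true)

  if params[:from_feed]
    # The request came from the form embedded in the feed,
    # so render the minimal "big tick" page...
    render :action => "big_tick"
  else
    # ...otherwise behave like a normal web app: flash and redirect.
    flash[:notice] = "Task completed."
    redirect_to todos_path
  end
end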

There’s also an extra feed, which we’ve called the admin feed. This only ever has two items: a generic message telling you the state of the system – how many things are in it, how many are completed – and a form that lets you create a brand-new todo. From your RSS reader.

That’s it. It’s not very sophisticated, but it demonstrates the interaction Matt’s described pretty well: the syndication of interaction, rather than content.

What’s the future for this kind of thing? I don’t know. “Enclosures for interactions” was the best way I could describe one future I’d like for it: the idea that endpoints for interactions could be specified just as we currently specify things like referenced media files; then the user interface for Snap is down to the tool, rather than the feed itself. That’s easily the most exciting future, but it requires standards, and toolmaker support, and people like Tim or Sam to be onboard (or whoever the Tim and Sam of Snap might be), and all that takes time.

(And: when you can let the agent define the interface, what interfaces you could build! I suggested pedals – I can have my yes/no tasks up in a window and rattle through them with my feet whilst I’m reading, or writing email, or whatever, just like foot-controlled dictation machines. Because Snap emphasises, rather than obscures, the natural flow state we get into when we’re working our way down a list, it generates a sense of immediacy around the simple action of “doing tasks”. The forms can be contextual to the actions in question – complete/wontfix, yes/no, attend/watch – whilst the actual interaction the user performs remains the same.)

Snap also demands different kinds of RSS readers. Traditionally, readers cache all information, meaning as items “fall out” of the feed they remain within your feed reader. But we don’t want that; we’d like items that fall out to disappear. A Snap feedreader should be an exact mirror of all the atom feeds available to it, not a partial mirror.

That’s precisely the opposite behaviour of existing, content-oriented feedreaders. Right now, most of what we’ve shown is a little bit of a hack: we’re relying on the fact that, for whatever reason, you can use <form> elements in an Atom feed, we’re relying on this being a local application, for a single user, and we’re relying on it working on a very limited number of user agents (I’ve only tested NetNewsWire and Vienna so far). There’s a way to go before full-scale RSS-I is possible – but there’s nothing to stop people taking a simple, hacky approach right now.

And so that’s what we did. Because a simple, hacky approach that exists beats any amount of RFC-drafting and hypothesising. The most valuable thing we have to show for this so far is that it works.

How it works doesn’t really matter – which is why you’re almost certainly never going to be able to download the source code for this application. The code really isn’t important; it’s not bad at all, but to distribute it would be to miss the point. What we’re trying to demonstrate with this is a particular interaction, and that can be demonstrated through narratives, screengrabs, and screencasts.

That’s all there is to say; Matt’s longer post on his company blog encompasses everything I’ve not mentioned here (and a few things I have), and as such, should be viewed as a companion piece. It’ll be interesting to see what happens from here – how, as things like Action Streams take hold, patterns like Snap have a growing place in the web ecology. It’ll also be interesting to see what happens with, say, standards for these kinds of things – enclosures and the like – and how the tool manufacturers react. All in all, it was a fun project to work on, and I hope other people find the interaction as exciting as Matt and I do.

(Matt mentions that I nicknamed that application “Dentrassi”. I find it useful to have names for things; when I’m sitting at ~/Sites/ and am about to type rails foo to kick off a new project, it’s nice to have something – anything – to call the application. I thought about DEmonstrating RSSI, and the only word in my head that had things in the right order was DEntRaSSI. The Dentrassi, for reference, are an alien race from the Hitch-Hiker’s Guide to the Galaxy. I’m not a Douglas Adams nut, or anything – it was just the only word in my head at the time. So rails dentrassi it was, and off we went.)

A pet peeve of mine is the lack of a documented shortcut in Ruby’s #strftime function to return the hour of the day, on a twelve-hour clock, without a leading zero. To wit:

puts Time.now.strftime("%I:%M") # >> 03:29

That’s not particularly attractive. I could strip the leading zero with some string manipulation, but that’s a sledgehammer to crack a nut. Fortunately, this works:

puts Time.now.strftime("%l:%M") # >> 3:29

That’s a lowercase L in the formatting string, which returns the hour on a twelve-hour clock sans leading zero. Result! And yes, it’s undocumented everywhere I’ve looked. Thanks to my colleague Colin for pointing that trick out.

Now, if only I could get it to return am/pm without having to call #downcase…
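
In the meantime, the workaround is the obvious one (downcasing the whole formatted string, since %p insists on uppercase):

puts Time.now.strftime("%l:%M%p").downcase # >> 3:29pm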

Connecting Rails to legacy databases isn’t actually that hard – depending on the database you start out with. Recently, we needed to perform some statistical analysis on a large Movable Type database. Rather than wrestling with endless SQL queries at the prompt, it made sense to abstract out a little and build some kind of modelled front end to the statistics.

The most obvious tool for me to use was Rails; I’m familiar with it, I like the way ActiveRecord works, and it means that I can poke around the database from script/console if I need to.

The reason this turned out not to be too hard is because whilst Movable Type doesn’t conform to Rails’ opinionated ideas of what a schema should look like, it is still a well-designed and normalised database. Because of this, we can teach ActiveRecord to understand the database.

First of all, we start by creating our models: for our needs, Blog, Comment, Entry and Author. We generate them in the usual manner – script/generate model blog, and so on. Once we’ve done that, we delete the migration files it creates in db/migrate, because we’re not going to use them.

Next, we point config/database.yml to the Movable Type database.
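
That’s just the usual database.yml stanza, pointed at the Movable Type database rather than one Rails created (the connection details here are placeholders):

development:
  adapter: mysql
  database: movable_type
  username: mt_reader
  password: secret
  host: localhost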

And then, we build our relationships thus:

class Blog < ActiveRecord::Base
  set_table_name "mt_blog"
  set_primary_key "blog_id"

  has_many :entries, :foreign_key => "entry_blog_id", :order => "entry_created_on"
end

class Entry < ActiveRecord::Base
  set_table_name "mt_entry"
  set_primary_key "entry_id"

  has_many :comments, :foreign_key => "comment_entry_id"
  belongs_to :blog, :foreign_key => "entry_blog_id"
  belongs_to :author, :foreign_key => "entry_author_id"
end

class Comment < ActiveRecord::Base
  set_table_name "mt_comment"
  set_primary_key "comment_id"

  belongs_to :entry, :foreign_key => "comment_entry_id"
end


class Author < ActiveRecord::Base
  set_table_name "mt_author"
  set_primary_key "author_id"

  has_many :entries, :foreign_key => "entry_author_id"
end

The set_table_name method tells the ActiveRecord class what table to look at, and the set_primary_key method does exactly what it says on the tin. (It also makes sure that #to_param works correctly based on whatever our new primary key is, which is handy). Beyond that, we simply have to specify the foreign keys on our relationships and everything plays ball; we can now access blog.entries just as we do with a typical Rails setup. It’s now easy to write the rest of our Rails app – model methods, controllers, views – just as we normally would. We’re just using the MT database to pull out our data.
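
For instance, a quick poke around in script/console reads just as it would against a native Rails schema (the blog id here is arbitrary):

blog = Blog.find(3)                 # looks up mt_blog by blog_id
blog.entries.size                   # entries joined via entry_blog_id
blog.entries.first.comments.size    # comments joined via comment_entry_id
blog.entries.first.author           # the Author row joined via entry_author_id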

And if you’re wondering: yes, it made the manipulation a lot easier, and a few hours poking at the console began to yield some interesting algorithms to apply.

Well over a year ago, I mentioned that I was working on a Velocity bundle for TextMate. Or, to be more precise: I mentioned that I’d already written one that we were using at NPG.

A year later, I’m ready to release the bundle; you can get it from its Google Code site. But before you go there, an explanation for the delay is in order – and on the way, I’ll tell you about how the bundle was written.

So, this blog (for its sins) is running on WordPress 2.0.5. That’s a bit out-of-date. The main reason is that it has all sorts of jiggery-pokery to make it work the way I want – a tagging solution based on Jerome’s Keywords that was modified when I moved to 2.0, and all sorts of template hacking to make the beautiful breadcrumb trail you see at the top work.

I’ve resisted upgrading due to the hell that was hacking plugins and templates into each new version of WordPress. Until now, that is. WordPress 2.3 (finally) introduces a proper tagging solution – entirely separate from the “categories” system. Well, not quite, as we’ll see – but it finally means that the architecture of Infovore.org is now entirely possible within WordPress itself.

Of course, now you’ve got to convert your custom tagging solution to the new schema. I’ve written a small script to do this for myself – it only took about an hour, and that’s mainly because I was exploring the schema, and my PHP is a little rusty. As a result, I now know a reasonable amount about how tagging is implemented in WordPress 2.3, and felt I should write it up properly, so that anybody else converting a custom tagging solution might save themselves some time.

It’s fairly common practice in jQuery to bind events to a quick anonymous function that performs the action you desire. But how do you bind an event to a non-anonymous function – something that you’ve already extracted into a function so as to avoid repetition? I learned this one the hard way, and thought it was worth sharing.

I was working on some jQuery to apply to some legacy HTML at work, and was fiddling around to see if our goal was possible.

I was working on a “springy” archive menu. There are some headers and some lists; the lists are hidden on pageload, and when you click on a header, the subsequent list (ie the archive for that year) opens. Many lists can be open or closed at once. In addition, we toggle a class on the header (rather than the header link, which is there purely out of legacy code for now) to change an image, indicating that it’s open.

Here’s my proof of concept code – my first working version to demonstrate how this functionality would work.

$(document).ready(function() {
	var headlinks = $("div.container h3 a");

	for (var i=0; i < headlinks.length; i++) {
		$(headlinks[i]).bind("click", function() {
			$(this).parent().next().toggle();
			$(this).parent().toggleClass("open");
		});              
		$(headlinks[i]).parent().next().toggle();
		$(headlinks[i]).parent().toggleClass("open");
	};

	$(headlinks[0]).parent().next().toggle();
	$(headlinks[0]).parent().toggleClass("open");
});

Now that we've got the code working, it's time to refactor. There's a lot of repetition going on - three instances of that "toggle visibility and toggle class" behaviour - so let's pull that out into a function:

function toggleArchiveLinks(element) {
	$(element).parent().next().toggle();
	$(element).parent().toggleClass("open");
	return false;
}

Much better. Unfortunately, we can't just drop a call to our new function into that bind directive. I initially thought you could bind "click" to toggleArchiveLinks(this). But that doesn't work: the call is evaluated immediately, and when you pass a named function as a handler, what gets passed into it is the event object itself, not the element. (It works fine in the anonymous function because this is scoped to the clicked element there.)

It's a bit ugly, though, to refactor some but not all of the code. After looking at the jQuery docs for bind, it turns out you can pass an extra argument between the event type and the handler: a data object, which is made available to the handler function. That means we can pass information about the element we want to toggle through to a named handler. We write our new bind directive like this:

$(headlinks[i]).bind("click", {element: headlinks[i]}, handleToggleEvent);

That object in the middle will be made available to a new function, handleToggleEvent. (We could, of course, pass as many key/value pairs as we wanted to the function.) We also need to write handleToggleEvent. That function looks like this:

function handleToggleEvent(event) {
	toggleArchiveLinks(event.data.element);
	event.preventDefault();
}

The function accepts an event as a parameter, and the object/hash from our bind statement is available as event.data. We're then free to call toggleArchiveLinks on the element of our choosing. Finally, we have to call event.preventDefault to stop the browser carrying out the link's default action. If we don't do this, the bound behaviour will happen, and then the link will click through as normal. The return false; inside toggleArchiveLinks won't do the job here, because its return value never makes it back out of the event handler.

So we've now managed to refactor some repetitive code and call it from a bind statement. Our final jQuery script looks like this:

$(document).ready(function() {
	var headlinks = $("div.container h3 a");

	for (var i=0; i < headlinks.length; i++) {
		$(headlinks[i]).bind("click", {element: headlinks[i]}, handleToggleEvent);                
		toggleArchiveLinks(headlinks[i]);
	};
	toggleArchiveLinks(headlinks[0]);
});

function toggleArchiveLinks(element) {
	$(element).parent().next().toggle();
	$(element).parent().toggleClass("open");
	return false;
}

function handleToggleEvent(event) {
	toggleArchiveLinks(event.data.element);
	event.preventDefault();
}

Which is much better, I think.

I’ve just had my first patch accepted on an open source project. Quite chuffed with that! As of this weekend, the Rails calendar_helper plugin is now at version 0.21. My changes are very minimal, and only really to do with the markup.

Firstly, the default table markup’s had an overhaul. The date now goes into a <caption> tag, outside the <thead>, as is appropriate. The <th>s in the thead now have scope="col" applied to them, again, as is appropriate.

The only other change is optional. If you pass in an option of :accessible => true, any dates in the grid which fall outside the current month will have <span class="hidden"> monthName</span> appended to them. It could reasonably be inferred that plain numbers are dates that relate to the caption of the table, but the numbers outside the current month should probably be clarified.
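
In a view, that just means passing the extra option alongside the usual ones (the year and month here are arbitrary):

<%= calendar(:year => 2007, :month => 9, :accessible => true) %>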

You can come up with your own method of hiding content marked as .hidden; at NPG, we use the following:

.hidden {
	position:absolute;
 	left:0px;
 	top:-500px;
 	width:1px;
 	height:1px;
 	overflow:hidden;
}

but really, use whatever you’re most comfortable with.

You can get the plugin from Geoffrey Grosenbach’s subversion:

http://topfunky.net/svn/plugins/calendar_helper/

via the usual Rails plugin installation method.

There comes a point in every developer’s life when you realise the problem isn’t your work, but the tools you’ve got to hand. Toolsmithery is an important part of the job, and so I spent a few hours yesterday crafting a tool useful to any front-end developer.

The result is the CSS Redundancy Checker.

When you’re writing HTML, your CSS files fill up over time. If you’re working on a large project, you might even end up with several people contributing to the CSS file, not to mention refactoring each other’s work. The result is a directory full of HTML files, and a very large CSS file.

What tends to happen is that not every selector in the CSS file actually applies to your HTML; many are rendered redundant by refactoring, or by changes in the HTML. But when you’ve got a 70k+ CSS file, it’s not easy to check precisely which selectors aren’t in use any more.

Enter the CSS Redundancy Checker. It’s a very simple tool, really. You pass in a single CSS file, and either a directory of HTML files, or a .txt file listing URLs (one to a line). It then looks at each file in turn and, at the end, lists all the selectors in your CSS file that aren’t used by any of the HTML files.

That’s it. I’m pretty sure it’s accurate, and it should work with most CSS files. Most of the magic isn’t down to me, but down to _why the lucky stiff’s marvellous Hpricot HTML parser. The script itself is about 50 lines of reasonably tidy Ruby. You’ll need Ruby, and Hpricot, in order to run it. There’s fuller documentation over at the Google Code site where I’m hosting it. Please add any issues there, or get in touch if you want to contribute.
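
The core of the approach amounts to something like this (a simplified sketch rather than the actual script, and it assumes your selectors are simple enough for Hpricot’s CSS search to understand):

require 'rubygems'
require 'hpricot'

css_file, html_dir = ARGV

# Pull every selector out of the stylesheet, splitting grouped selectors.
selectors = File.read(css_file).scan(/([^{}]+)\{/).flatten
selectors = selectors.map { |s| s.split(",") }.flatten.map { |s| s.strip }.uniq

# Parse each HTML file once with Hpricot.
documents = Dir[File.join(html_dir, "**", "*.html")].map { |f| Hpricot(File.read(f)) }

# A selector is redundant if no document contains an element matching it.
redundant = selectors.select do |selector|
  documents.all? { |doc| doc.search(selector).empty? }
end

puts redundant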

Things it doesn’t do: list the line numbers where unused selectors are defined. I wrote that functionality on the train this morning, but I can’t find a way to make it 100% accurate, so thought it best to leave it out – inaccurate line numbers are of no use to anyone. If you can come up with a way that works, let me know. Also, at some point I might turn it into a TextMate command. All in good time, though.

The need for the tool came out of a large project we’re working on at NPG, but I felt it would be useful to pretty much any HTML developer. So I’ve released it to the world. Let me know what you think, and do spread the word. You can get it via svn checkout, for now:

svn checkout http://css-redundancy-checker.googlecode.com/svn/trunk/ css-redundancy-checker