Obama for America, the MMO

07 October 2008

Almost everybody’s linked to the Obama ’08 Official iPhone Application by now. With just cause, too: it’s a nifty, attractive piece of crowdsourced software development that focuses on a single task well suited to mobile – canvassing – and provides the amateur campaigner with the tools to canvass more effectively.

Skimming over the official page, though, I couldn’t help but notice this in the list of features:

See nationwide Obama ’08 Call Friends totals and find out how your call totals compare to leading callers.

So whilst you’re canvassing your friends and recording just how many of them are going to vote for Barack in November, you can also compare yourself to how everybody else using the app is doing. In one click, you’ve got a massively multiplayer high score chart – and of course, you’re going to want to beat everybody else on there, so you go off to canvass some more.

Obama for America just launched an MMO, and nobody noticed.

Jens Alfke writes about the beauty of the $0.99 iPhone Application. I think he makes a reasonable point: when somebody else is taking care of a lot of the overheads of both distribution and payment processing, there are no compelling negatives to developing micro-priced software applications.


What kind of application are you going to sell for $0.99? It doesn’t seem like a lot of revenue for something that you might call “useful” – but I don’t think there’s a lasting market for one-dollar “toy” applications.

I think the interesting market that opens up when you apply this kind of thinking to a particular platform – namely, the iPhone/Touch interface – is the market for selling interface.

A good way of explaining this is with a currency or unit converter application (bear with me).

As it stands, if you want to convert from, say, imperial weight to metric, you can just fire up the iPhone’s built in Calculator application and bang out some arithmetic – as long as you can remember the conversion ratio.

If you don’t know the conversion ratio, you can go online – for free, because you’ve got an airtime contract, or pervasive wifi – and look it up, perhaps even finding a nifty Javascript conversion tool. But that’s a long way round for such a simple task.

Or, I can sell you my $2 weights-and-measures converter. It doesn’t do anything fancy, because we’ve already established that the maths isn’t beyond any potential iPhone user. So the single thing I can sell you on – and the single reason you’d buy my app over the long way around described above – is its interface. How easy is it to use? How satisfying? Does it simplify a complex task?

The iPhone exposes “interface” as an obvious criterion for purchasing things. Because the interface is so hands-on, so direct, users can easily spot when an interface stinks, or when it’s as easy to use as Apple’s own applications. Given that: let’s make an application where the interface is the primary feature, and the functionality is essentially trivial.

Here’s what my hypothetical weights-and-measures conversion application might look like. You choose what you want to convert to and from via a pair of drop-down boxes at the top of the screen. The drop-down highlights which things on the right-hand side are similar types of measurement to those on the left – so that when you select “metres” on the left-hand side, the units of length on the right-hand side are highlighted. And then, underneath, are two vertical, gradated rules – much like on a slide rule. Above them is the exact value readout; at the centre of the screen is a red marker line. You flick one “ruler” up and down with your thumb, and the other moves in accordance to display the converted value. You can then read exact values out at the top of the screen. If you want to slide horizontally, tip the phone on its side and the accelerometer will tell the software to rotate the screen.
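The functionality underneath that interface really is trivial – which is rather the point. Here’s a minimal Ruby sketch of the conversion core (the unit names and factor table are my own illustration, not any real app’s): each unit is expressed in terms of a base unit per dimension, and converting is one multiplication and one division.

```ruby
# Convert between units by normalising through a base unit:
# grams for weight, metres for length.
FACTORS = {
  "g"  => 1.0, "kg" => 1000.0, "lb" => 453.59237,  # weight
  "m"  => 1.0, "cm" => 0.01,   "ft" => 0.3048      # length
}

def convert(value, from, to)
  value * FACTORS.fetch(from) / FACTORS.fetch(to)
end
```

All the value is in the two flickable rules on top; the maths is a one-liner.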

The really neat thing isn’t the conversion at all – it’s just two big rules that you can flick about with your finger. But when you’re out shopping and have only got one hand free, maybe that’s exactly the interface you need. The app is a basic, trivial task, that’s enlivened by a useful interface.

Now, obviously I can’t charge you $10 for this. If I asked for $10, most people would either keep guesstimating weight when they go out shopping, or just use the calculator like they’ve always done. But for $2… it becomes much more of an impulse purchase. You’re not purchasing functionality; you’ve got that already. Instead, you’re putting down $2 for the interface I’ve built.

I’m not saying that interface alone is worth a lot, or even that it’s worth $2. Far from it. But taking a task that the user could already do and designing an appropriate, specific interface for it, one that makes it pleasurable and immediate to use – that’s worth more than nothing. $1, $2 – as long as it’s less than a coffee, but more than nothing, that’s fine. That’s a business model. Not a complete one, or one to base an entire company on, but a business model nonetheless.

The iPhone and iPod Touch are devices that thrust their interface and interactions front and centre. They’ve established, within a market full of – in places – terrible interaction design, that it’s OK to pay a premium for devices that work well. People who’ve bought an iPhone or an iPod Touch have already made that premium decision. The iPhone Application Store tells us that it’s OK to pay a smaller sum for software that works well. It doesn’t matter that it’s not premium software, or that the software isn’t sophisticated; what matters is that we can make money from genuine interaction design, rather than a list of features. That feels like another tiny watershed moment.

I recently worked with Matt Webb on a proof-of-concept for a new interaction pattern for web applications, that we’ve nicknamed Snap. Matt demonstrated this pattern in his closing keynote at Web Directions North. Matt’s presentation, entitled “Movement”, is now online, as is a longer explanation of the Snap pattern at the Schulze & Webb blog.

Given Matt’s side of things is now online, it seemed only right that I share my side of the story.

We’re demonstrating a concept that’s previously been referred to as RSS-I – “RSS for Interaction”. This is an idea Matt mentioned in his ETech 2007 keynote, from Pixels to Plastic, and also in a presentation from Barcamp London in 2006. Here’s Cory Doctorow writing about the first mentions of the idea. Matt’s new name for this pattern is a bit catchier: Snap, which stands for “Syndicated Next Action Pattern”.

Those links describe a certain pattern for interaction. If you’re lazy, and haven’t read them, in a nutshell: what if RSS feeds could prompt you not only with updated and new content, but also with actions that need to be performed?

This is the kind of thing best explained with a demonstration. And so Matt asked me to build a small application – a to-do list program – to demonstrate Snap at WDN. Our application isn’t anything fancy, and it won’t replace your GTD app of choice just yet, but it does demonstrate some of the interactions that Snap affords rather neatly.

You can watch a short screencast of the application here (The application is called “Dentrassi”. For more on that, see this footnote).

In the application, a user can add todo-list items to it, set a priority, and “tag” them as belonging to a project. There are several listing views of the data in the application. The inbox shows all items in progress that don’t belong to a project (ie: aren’t tagged). There are list views for each tag, and also for items that have been deferred to the future. So far, so good.

All of this data is also available as Atom feeds. The Atom feeds present the same information as the website, with one neat difference: at the bottom of every item, there’s a form embedded. And in that form, you can do everything you can do to the item on the site: defer it, tag it, complete it, or trash it.
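To make the shape of that concrete, here’s a sketch in Ruby of what such an entry might look like. The element layout follows standard Atom with XHTML content; the ids, URLs, and field names are my own illustration, not Dentrassi’s actual markup:

```ruby
# Sketch: an Atom entry whose XHTML content embeds an action form,
# so a feedreader that renders XHTML can POST straight back to the app.
# The tag URI, endpoint, and field names are made up for illustration.
def snap_entry(item_id, title)
  <<~XML
    <entry>
      <title>#{title}</title>
      <id>tag:example.com,2008:todo/#{item_id}</id>
      <content type="xhtml">
        <div xmlns="http://www.w3.org/1999/xhtml">
          <p>#{title}</p>
          <form method="post" action="http://example.com/todos/#{item_id}">
            <input type="hidden" name="from_feed" value="1"/>
            <input type="submit" name="state" value="complete"/>
            <input type="submit" name="state" value="defer"/>
          </form>
        </div>
      </content>
    </entry>
  XML
end
```

The entry is a perfectly ordinary Atom entry; the form is just part of the content.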

So not only can you read all the data you’d normally see on the site, you can also interact with it, without leaving your feed reader. When you complete a successful interaction, a big tick appears.

The big tick was something we stumbled upon whilst we were making Dentrassi. If you’re on the web-app side of Dentrassi, and you mark an action completed, you get a typical Rails-style “flash message” letting you know what’s happened. This was also the case in the feed, to begin with – you’d post the form, and then the web page would render inside the feedreader’s viewport. Which is OK, but not great. Then we hit upon the idea of treating requests from feedreaders and browsers differently. There’s no magic user-agent-sniffing – the RSS feeds have an extra hidden field, that’s all. When that field is set, you get a big tick (or a big cross, if you try to work on stale data). You can see in the video that Matt’s added a really simple “add another task” link to the “big tick” page in certain states, to speed up task entry. Once the big tick was in place, it started to feel like I was actually making a thing, rather than a hack.
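That switch can be as simple as branching on the hidden field. A rough Ruby sketch of the idea (the field name and response descriptions are illustrative – the real Dentrassi code isn’t published):

```ruby
# Decide how to respond once an action completes: a normal page with a
# flash message for browsers, or a minimal "big tick" page when the POST
# carried the hidden field that only the feed's forms include.
def response_for(params, stale: false)
  if params["from_feed"]
    stale ? "big cross (stale data)" : "big tick"
  else
    "redirect with flash message"
  end
end
```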

There’s also an extra feed, which we’ve called the admin feed. This only ever has two items: a generic message telling you the state of the system – how many things are in it, how many are completed – and a form that lets you create a brand-new todo. From your RSS reader.

That’s it. It’s not very sophisticated, but it demonstrates the interaction Matt’s described pretty well: the syndication of interaction, rather than content.

What’s the future for this kind of thing? I don’t know. “Enclosures for interactions” was the best way I could describe one future I’d like for it: the idea that endpoints for interactions could be specified just as we currently specify things like referenced media files; then the user interface for Snap is down to the tool, rather than the feed itself. That’s easily the most exciting future, but it requires standards, and toolmaker support, and people like Tim or Sam to be onboard (or whoever the Tim and Sam of Snap might be), and all that takes time.

(And: when you can let the agent define the interface, what interfaces you could build! I suggested pedals – I can have my yes/no tasks up in a window and rattle through them with my feet whilst I’m reading, or writing email, or whatever, just like foot-controlled dictation machines. Because Snap emphasises, rather than obscures, the natural flow state we get into when we’re working our way down a list, it generates a sense of immediacy around the simple action of “doing tasks”. The forms can be contextual to the actions in question – complete/wontfix, yes/no, attend/watch – whilst the actual interaction the user performs remains the same.)

Snap also demands different kinds of RSS readers. Traditionally, readers cache everything, so as items “fall out” of the feed they remain within your feed reader. But we don’t want that; we’d like items that fall out to disappear. A Snap feedreader should be an exact mirror of all the Atom feeds available to it, not a partial one.
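The difference is easy to state in code. A content-oriented reader merges each fetch into its cache; a Snap reader would simply replace its local view with whatever the feed currently contains. A sketch, with feeds reduced to lists of item ids:

```ruby
# Content-oriented readers merge: an item seen once is kept forever,
# even after it falls out of the feed.
def merge_cache(cache, feed_items)
  cache | feed_items
end

# A Snap reader mirrors: the local view is exactly the current feed,
# so completed or trashed items disappear.
def mirror(_cache, feed_items)
  feed_items
end
```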

That’s precisely the opposite behaviour of existing, content-oriented feedreaders. Right now, most of what we’ve shown is a little bit of a hack: we’re relying on the fact that, for whatever reason, you can use <form> elements in an Atom feed, we’re relying on this being a local application, for a single user, and we’re relying on it working on a very limited number of user agents (I’ve only tested NetNewsWire and Vienna so far). There’s a way to go before full-scale RSS-I is possible – but there’s nothing to stop people taking a simple, hacky approach right now.

And so that’s what we did. Because a simple, hacky approach that exists beats any amount of RFC-drafting and hypothesising. The most valuable thing we have to show for this so far is that it works.

How it works doesn’t really matter. As such, you’re almost certainly never going to be able to download the source code for this application. The code really isn’t important; it’s not bad at all, but to distribute it would be to miss the point. What we’re trying to demonstrate with this is a particular interaction, and that can be demonstrated through narratives, screengrabs, and screencasts.

That’s all there is to say; Matt’s longer post on his company blog encompasses everything I’ve not mentioned here (and a few things I have), and as such, should be viewed as a companion piece. It’ll be interesting to see what happens from here – how, as things like Action Streams take hold, patterns like Snap have a growing place in the web ecology. It’ll also be interesting to see what happens with, say, standards for these kinds of things – enclosures and the like – and how the tool manufacturers react. All in all, it was a fun project to work on, and I hope other people find the interaction as exciting as Matt and I do.

(Matt mentions that I nicknamed that application “Dentrassi”. I find it useful to have names for things; when I’m sitting at ~/Sites/ and am about to type rails foo to kick off a new project, it’s nice to have something – anything – to call the application. I thought about DEmonstrating RSSI, and the only word in my head that had things in the right order was DEntRaSSI. The Dentrassi, for reference, are an alien race from the Hitch-Hiker’s Guide to the Galaxy. I’m not a Douglas Adams nut, or anything – it was just the only word in my head at the time. So rails dentrassi it was, and off we went.)


18 November 2007

Time for my second post about the Epson R-D1, which I was lucky enough to play with when my colleague Lars bought one recently.

Along the top surface of the camera is what looks like a film-advance lever: the winder you crank to move to the next shot on a film camera. Obviously, there’s no film to advance on the digital camera. But the lever still serves its other traditional purpose: it re-cocks the shutter for another shot.

I’ve marked it in the photograph below.

Initially, I thought this was another of the R-D1’s ersatz “retro” features. After all: there’s no real need for such functionality. Even the Leica M8 abandons the film-advance lever. But once I used the camera, the lever made sense to me.

Firstly: it’s somewhere to rest your thumb. That may sound like a silly thing to say, but if you’ve ever used a rangefinder, or an old SLR with a slim body and no moulded grip, the lever becomes a useful way to counterbalance the body in your hand. It’s nice to have that familiar anchor-point to rest on.

But far more importantly than that: it makes the act of taking a photograph more considered. It brings to mind one of my favourite quotations about photography, from Ansel Adams:

“…the machine-gun approach to photography is flawed… a photograph is not an accident; it is a concept.”

I love that. Photographs are not something taken; they’re something made. An image is considered, composed, and then captured. And the life of that image ends there. To take another, you must re-cock the shutter, and start again.

And so the shutter-cocking lever makes the very act of making a photograph with the Epson more deliberate. That “ersatz” retro touch is actually fundamental to the way the camera demands to be used. As a result, you end up taking fewer photographs with the Epson – there’s none of the mad “double-tapping” that sometimes becomes habit with a DSLR. It feels more genteel, more refined – and I think the pictures you end up making with it are all the better for that.

Swings and readouts

07 November 2007

My colleague Lars has just bought an Epson R-D1. If you’re not aware of it, it’s a digital rangefinder (roughly modelled on a Voigtlander) that takes Leica M Bayonet lenses, is hard to find, and noticeably cheaper than a Leica M8.

Epson R-D1

It’s obviously a niche camera: M lenses are neither common nor cheap, the rangefinder is hardly a mass-market camera paradigm these days, and it’s largely manual – aperture priority, manual focus.

One thing that really caught my eye – and that I initially dismissed as ersatz Japanese retro-fetishery – was the readout on the top. Which looks like this:

To explain: the largest hand, pointing straight up, indicates how many exposures are left on the current memory card. As you can see, the scale is logarithmic – 500+ is the maximum, and as it counts down, the number of remaining exposures is measured more accurately.

The E-F gauge at the bottom measures not fuel, but battery power.

The left-hand gauge indicates white balance – either auto or one of several presets.

Finally, the right-hand dial represents the image quality: Raw, High, or Normal.

Once you know what it means, it’s a wonderfully clear interface: your eye can scan it very quickly. It’s also hypnotic watching it update. To alter the image quality, for instance, you hold the image quality lever with your right hand and move the selection knob (positioned where the film-rewind would be on a Leica) with your left. As the quality alters (and the rightmost needle flicks to the appropriate setting), the exposures-remaining needle swings around to reflect the new maximum number of pictures.

You can’t always see the benefits of analogue readouts in still photographs; this one is a case in point. Once the readout starts moving – and you have a reason to check it – its clarity becomes immediately obvious.

So whilst I may have thought this kitsch to start with… it turns out to be one of my favourite features on the camera.

(As for that manual “film advance” lever… I’ll write about that in another post. It’s something I found similarly kitschy to begin with, but understood in the end.)