So, here’s a thing I’m making.

My Nikon D90 can be triggered by the cheap ML-L3 IR remote. It costs about £15. You point it at the camera, push the button, and it takes a picture.

This remote works with everything from the D90 down (so, towards the D3100/D40 end of the line).

What these cameras don’t have built in, however, is an intervalometer: a timer that will make the camera take a picture every n seconds or minutes. (The D300 and up (and, I believe, the new D7000) have a built-in intervalometer.)

I thought it might be interesting to build one. The project had a few criteria:

  • It couldn’t be hard-wired to a computer; it had to be a stand-alone, battery-powered device.
  • It had to have a half-decent way of controlling it; ideally, not just stabbing at buttons.
  • I wanted it to have a 16×2 LCD screen, mainly because I wanted both to design for that constraint and to work out how to control said screen.
  • Ideally, it wouldn’t require taking apart an ML-L3 remote to build.

Here’s where we are:

End-to-end: it works. Note that I said “making” earlier, though: it’s still not finished, because it’s not packaged. And whilst packaging is difficult, I think that’s what’ll make it feel finished for me: a black box I can easily take into the field.

You turn it on, rotate an encoder to set a time, and click the encoder in to arm it. Hold the encoder to disarm. The time varies from 1 second to 15 minutes – after 90 seconds, it increases in minute chunks. (15 minutes is the maximum time the D90 will stay on before powering down).

Most people ask me why it says “SAFE” and “ARM”. Well, it sounds a bit threatening, but I genuinely believed that OFF and ON were inaccurate, in that the device is “on” if the screen’s on. So I was just referring to the state of the timer. And something that fitted into four characters would work well with the layout of the screen I’d chosen.

How it works

There’s very little componentry here, but each section of the Intervalometer was a neat little thing to learn on its own.

First, the IR trigger itself. Nikon’s IR remote is relatively simple: a button, some circuitry, and an IR LED. Pushing the button doesn’t just light up the IR LED; it fires a very short “coded” burst of light, so that the only thing it’ll trigger will be a Nikon camera.

Fortunately for our needs, there’s an Arduino library called NikonIRControl, which emulates that coded burst in software – so a single command will send the appropriate burst to a digital output pin. That’s our IR trigger sorted, and all we’ve had to buy is an IR LED. Which feels better – and cheaper – than just soldering two wires to a Nikon remote.
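
In sketch form, the whole trigger comes down to something like this – not the project’s actual source (that’s on GitHub), and nikonFire() is a stand-in for whatever NikonIRControl actually calls its function, since I haven’t reproduced the library’s API here:

```cpp
// Sketch only: nikonFire() stands in for the NikonIRControl call, which
// generates the short, coded IR burst on the given pin.
const int IR_PIN = 12;   // IR LED (plus resistor) on digital pin 12

void nikonFire(int pin) {
  // In the real project, the NikonIRControl library does this bit.
}

void setup() {
  pinMode(IR_PIN, OUTPUT);
}

void loop() {
  nikonFire(IR_PIN);   // take a picture
  delay(10000);        // crude placeholder: fire every ten seconds
}
```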

The screen is a 16×2 LCD, with a serial “backpack” pre-attached. That means I can just send serial data to it over a single wire, which again, keeps the number of wires from the Arduino down. I’m using the NewSoftSerial library to talk to it, which makes life easy.
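
Driving it looks roughly like this – a sketch that assumes a 9600-baud backpack hanging off pin 6; the baud rate, pins and any command bytes will depend on your particular backpack:

```cpp
#include <NewSoftSerial.h>

// NewSoftSerial wants an RX pin even though we never read from the LCD.
NewSoftSerial lcd(7, 6);   // RX on 7 (unused), TX to the backpack on 6

void setup() {
  lcd.begin(9600);         // assumed baud rate for the backpack
  delay(500);              // give the backpack a moment to wake up
  lcd.print("SAFE    00:05");
}

void loop() {
}
```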

The main controller is a quadrature rotary encoder with a push-switch in it. The switch is easily read, like any momentary push-switch on an Arduino. The encoder is a little trickier, because its two outputs have to be decoded to work out which way it’s turning. In the end, I read it off an interrupt, using code from this page – and then smoothed it out a bit by making it only read every other click.
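
The gist of the interrupt-driven reading is something like this – a minimal sketch along the lines of the code I borrowed, rather than a copy of it:

```cpp
const int PIN_A = 2;   // channel A on an external-interrupt pin
const int PIN_B = 4;   // channel B on any digital pin
volatile int encoderPos = 0;

void readEncoder() {
  // On each edge of A, the level of B tells us which way we're turning.
  if (digitalRead(PIN_A) == digitalRead(PIN_B)) {
    encoderPos++;
  } else {
    encoderPos--;
  }
}

void setup() {
  pinMode(PIN_A, INPUT);
  pinMode(PIN_B, INPUT);
  digitalWrite(PIN_A, HIGH);   // enable the internal pull-up resistors
  digitalWrite(PIN_B, HIGH);
  attachInterrupt(0, readEncoder, CHANGE);   // interrupt 0 lives on pin 2
}

void loop() {
  // Dividing encoderPos by two gives the "every other click" smoothing.
}
```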

Finally, there’s the matter of the timer itself. Timers are a bit more fiddly than I’d have liked. You can’t just use delay(), because that blocks everything else running on the chip. I tried various approaches, including counting milliseconds myself, but in the end relied on the TimedAction library, which works well enough, and does the same job without my broken code.
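
For what it’s worth, the non-blocking pattern that TimedAction wraps up is essentially “check millis() every time round the loop” – something like this sketch, with sendTrigger() standing in for the IR code above:

```cpp
unsigned long lastShot = 0;
unsigned long interval = 5000;   // milliseconds between shots
boolean armed = false;

void sendTrigger() {
  // Stand-in for the NikonIRControl call.
}

void setup() {
}

void loop() {
  // ...encoder and button handling carries on here, un-blocked...

  if (armed && (millis() - lastShot >= interval)) {
    lastShot = millis();
    sendTrigger();
  }
}
```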

Once each piece was in place and working, it was just a case of pulling it all together. The code – which is available on Github – is broken down into a series of files, pretty much one for each section of the project. I found this much easier to manage than the tyranny of One Big File.

For piecing the hardware together, I built a simple “shield” out of a piece of Veroboard. I got a lot of laughs when I said I was using Veroboard, but it worked very well for me. With some headers soldered in, it was quite easy to line it up with the Arduino – making it easy to modify, but also easy to swap out. The usual electronics-debugging issues aside, this went fairly smoothly, and it only took a battery pack to give me a portable – if fragile – working intervalometer.

What’s next? Obviously, packaging it up – something sturdy and black, with an obvious power switch and that big knob. I was considering moving it to an Arduino Mini, for size reasons, but am not sure I can face more electronics debugging. Similarly, I’m not sure I’ll build a dedicated PCB or anything like that, yet.

But: if I can get this lot into a box, that’ll be good. Also: I should take some timelapses with it.

So whilst it’s not what I would call “finished”, it is an end-to-end demo – and that feels like it’s good enough to share with the world.

(And, of course: if you’d like to use – or build on – my code, you’re more than welcome to.)

Something on my mind grapes

26 September 2010

Silly 30-minute weekend project: this led to this. It usually raises a smile, but every now and then (as above) it’s solid gold.

(Also, I found a neat pattern for decoupling all the OAuth keys so that it’s much easier to distribute/open-source the source code, which I’ll probably do at some point).

It’s been around the internet a bit already, but I can now show you what I’ve been working on for much of the time since I joined Schulze & Webb.

Enter Shownar:

What’s Shownar? Matt explained it over at Pulse Laser, the Schulze & Webb blog:

Shownar tracks millions of blogs and Twitter plus other microblogging services, and finds people talking about BBC telly and radio. Then it datamines to see where the conversations are and what shows are surprisingly popular.

And over at the BBC Internet Blog, Dan Taylor quotes the about page:

First, it will help you find shows that others have not only watched, but are talking about. Hopefully it’ll throw up a few hidden gems. People’s interest, attention and engagement with shows are more important to Shownar than viewing figures; the audience size of a documentary on BBC FOUR, for instance, will never approach that of EastEnders, but if that documentary sparks a lot of interest and comment – even discussion – we want to highlight it. And second, when you’ve found a show of interest, we want to assist your onward journey by generating links to related discussions elsewhere on the web. In the same way news stories are improved by linking out to the same story on other news sites, we believe shows are improved by connecting them to the wider discussion and their audience.

Of course, I didn’t work on this alone; as Matt points out, there was a good-sized team from both the BBC and Schulze & Webb, and it was great to work with so many talented and sharp people, all of whom have left their mark on the project.

People have been pretty enthusiastic so far, which is always nice to see. It’s also been great watching stories emerge – stories of what we found to watch or listen to in the office, ways our viewing and listening habits have changed – and there’s not much better praise than constantly wanting to use a thing you’ve made.

So there we go, Shownar. Another thing in the world.

CodeIgniter really is turning out to be The Little PHP Framework That Could. I’ve now dived pretty deep into it and still have few complaints; as I’ve said before, it makes all the boring stuff easy, has almost no “magic”, and stays out of the way.

As the application moved towards production, though, I began to miss a few things from Rails – notably, its ExceptionNotifier plugin. ExceptionNotifier will send you an email every time there’s an error on the site, which is really very useful with production applications.

So I investigated alternatives for CodeIgniter. I stumbled across this Stack Overflow post, which basically outlines exactly what I was looking for.

Except it doesn’t work.

Never mind! We can fix that, and the end result is MY_Exceptions.php:

(You might want to “view raw” on that – there’s some funky syntax-highlighting going on).

This really does work out-of-the-box with CodeIgniter 1.7.x. You just drop it into system/application/libraries, call it MY_Exceptions.php, and it’ll extend the existing Exceptions library. Obviously, you’re going to need to change a lot of the obvious details like the email addresses you want things sent to, and the name of the production domain you’ve configured in your app’s config.php. You also need to make sure the error log level is set to “1” or higher in that config.php file. But that’s about it; it really does work, and means that in production alone, you’ll get email from your app when a PHP error gets thrown, along with not only the line number and file the error was thrown in, but the URL the user was accessing when the problem occurred.
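
If you just want the gist of it, the core of the approach looks something like this – a sketch rather than the Gist itself, and it assumes CI 1.7’s CI_Exceptions exposes show_php_error() with this signature (check system/libraries/Exceptions.php in your install):

```php
<?php
// Sketch of the idea: intercept PHP errors, mail the details in production,
// then hand off to CodeIgniter's normal error handling.
class MY_Exceptions extends CI_Exceptions {

    function MY_Exceptions()
    {
        parent::CI_Exceptions();
    }

    function show_php_error($severity, $message, $filepath, $line)
    {
        // Only send mail on the production domain; change these to taste.
        if ($_SERVER['HTTP_HOST'] == 'www.example.com')
        {
            $body  = "Error: $message\n";
            $body .= "File: $filepath, line $line\n";
            $body .= "URL: " . $_SERVER['REQUEST_URI'] . "\n";
            mail('you@example.com', 'PHP error on example.com', $body);
        }

        return parent::show_php_error($severity, $message, $filepath, $line);
    }
}
```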

Not bad for an hour’s work. And, because it’s a Gist, you can either copy and paste, or just clone it straight into your application.

You can now find out what Schulze – or anyone else, for that matter – is listening to (as described in this post) on the web; just head on over to http://wotlisten.heroku.com.

The utility of the original command-line script is now diluted even further – mainly because you now have to go to the website to scrape the web – but that wasn’t really the point of putting wotlisten online; the point was to see just how easy deploying to Heroku really was.

The answer is: remarkably so. I wrapped the original script into a little Sinatra application, with two views, and a tiny bit of error handling for convenience. Sinatra’s something I’ve been playing with for a while now: it’s really excellent for wrapping small scripts into little webapps with the bare minimum of extra code, and when combined with lightweight tools like DataMapper, and sqlite, just powerful enough for the lightweight tinkering I seem to do so much of. If you’re a Ruby developer and you haven’t played around with Sinatra, you owe it to yourself to check it out – it’s a lovely library to have in the toolbox.
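
For a sense of scale, the whole shape of an app like this is a handful of lines – a toy sketch, not the real wotlisten code, with latest_track_for standing in for the scraper:

```ruby
require 'rubygems'
require 'sinatra'

# Stand-in for the original screen-scraping routine.
def latest_track_for(username)
  "something by somebody"
end

get '/' do
  "Give me a last.fm username: try /yourname"
end

get '/:username' do
  "#{params[:username]} is listening to #{latest_track_for(params[:username])}"
end
```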

With the webapp written, I installed the heroku gem, which helped me create a new git remote pointing at my Heroku account. Deployment is trivial – far simpler than using something like Capistrano; all that’s necessary is to push my master branch to the heroku remote, and upon a successful push, Heroku notices that I’ve pushed out a Rack application – and it directs requests to it automatically.
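
From memory, the whole deploy boiled down to something like this (the app name is whatever you fancy):

```
$ heroku create wotlisten    # sets up the app and adds a "heroku" git remote
$ git push heroku master     # the push is the deploy
```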

It took about ten minutes to write the Sinatra app, and another ten to get it up and running on Heroku; the single snag I ran into was the same as Tom did – the need to unpack haml into a vendor directory.

I’m very, very impressed. It’s all very well being able to build small, trivial toys like wotlisten, but it’s often a hassle to deploy or configure them. Heroku really takes most of that pain away, and makes setting a tiny Sinatra app live a trivial task. It’s definitely going into my toolbox – or, perhaps, that should be toybox – for the near future.

So we listen to music on speakers – not headphones – quite a bit in the studio. Or at least Jack does, because they’re in his batcave.

And sometimes, I’m not sure what’s playing from next door, but I know I like it – and it’d be good to know what it is. Fortunately, Jack mainly listens to last.fm radio (and even if it’s not radio, his iTunes would still be scrobbled).

So I wrote wotlisten.rb. You can see it (and get it) as a gist on github. It doesn’t use audio recognition, or the last.fm API, or RSS; it uses plain-old screen-scraping.

(Somewhere near the top of my list of coding tools is Hpricot, because it’s a lovely HTML parser that you can scrape with as fast as you can think. Or, at least, as fast as you can write selectors. That was the case here.)
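
To give the flavour of it – this is a toy sketch rather than wotlisten.rb itself, and the CSS selector is invented for illustration (the real one is in the Gist):

```ruby
require 'rubygems'
require 'hpricot'
require 'open-uri'

username = ARGV[0] || "someuser"
doc = Hpricot(open("http://www.last.fm/user/#{username}"))

# The selector here is a made-up example; the real page's markup differs.
track = doc.at(".recent-track")
puts track ? track.inner_text.strip : "Couldn't find a recent track."
```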

So: you throw in a username, and wotlisten.rb tells you what they’re listening to. Or what they were last listening to. It doesn’t distinguish between the two – and why should it? This is Situated Software at its most useful: I assume you can hear the music that’s playing, and that you know the last.fm tag for the user playing it (and: until very recently, I assumed that person was Jack Schulze; this updated 2.0 release lets you pass in any username).

It’s unremarkable code in the extreme, but notable for the fact it took ten minutes to bang it out; it came out as fast as I could think it. I’m getting to the point where, especially with Hpricot and similar, this kind of tool is second nature to write. It’s taken a long while to get there, though.

The script proved useful upon several occasions that day. More to the point, it paid for itself handsomely a few hours later, when we discovered that Schulze was playing Bonnie Tyler’s Holding Out For A Hero.

Charles Arthur recently wrote that if [he] had one piece of advice to a journalist starting out now, it would be: learn to code.

I understand the point he’s making, but I think there’s a further degree of subtlety to the argument. After all, learning to code is hard. Learning to glue together bits of scripts, and later to bash your way into scripting languages, really is useful, but even that isn’t easy. It requires you to learn to translate intent into code, to know what’s possible, to know what’s easy and what’s hard, and to know what to do when the third-party things you’re glueing together don’t work.

In short: it’s really easy to make a mess – and a mess that was difficult and stressful to make, at that.

So my advice would be somewhat different, and apply to both those journalists who find code easy, and those who find it impossible:

Learn to think like a programmer.

What’s really important is not understanding how to do magical things with code yourself, but learning what magical things are possible, what the necessary inputs for that magic are, and who to ask to do it.

Identify the repetitive tasks that computers are good at. Yes, they’re good at find-and-replace, but tools like regular expressions are even handier, and I’m amazed how few people understand that find-and-replace is the beginning, not the end, of text processing. (And yes, I’m aware that regex are a quick way to give yourself two problems.)
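
For instance – a toy example, nothing to do with any particular story – pulling every phone-number-shaped string out of a pile of notes is one line with a regular expression, and effectively impossible with find-and-replace:

```ruby
text = "Call 020 7946 0123 or 0161-496-0000 before Friday."
puts text.scan(/\b0\d{2,4}[ -]?\d{3,4}[ -]?\d{3,4}\b/)
```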

Computers are really good at processing regular data, and they are really, really good at repetitive tasks. Every time I watched someone in an office doing a repetitive, regular task I despaired, because that’s exactly the kind of thing we have computers for.

You shouldn’t try to build the program that magically automates everything. But you should learn to smell the tasks where computers could help; learn to sniff out the angles on a story that a computer would be a useful tool for.

So that means when you find a table, or a regular data source, you don’t just take a print-out: ask for an Excel file you can convert to CSV, or maybe even a database dump. Even if you can’t do something with it, somebody else can. So the important thing to remember is what a programmer might want to receive.

When you’re gathering data, regularity is important. If you’re using Excel, keep it really simple, and one-column-per-thing, so that later a programmer can do something with the CSV. If you’re gathering textual information, put it in a plain text file, rather than Word; it’ll save you time in the long run.

Also: there are lots of useful tools that are halfway between being a programmer and not, and these are the most interesting spaces for the journalist right now. Simon linked to a bunch of these at the Guardian Hack Day, and it amazed me how many great tools there are for the non-programmer to do programmer-like tasks.

Excel, for starters, is a great environment (if a little limited and esoteric) for starting to explore datasets in a relatively visual way – structured data formats aren’t as immediate to more visual thinkers. Obvious examples include the frankly remarkable DabbleDB and, even though it’s never as useful as I hope it might be, Yahoo Pipes.

These let you exercise programmer-like thinking without needing to be a programmer. And then, when you’ve discovered what it is you want to do, even with the vaguest of prototypes, handing all your information and ideas over to a coder is much easier.

Why? Because you’ve already been thinking like a programmer. You’re handing them thoughts and data in the format they like.

So how do you learn this?

Partly, you have to try a bit of code yourself, but I’d make sure you’re always on the right side of the “understanding what I’m doing” vs “doing neat stuff” seesaw; understanding should be your goal.

Partly, it’s getting handy with a shell. One of the best places to explore what you can do with data is the command line; as well as the true scripting languages, there are tools like grep, sed and awk which can be remarkably powerful. Not entirely user-friendly, I’ll give you, but easier than breaking out a full program.

And partly, it’s relaxing a little and stepping away from the Office suite. Putting your data in formats like CSV, XML, JSON, and plain text doesn’t just make the data more useful to coders; it’ll be more useful for you, when you want to move it around.

I remain convinced there’s an interesting book on “doing smart stuff with computers that isn’t quite programming but isn’t far off”, because let’s face it, most people deal with data all the time now, and have the ideal tool for working with it on their desks. Now they just need to work with it a little.

So whilst this isn’t quite the “learning to code” that Charles speaks of, it’s not far off. And indeed, I think he hits the nail on the head much better in his conclusion:

…nowadays, computers are a sort of primary source too. You’ve got to learn to interrogate them effectively – and quote them meaningfully – too.

That feels about right. You don’t need to be a coder, but you need to be able to interrogate computers meaningfully. Do that how you will.

(As for me? Well, I wanted to be a journalist, but fate didn’t turn that way (although I’ve worked in the media and had a small amount of writing published). I did, however, seem to take to the coding malarkey a little better. I still maintain I’m not really a programmer, and certainly not in the sense that my real-programmer friends are, but evidence sometimes disproves that).

As a little post-Christmas present, I thought I’d share a little code toy I’ve been working on recently.

You might know that I’m a fan of Twitter as a messaging bus, and I’ve already built some entertainingly daft bots in my time with it. Recently, I decided to flex my programming skills a bit and build not one but four bots. And, more specifically: four bots that talk to each other.

Enter @louis_l4d, @zoey_l4d, @bill_l4d and @francis_l4d. You might notice that they’re named for the characters in Left 4 Dead.

This is not a coincidence.

One of the most wonderful things in that game (which I’ve already commented on the brilliance of) is the banter between the four player characters. There’s so much dense, specific scripting, and enough dialogue so that it rarely repeats. I thought it would be interesting to see if you could simulate the four players’ dialogue over Twitter, sharing some state between the bots, but also finding a way to make them communicate a little with each other.

Well, a bit later, I worked out how, and this is the result:

[Screenshot: twit4dead.jpg – the four bots’ conversation unfolding on Twitter]

You get the picture. They run a scenario, they bump into boss zombies, they find stuff, they get hurt (and help each other), they get scared (and reassure each other). At the moment, there are some dialogue overlaps; my main work at the moment is adding more unique dialogue for each bot. Bill is sounding pretty good, but the rest of them need work. It takes about 2-3 hours for them to run a scenario, and it’s usually fun to watch. (And, as you can see, it makes sense to follow all four of them).

So how does it work?

It turned out that rather than trying to build any real AI, it was much more fun just to simulate intelligence. The bots are state machines: they have a variety of states, which they transition in and out of depending on various factors, and suitable dialogue for each state.

I wrote the bots in Ruby. There are two main components to the twit4dead code: the Actor class and the Stage class. The Stage is a singleton; it’s where the state of the world is determined, global variables are tracked, and all the probabilities are run from. Each bot is an instance of the Actor class, which is built on the Alter Ego state-machine library for Ruby. An actor has a lot of states, rules for transitioning between them, a selection of methods for handling being helped or talked to by friends, and a method for choosing a random piece of dialogue appropriate to the current state.
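
To give a flavour of that structure – this is a hand-rolled toy for illustration, not the Alter Ego-based code, and the states and lines are made up:

```ruby
# Toy illustration only: an Actor has a name, a current state, and a pool
# of dialogue per state. The real code builds its states with Alter Ego.
class Actor
  STATES = [:idle, :fighting, :incapacitated]

  attr_reader :name, :state

  def initialize(name, dialogue)
    @name, @dialogue = name, dialogue   # dialogue: { state => [lines] }
    @state = :idle
  end

  def transition_to(new_state)
    @state = new_state if STATES.include?(new_state)
  end

  def speak
    lines = @dialogue[@state] || []
    "#{@name}: #{lines.sample}" unless lines.empty?
  end

  def help(friend)
    friend.transition_to(:idle)
    "#{@name}: Hang on, #{friend.name}, I've got you!"
  end
end

zoey = Actor.new("zoey_l4d", :incapacitated => ["Guys, I'm down!"])
bill = Actor.new("bill_l4d", :fighting      => ["They're coming!"])
zoey.transition_to(:incapacitated)
puts zoey.speak        # => "zoey_l4d: Guys, I'm down!"
puts bill.help(zoey)   # => "bill_l4d: Hang on, zoey_l4d, I've got you!"
```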

All the bots are instantiated from a YAML file. For each bot, I store its Twitter username and password, and a nested tree of dialogue for each state. This means it’s really easy to add new states and maintain the dialogue for each bot. It’s also easy to add new bots – you just create a top-level entry in the YAML file.
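
So the file looks something along these lines – a guess at the shape based on the description above, not the real file:

```yaml
bill_l4d:
  password: not-really-my-password
  dialogue:
    idle:
      - "Quiet. Too quiet."
    fighting:
      - "They're coming - light 'em up!"
zoey_l4d:
  password: not-really-my-password
  dialogue:
    incapacitated:
      - "Guys, I'm down!"
```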

Originally, I thought about the bots broadcasting and listening over Twitter, but the API calls were just going to get out of hand, and it turned out that Twitter didn’t like being bombarded with messages and would drop a few over time. So I separated out writing the script and broadcasting it. A small utility generates a script file; each line of this file consists of three delimited fields for username, password, and message to send to Twitter. Then, another program – which I currently run on a screened shell – reads that file and broadcasts one line of it every minute until it’s done.
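
The broadcasting end is about as dumb as it sounds – something like this sketch, which assumes the script file is pipe-delimited and uses post_to_twitter as a stand-in for what was, back then, a simple authenticated HTTP POST to Twitter:

```ruby
# Stand-in for the actual Twitter call.
def post_to_twitter(username, password, message)
  puts "[#{username}] #{message}"
end

File.foreach("script.txt") do |line|
  username, password, message = line.chomp.split("|", 3)
  next if message.nil? || message.empty?
  post_to_twitter(username, password, message)
  sleep 60   # one line a minute, as described above
end
```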

And that’s it. I have to run it by hand for now, which is fine – it’s more something I fire up every now and then than something I want running permanently. I originally planned to keep track of loads of statistics – health, zombies in play, etc – but found a cruder set of rules worked much better. Every time they’re in combat, there’s a slim chance somebody gets hurt; every time somebody’s hurt, a friend will rush to save them. That sort of thing. Simulated Intelligence, then, rather than Artificial Intelligence.

Alter Ego turned out to be a lovely library; dead simple to use, and as a result the bots are really nicely modelled (or, at least, I’m very happy with how they’re modelled). The notion of a Stage with Actors, rather than a Game with Players, feels about right, and the modularity of it all is pretty nifty. It still requires a little refactoring, but the architecture is solid, and I’m proud of that.

I think my favourite aspect of it, though, is that at times, watching the bots play together is a little like magic. The first time I saw them talk to each other, cover each other whilst reloading, help each other up after a Boomer attacked, I felt a little (only a little, mind) like a proud father. They’re dumb as a sack of hammers, but they look convincing, and that was the real goal. It’s fun to watch them fight the horde amidst all my other friends on Twitter.

Nonsense, then, but a fun learning exercise about state machines, object orientation, and simulating conversations. State machines are a ton of fun and if you’ve not seriously played with them, I thoroughly recommend it.

Do follow the four of them, if you fancy; I’ll make sure I run them with reasonable regularity, and I’ll be fixing the dialogue over time. After all, I want to keep Francis happy.

Plans are afoot…

13 December 2008

The latest in a series of dumb experiments with Twitter is nearly complete. The above should give you a bit of a clue; all you need to know is that somewhere in the Twitterverse, it’s the zombie apocalypse, and @bill_l4d, @francis_l4d, @louis_l4d and @zoey_l4d are doing their best to survive. If you think that sounds somewhat similar to Valve’s popular zombie-slaying co-operative FPS, Left 4 Dead… you might be on to something.

You might want to friend them all. More to come, very soon.

My recent talk about what might happen if gamers ran the world made Digg yesterday, and went a bit big. Big to the point that I got a nice email from my host pointing out that my PHP processes were killing the entire shared host that I’m hosted on, and that I needed to rectify this immediately.

The fires were mainly calmed by installing WP-Super-Cache, which did pretty much what it says on the tin. That said, I did learn a few things from the incident. In no particular order:

  • WordPress’ PHP processes for rendering a page are really, really intensive. Most of the time, that’s not a problem, but when you’re being bombarded with hits, it’ll take its toll. Flat HTML might be the way forward.
  • Super-Cache isn’t exactly difficult to install, but it requires permissions in lots of places. The best advice I can give is to walk through the installation instructions carefully, and when it doesn’t work, go over the troubleshooting guide in readme.txt one step at a time. The few issues I had were resolved by walking through the troubleshooting process.
  • Most importantly, a combination of the two parts above: you should assume that at some point, you’re going to need this kind of caching, and you’re going to need it fast. Installing and configuring WordPress plugins on a server being bombarded with hits really isn’t much fun. Instead, install the caching plugin of your choice when you set your server up, and make sure it’s working at that time. Then, when the horde descends upon your lowly shared host, you can head straight there and click “enable caching”, rather than having to fight fires for an hour when you really should be working, or in the pub. This also means you can configure the thing to not cache your feed, which is a useful thing to be able to do; I’m about to head off and do that now.

Everything appears to have cooled off now, and I’m not getting any more emails from Joyent about my usage. To Joyent’s credit, they were helpful at explaining the problem and tolerant of the time it took for me to fix things, which was appreciated. And next time I get an absurd amount of traffic, with any luck, I’ll be ready for it.