Fun With Software

17 January 2020

The impetus for personal projects is, for me, often like coming up with a joke. You want to tell it as soon as possible, but you also want to tell it as well as possible. And you don’t want anybody else to do it before you, because nobody cares about you coming up with the same joke a bit later.

That means my thought process for making things is often a bit like this:

  • Somebody should do X.
  • That sounds quite easy.
  • If I don’t do it, somebody else will.
  • If it’s easy, that will be quite soon.
  • I should do X as soon as possible.

As time has gone on, my difficulty threshold has gone down: if it’s going to take more than an evening’s work, I’m not really sure I can be bothered. Especially if it requires hosting and maintenance. But sometimes, the perfect set of conditions arrive, and I need to make some nonsense.

This is how I ended up writing a somewhat silly SparkAR filter in an afternoon.


SparkAR is Facebook’s platform for making “augmented reality” effects for the Instagram and Facebook cameras. That translates into “realtime 2D/3D image manipulation”, rather than anything remarkable involving magic glasses. For the time being, anyhow.

Effects might work in 2D, through pixel processing or compositing, or by using 3D assets and technologies like head- and gesture-tracking to map that 3D into the scene. Some “effects” are like filters. They stylize and alter an image, much like the photo filters we know and love, and you might want to use them again and again.

Others are like jokes. They land strongly once, and with diminishing returns from then on. But that first landing is delightful.

[Image: the Metal Gear Solid exclamation-mark alert]

In the popular 1998 PlayStation game Metal Gear Solid, the hero, Solid Snake, sneaks around an Alaskan missile base, outgunned and outnumbered. In general, he does best by evading guards.

When a guard spots Snake, the guard’s attention is denoted by an instantly recognizable “alert” sound, and an exclamation mark hovering over his head. At this point, Snake must run away, or be hunted down.

The exclamation mark – and sound – have become a feature throughout the entire franchise, and gone on to be a bit of a meme.

That is all you need to know about Metal Gear Solid.

I wanted to make a filter that recreated the effect of the alert, placing the exclamation mark above a user’s head, tracking the position of their head accurately, and – most importantly – playing that stupid sound.


In Spark this is, by most programming standards, trivial. Facebook supply many, many tutorials with the product (and, compared to the dark days of their API documentation nine or ten years ago, their new documentation standards are excellent).

I started with a template for making 3D hats, which matches a 3D object to a user’s head, and occludes any part of the object falling ‘behind’ their head. Then I just had to lightly adjust it, replacing the hat with a flat plane that displayed a texture of the exclamation mark I’d done my best to recreate. That was half the work: putting the image in the right place.

What was more interesting was determining how the mark should pop in, playing the iconic sound effect at the same time. The face-tracker offers lots of ways to extract the positions of facial geometry, and I spent a while tracking how far a mouth was open in order to trigger the effect. Eventually, though, I settled on the “raised eyebrows” outlet as being a much better vector for communicating surprise.

[Image: a screengrab of the Spark patch editor]

Some brief faffing in Spark’s node-based programming environment later, and now raising eyebrows triggered the brief animation of the exclamation mark popping in, and the corresponding sound effect.

Aside: I am not the biggest fan of graph/node-based coding, partly because I’m not a very spatial thinker, and partly because I’m already comfortable with the abstractions of text-based code. But this model really does make sense for code that functions more like a long-running pipeline than a single imperative script. You find this idiom in game-engine scripting (notably in Unreal 4), visual effects tools, realtime graphics tools, and the like, because it is a good fit for that kind of work. Also: the kind of people coming to work in SparkAR will likely have experience of 3D or video tools. As I pointed out to a friend, increasingly “everything is VFX”, and so graph-based coding is turning up in more places than it used to.

I also made sure I supported tapping the screen to toggle the effect’s visibility. After all, not everyone has reliably detectable eyebrows, and sometimes, you’d like to use this imagery without having to look surprised all the time.

Finally, I added some muddy scanlines to capture the feel of 90s-era CRT rendering: a slight tint for that authentic Shadow Moses feel, coupled with rolling animation of the texture.

With this all working, I promptly submitted it for review, so I could share it with friends.

The output looks like this:

[Image: a still from the finished effect]

As with every single workflow where there is a definite, final submission – be it compilation, code review, or manufacture – this is exactly the point in time when I realise I wanted to make a bunch of changes.

Firstly, I felt the effect should include some textual instructions; if you haven’t been shown it by me, you might not know how it works. So I set about adding them.

Secondly, I felt the scanlines should be optional. Whilst they’re fun, some visual jokes might be better off without them. So I wrote some Javascript to make a native option picker at the bottom of the effect, with the default being “scanlines on”.

Once version one had been approved, the point release was quickly accepted. And then I shared the filter in my story, and wondered if friends would borrow it.

(“Borrow”: the only way to share filters/effects is to use them yourself. Once they’re published, you have to use them in a story post of your own. Then, if a friend wants to “borrow” it, they can grab the filter/effect from your story featuring that effect. In short: the only way to distribute them is through enforced virality. I can even share them outside my protected account. I am fine with this; I think it’s rather smart.)


When I first shared it with friends, one wrote back: “yay for fun with software“.

It’s a while since I’ve worked on a platform that’s wanted to be fun. I’ve made my own fun with software, making tools to make art or sounds, for instance. But in 2020, so much of the software I use wants me to not have fun.

I made my first bot on Twitter a long time ago, and from there, Twitter became my platform of choice for making software-based jokes.

Twitter is now very hard to make jokes on. The word ‘bot’ has come to stand not for ‘fun software toy’ but for ‘bad actor from a foreign state’. The API is increasingly restricted as a result. I’m required to regularly log in to prove an account is real. My accounts aren’t real: they’re toys, automatons, playing on the internet.

I get why these restrictions are in place. I don’t like bad actors spreading misinformation, lies, and propaganda. But I’m still allowed to be sad that the cost of that falls on making toys and art on the platform. (An art project I built for the artist Erica Scourti was finally turned off once API restrictions made it unviable.)

Most software-managed platforms are not places you can play any more. I understand why, but I still wish that wasn’t the case.

Yes, I know Facebook, who are hardly a great example of a good actor, are getting me to popularise their platform and work on it for free. It’s a highly closed platform, and I’m sure they’ll monetise it when they work out how. I’m giving them free advertising just by writing this.

But. Largely, what I made is a stupid visual effect that neither they nor I can effectively monetise, and a joke is better told than not told. In that case, let’s tell the joke.


I showed it to my friend Eliot:

Every single time I hear that sound and see it working, I laugh. Actually, sometimes I don’t: I’m busy holding a straight face with my eyebrows up, and somebody else is laughing. Either way, somebody laughs. And that’s good enough for me.

I know, intuitively, this is a not unproblematic development platform. I know it’s not really ‘free’ in the slightest. But I write this because, right now, it was a quiet delight to be allowed to make toys and play on somebody’s platform, and one of the more pleasant platforms they run (if you keep it small, private, and friendly). I’m sure they’ll mall-ify this like everything else, but for now, I’m going to enjoy the play it enables.

It’s a while since I’ve made a toy that was so immediate to build, so immediately fun for others.

Yay for fun with software.

(If you want to try the effect yourself, open this link on your phone. It’ll open Instagram and let you try the effect.)

An e-ink screen for a room

31 January 2019

[Photo: the e-ink display in the living room]

I made a small display for my living room.

This project began when we installed a Hive thermostat at home. For various reasons, the thermostat ended up on the upstairs landing, and I thought it’d be nice to have a second display for it downstairs. I know I can look at the app on my phone, but I thought something more ambient might be nice. And, whilst I was at it, I could add one feature I’d always wanted – the next trains at my nearest stations – and perhaps the outdoor temperature and weather as well.

This all coincided with Bryan Boyer writing up his Very Slow Movie Player project, which is a delight. (It shows a movie at a frame a minute, on a large e-ink screen).

I’ve been fascinated with e-ink for a long while. I’m well into the lifespan of my second Kindle, and it’s such a wonderfully un-technological piece of technology. It’s still, to my mind, the single best piece of hardware Amazon have made by a long, long way.

I wrote about the joy of e-ink when I was at Berg, in Asleep and Awake. As Bryan proved, it’s now fairly easy both to source e-ink screens and to interface with them. The 2.7″ screen I used came ready-attached to a Raspberry Pi HAT, with libraries all written for it.

(As an aside: I’ve been tinkering with Raspberry Pis for a while, but this was the first time I’ve used the Zero form factor; the £10 Zero W, with built-in wireless and bluetooth, is such a lovely fit for this project. It does make the thought of doing lower-level embedded work seem a little foolish for simple IoT prototypes – lots of power and connectivity, and the ability to write high-level code, is a delight.)

So: I had a screen, and a Pi. I started with the output: getting a PNG of mocked-up UI to display on the screen. This didn’t take too long, although I’ve had no joy getting partial updates to work – I’m using a fairly heavyweight full screen refresh each time the screen updates. Still, that’s an improvement I can come to later.

With the sample PNG on the screen, there were two remaining strands of work: dynamically generating images, and gathering data to feed them. Again, I worked on the former first, using Pillow. I’m not a great Python developer by any stretch, but a recent work project featuring a lot of it made me a lot more comfortable with hacking on the language, and Pillow’s a lovely tool for simple image compositing. Google’s Roboto font and Erik Flowers’ WeatherIcons do most of the legwork; the rest is simple compositing, step by step.
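
To give a flavour of that compositing, here’s roughly what the Pillow side looks like – a sketch, with the canvas size, font paths and icon codepoint all stand-ins rather than the real thing:

from PIL import Image, ImageDraw, ImageFont

WIDTH, HEIGHT = 264, 176  # a common 2.7" panel resolution; adjust for your screen

roboto = ImageFont.truetype("fonts/Roboto-Regular.ttf", 36)
icons = ImageFont.truetype("fonts/weathericons.ttf", 24)

canvas = Image.new("1", (WIDTH, HEIGHT), 1)  # 1-bit canvas, white background
draw = ImageDraw.Draw(canvas)

draw.text((10, 10), "19.5°", font=roboto, fill=0)     # current temperature
draw.text((10, 60), "Set: 21°", font=roboto, fill=0)  # target temperature
draw.text((200, 10), "\uf00d", font=icons, fill=0)    # a WeatherIcons glyph; this codepoint is illustrative

canvas.save("display.png")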

Once the Python image processing was written, I moved onto data-scraping. I’m happiest in Ruby, but chose Javascript for this work. Why? Partly for how appropriate it was for dealing with lots of JSON, and partly to get more familiar with ES6. I ended up with three scripts: one to get the latest set and actual temperatures from the Hive API; one to get the next trains at some stations; and one to query the Dark Sky API for local weather. These would write out to JSON via lowdb. Then, the Python script could, separately, read that JSON directly, render a PNG, and trigger a refresh of the screen.
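
The handoff between the two halves is nothing cleverer than files on disk. The Python side of it looks something like this – the filenames and keys are illustrative rather than the real ones:

import json
from pathlib import Path

def load(name, fallback):
    # Each Node script writes one small JSON file; if it's missing or
    # unreadable, fall back to placeholder data rather than crash.
    try:
        return json.loads((Path("data") / name).read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        return fallback

hive = load("hive.json", {"current": "--", "target": "--"})
trains = load("trains.json", {"departures": []})
weather = load("weather.json", {"temperature": "--", "summary": ""})

# ...then hand these to the compositing code and trigger a screen refresh.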

Having broken the task down, everything went almost entirely as planned. One by one, I replaced each section of the screen with live content. The only hiccups were the usual wrestling with cron jobs (when do these ever go smoothly?) and dealing with simultaneous writes to the JSON store. (It turned out lowdb wasn’t great for distributing across multiple files, and rather than rewrite everything to use SQLite, I just went with three separate JSON files. Easy fix). Nice to spend a few hours at the weekend motoring on some programming I’m entirely comfortable with: JSON, markup-scraping, server-side fettling and graphics processing.
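
For what it’s worth, the crontab for this kind of setup ends up looking roughly like this – the paths and schedules here are made up, not the real ones:

*/10 * * * * /usr/bin/node /home/pi/screen/hive.js
*/10 * * * * /usr/bin/node /home/pi/screen/trains.js
*/10 * * * * /usr/bin/node /home/pi/screen/weather.js
*/5 * * * *  /usr/bin/python3 /home/pi/screen/render.py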

I’m happy with the results. There’s a bit of a distracting blink when the display re-renders, but the lack of a glow makes it feel very different to a more obviously electronic device. It just sits, comfortable with itself, giving me a little bit of information. I was surprised how many people enjoyed it when I shared it on Instagram, so thought I’d write it up.

What’s next? I might add something else to that lower-right display; not sure what, yet. And, most importantly, I’m going to give it some kind of case – it’s a bit too ‘gadgety’ or ‘maker-y’ as it stands; it needs to be made more homely. Perhaps some wood. In the meantime, though, I’ve been living with it, and had no desire to repurpose it, or switch it off, which is usually the best sign.

Most notably: it continues to affirm my belief that e-ink is a most gentle and domestic technology. I can confirm that it’s now very easy to play with, if you’re so inclined; a Python library and bolting a £30 HAT onto a Pi was all this took to get live. I might do some more tinkering with this technology in due course.

I upgraded my Mac recently, and moved from a laptop with two internal drives – an SSD and an HDD – to a single, large SSD. I wanted to note down the aspects of this transition I’d like to have known beforehand, and will want to know for next time.

My old computer placed various core user folders – Documents, Music, etc – onto the HDD, whilst storing the System and Applications on the SSD. I symlinked the User folders into my home directory, and OSX was none the wiser.
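
Each of those symlinks was nothing more exotic than a one-liner like this (the volume name being whatever the HDD was called):

ln -s /Volumes/HDD/Documents ~/Documents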

As far as backing up goes, I used to use the excellent SuperDuper. When I moved to my symlinked setup, I went to Time Machine, which can back up multiple drives to a single Time Machine volume. I told it to back up both the SSD and HDD; it backed up the symlinks themselves from the SSD, and the directories they referred to from the HDD.

When I transfer data to a new Mac with Migration Assistant, I do so from a backup or bootable clone of the original computer – never from the computer itself. I chose to restore from Time Machine. This will only restore from a bootable drive – so I was only given the option to restore from my SSD, which contained Applications and my Desktop, but not my Documents. This unnerved me a bit at the time.

However, this is because Migration Assistant only offers to restore from bootable (system) drives. Once I’d restored, I booted up the new system, mounted my Time Machine drive, and dragged everything over from the “Latest” directory symlink inside the HDD’s backup folder; I soon had my documents and other files back where they belonged.

So that’s the main note I wanted to make: restoring from that kind of setup works entirely fine, but you mustn’t panic when your HDD isn’t offered as a volume to restore from.

I’m always impressed with Migration Assistant – it holds onto my system preferences, my eccentric Unix setup, my MySQL and Postgres databases, everything like that. What’s worth remembering is the stuff it doesn’t:

  • I use Caps Lock as my Control key, and vice versa. (Blame Vim.) I had to re-specify this preference, and frequently it would get overwritten on wake-from-sleep. A reboot (which presumably repaired permissions) fixed this.
  • After a while, I noticed git wasn’t working. This was because it was not installed through homebrew, but directly into /usr/local/bin – and that was no longer in my PATH. Adding it back to my PATH re-enabled it. (All my homebrew binaries were working fine.)
  • My /etc/hosts had been overwritten back to default; I needed to copy that across manually. As had all my Apache config (/etc/apache2/httpd.conf), my vhosts config file (/etc/apache2/extra/httpd-vhost.conf), and /etc/php.ini. Nothing we can’t recover quickly, but it did mean the usual dance of setting up vhosts for sites that already existed, and pointing php.ini at the correct MySQL socket file (/tmp/mysql.sock on OSX). There’s a sketch of that dance below.
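
For illustration, that dance amounts to a vhost block per site, and a socket line or two in php.ini – the hostnames and paths here are made up:

# in the Apache vhosts file
<VirtualHost *:80>
    ServerName mysite.test
    DocumentRoot "/Users/me/Sites/mysite"
</VirtualHost>

; in /etc/php.ini
mysqli.default_socket = /tmp/mysql.sock
pdo_mysql.default_socket = /tmp/mysql.sock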

I think that’s it – everything else transferred entirely seamlessly, and there was nothing to fear about my unusual setup – you just have to make sure you’ve been backing up correctly, and everything will work.

I’m working on a hardware project at the moment that’s more complex than a basic microcontroller-build that I could have implemented with an Arduino. So I’ve been using a Raspberry Pi as the centre of the project, and writing my code in the high-level languages I’m most proficient in. (In this case, Ruby).

As the project nears completion, it’s really important that the device doesn’t manifest as a computer in any way: it’s just a physical object. To that end, I’d like all the software inside the miniature computer to run at startup without any manual intervention.

My Pi is running Raspbian – the Pi-focused version of Debian – so, fortunately, there’s a tool easily available to do that for us: upstart.

We’ll write an upstart configuration that turns our script into an upstart service, which we can then have start automatically at boot.

First, let’s install upstart:

sudo apt-get install upstart

This will issue some warnings, because – on my install – it was replacing the traditional init.d setup. Don’t worry; everything will continue to work.

Once you’ve installed upstart, reboot your Pi, either from the command line or with a power-cycle.

Let’s now make an upstart config file. Here’s a very basic one:
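
Something along these lines will do – the path to the script is illustrative:

description "my script"

start on runlevel [2345]
stop on runlevel [016]
respawn

exec /home/pi/myscript.sh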

Put this code into /etc/init/myscript.conf. You should then be able to run the script by typing sudo start myscript, and kill it with sudo stop myscript. And, of course, you’ll discover it’ll start automatically on startup.

That’s a very simple example, with no dependencies. But that won’t work for the script I’d like to use. In this example, I’m running a Ruby script (using the Pi Piper library) that blinks an LED attached to GPIO Pin 17. That script needs to be run as root to get access to the GPIO pins, and it needs to reference the directory’s Gemfile. Upstart scripts are run as root, so that’s not an issue – but we need to set up the environment correctly. That’s not so hard:
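
Roughly, the config for that becomes something like this – again, the paths are illustrative:

description "blink an LED on GPIO pin 17"

start on runlevel [2345]
stop on runlevel [016]
respawn

script
    export BUNDLE_GEMFILE=/home/pi/blink/Gemfile
    exec bundle exec ruby /home/pi/blink/blink.rb
end script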

As you can see, we just have to export the BUNDLE_GEMFILE variable so that Bundler will know where the Gemfile is located.

Also, you’re going to have to make sure that all references to files in your code are done with absolute paths. That wasn’t a problem with the simple shell script example, but becomes an issue with more real-world type code – especially the program I’m ultimately running, which has various includes, dependencies, and data files to load. An obvious place you’ll run into this with Ruby is when setting up the $LOAD_PATH.

Rather than starting my Ruby script with:

$LOAD_PATH << 'lib'

I had to do this:

$LOAD_PATH << File.expand_path(File.dirname(__FILE__)) + '/lib'

And similarly, any other references to file loading will need to be absolute – and, ideally, derived using tricks like File.dirname(__FILE__) rather than through hardcoded paths.

Anyhow: it took me a while to piece together, but now that I have, it felt worth writing down – because I’ve now got reasonably complex computation-backed hardware working in an entirely headless enclosure – and one that’s resilient to power-cycling.

Ars Technica has a short article marking HyperCard’s 25th birthday.

I’m not sure I quite buy the notion of HyperCard as proto-web-browser. But I totally buy Atkinson’s original goals with it:

“Simply put, HyperCard is a software erector set that lets non-programmers put together interactive information”

It was not the first thing I wrote software in – that honour goes to GW-BASIC, I think – but it was the first tool I made something useful and unprovoked in. I was eight or nine when I discovered it at school. It made it possible to realise what was in my head, not what was in a book.

And it was the first thing that made designing the visual interactions of software easy for me. Software isn’t just arithmetic and lines of code – it’s something people use. HyperCard made sure that the visual end of software was usually the first part of a stack you made, not the last. (I was always disappointed that Visual Basic looked like it did this, but it didn’t quite live up to expectations).

Look at Xcode now, with its integrated Interface Builder; that’s one of the many legacies of HyperCard. It showed the average computer user (not the average programmer) that interaction and interface were important to great computing experiences, and gave them the tools to poke around.

It is a tiny percentage of the reason I do what I do now, but a memorable one.

A new year, and a new toy to begin it.

This all began when Tom started tweeting the prose from the back of a chocolate box.

[Image: Tom’s tweets of the chocolate-box prose]

One look at that and, having gagged a little on the truly purple prose, there was only one obvious continuation: a machine to churn out chocolate descriptions infinitely.

Which was as good a time as any to play with Markov chains. Wikipedia will explain in more detail, but if you’ve never encountered them, a very rough explanation is: Markov chains are systems that model what the next item in a list will be, based on the previous ones. The more previous items you consider, the better they can predict the next thing.

They’re often used in toy text generators. You give them source text to seed them, randomly pick a word from the source text, and then start choosing what should come next. What’s nice about this is that, with nothing other than a piece of maths and a tight corpus, we can produce things that usually read like English, without having to teach a computer something as complex as grammar. Of course, sometimes you get grammatical-yet-nonsensical English out, but that’s hardly a problem in our case.
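
To make that concrete, here’s a toy word-level version of the idea in Python – a sketch of the technique, not the Rubyquiz code the bot actually runs:

import random
from collections import defaultdict

def build_chain(text, order=2):
    # Map each run of `order` words to the words seen following it.
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=30):
    key = random.choice(list(chain.keys()))
    output = list(key)
    for _ in range(length):
        followers = chain.get(tuple(output[-len(key):]))
        if not followers:
            break
        output.append(random.choice(followers))
    return " ".join(output)

corpus = open("chocolates.txt").read()  # the chocolate-box prose (filename illustrative)
print(generate(build_chain(corpus)))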

So I took the full prose from the back of Tom’s chocolates (Thornton’s Premium selection, for reference), some Markov text-generation code from an illuminating installment of Rubyquiz, and fiddled for a bit.

A short piece of work later and I had Markov Chocolates.

[Image: a Markov Chocolates tweet]

Roughly once every four hours (but it varies), you’ll get a fresh, tasty new Markov Chocolate in your Twitter feed. It’s another of my daft toys, but it still makes me chuckle. I’m thinking of expanding the corpus soon, and I hear the Markov corporation are keen to branch out into new product lines. For now, you can get your chocolate fix here.

Drivetime Spotify App

04 December 2011


Music Hack Day London this weekend. Tom, Blaine and I ended up making a thing which didn’t demo perfectly, called Drivetime.

It solves a common problem: how do I listen to the same classic pop and rock anthems on a Friday afternoon as people in other studios? The answer is with the Drivetime app.

Drivetime is a Spotify app. You drag a playlist into it to start broadcasting – you set your station name with a simple text field. Then, anyone loading the app can click on your name and hear what you’re playing. Not just the current track – it’ll start the track for others at exactly the point you’re currently at, and it’ll advance through the playlist when a track finishes. At any point, you can stop listening, and choose another song to listen to.

That’s it, really: a simple broadcast/listen system, all built into Spotify as a native application.

[Image: a screengrab of the Drivetime app]

The front-end is just HTML/CSS/Javascript, which is how Spotify apps are built; the communication with the client is via socket.io, on top of node.js on the server. Tom and Blaine wrote the back-end; I wrote a bunch of the front-end, which Blaine promptly made much better, and did all the hot pink neon. We all played a lot of Tina Turner in the hacking rooms, and we went home to bed betimes on the Saturday night.

All the code’s on GitHub. The Spotify app only runs (at the moment) if you’re a Spotify developer. We might see if we can take it a bit further and release it, so people can listen to one another’s classic rock selections.

It was a fun hack, even if my explanation of it was incoherent, and part of a great weekend – so many superb hacks, large and small, on display.

I was all ready to get really worked up about this post from Wieden + Kennedy, on “why we’re not hiring creative technologists any more; we’re hiring coders”.

Then I went and read it, and basically agreed with it all very fundamentally. In a nutshell: it’s not resisting the name, or the approach; it’s resisting the idea that being a creative technologist is something you can pick up quickly on a course. Which is, like so many similar things, nonsense; I hadn’t realised we’d hit that point on the sliding scale. When Igor says

“Only hire people to work at the crossover of creative and technology if they have strong, practical, current coding skills.”

I say: of course; why would you do otherwise? I thought that’s what that job title meant. I seriously didn’t realise that was happening, and I’m rather concerned that it is. This is worth taking very seriously: if you want people to think through software, they need to be able to make that software. Not wave their hands around and have ideas about technology. We think with our hands, be we artists, designers, developers, or writers. Having another layer of people to “have ideas” is not what you need. Ideas are free.

I still use that particular title to describe myself a lot, simply because I’m not best at being your average Joe Developer. I can; I have been; it’s just not my sweet spot. I’ve made reasonable-scale projects that work well; I understand how to take a fragile prototype and turn it into solid reality, and what making things work under load looks like: and yet the bit I’m good at, the bit I care about, is the going-from-nothing-to-something-working. How will you know what a thing is until you’ve held it in your hand? How fast can you change it as you learn from it? When’s it best to step away from vim and go back to pen and paper? That’s me.

So: I totally agree with Igor about the fact that whatever you call that role, it has to have solid, actual coding chops. Not a smattering of Processing here and some weak PHP there: actual, full-on, end-to-end skills. Code that’s live in the world.

But: the most interesting thing in the article wasn’t even the stuff about Creative Technology. It was about what it means to be an agency – or, being honest, a company – that wants to engage with technology through staff members like these.

While you don’t need to become an engineering company, you face some of their challenges. You need to understand, accept and embrace some of the nuts and bolts of software development, and take on board the work dedicated shops are doing on its processes. You need such a strong streak of code running through the atmosphere that coders want to come to you, and everyone else gets code spilling over them.

This is so true. You can’t just slap technologists or developers into a company to become a technology company. Technology has its own heartbeat, its own demands. You have to begin to wrestle with the processes of an engineering company, of an attitude that leads to better work. You have to learn how it’s going to shape your culture – by which I mean, how you want it to. You get to choose; you get to control these things. It will change your culture, that’s for certain, but you get to have some control over how. And similarly: you have to resist it just enough to stop becoming nothing but a software house; to retain the “creative” streak you were trying to hang onto when you started hiring for that job title.

The article ends with this nugget:

this is hard, and it’ll take time. It’s not just procedural, but cultural, so a big part of doing it comes down to who you hire and how you let them do their thing. But that’s exactly the point. That’s why it’s most important, way before you get all that fixed, and as the first major step on that road: just don’t hire “creative technologists” who aren’t strong coders.

Yep. That’s his real point: the headline is attention-grabbing, but here’s the meat, and the most important line here is “this is hard, and it’ll take time”.

It’s a cracking post. It’s all true. I’m going to stick to my guns and say I’m a technologist, of some kind: what I am best at is not one thing, but a mish-mash of things, and I’m better for the diversity of them. But I’ll also stick my head up and say yes, at the end of the day, if you want the Whole Thing Just Made: I will do that. I can do that. That’s why I get to use the T-word.

My only other advice for filling these positions: you don’t just need people who can do these things; you need people who can’t not do these things. Their instinct when faced with problems ought to be “let’s see what works; let’s check the assumptions we’re making are true by Just Doing It.” It’s not about jumping the gun: it’s recognising when you need to feel something, rather than guess something. And you don’t want to have to train that: you want people who just have to know for themselves.

So yeah, if you’re in one of those places that isn’t a software company, but you increasingly need to be a software company because, as Igor says, we have to be – that’s what the modern world now looks like – then it’s a really, really sharp piece of writing. I went in sceptical, but really: it was telling me what I already believed, and confirming it, and that’s a good thing, because it’s a message that needs to be written, not just assumed we all know. Good stuff.

The Setup

06 August 2011

A while back, I was interviewed for The Setup, and that interview is now up on the site. In it, I talk about what I use to get stuff (vaguely) done. Almost none of it will be surprising if you know me and what I do, but worth linking to nonetheless.

If you’re working in Ruby, you’re probably using Bundler. And if you’re using bundler, you’ll probably know that typing bundle install foo will install your bundle to a directory called foo.

Of course, the problem is that Bundler remembers this configuration, and if you now run bundle install, you’ll install your bundle to… foo.

This is annoying. It’s especially annoying if you never meant to install to foo, and that was just a typo.

So: if you want to reset bundler to installing to the default location – which is your system’s current gem folder – you’re going to spend a good hour messing around on Google looking for a plain English solution.

Can you guess who did this, and who this article is written for? That’s right, it’s me in the past!

Your solution: just run bundle install --system. That’ll install your bundle to the default system location – and continue to do so in the future. Problem solved.
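
In full, the dance looks like this:

bundle install foo        # whoops: installs the bundle to ./foo, and Bundler remembers it
bundle install            # ...which means this still installs to ./foo
bundle install --system   # back to the default system location, now and in future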

(As usual, when I write about how to do something technically, it’s because I couldn’t find the answer. That’s all.)