
Panoramical is out, and it’s been lovely to play with: a series of visual and sonic canvases to manipulate and explore.

I’ve been keen to play it partly because of its support for interesting controllers not traditionally associated with videogames. An early version was played with the eight faders of the Korg Nanokontrol; Brendan Byrne built a limited edition controller, the Panoramical Pod, for the final release. What’s really interesting is that Panoramical also supports any controller that spits out MIDI – and so I started thinking about building my own controller(s) for it. I’ve long been interested in controller design, and have built a few unusual interfaces for music and computers in recent years. So I got to thinking about what I could do with Panoramical.

The world of Panoramical is manipulated via eighteen variables, described inside Panoramical as nine two-axis variables. With a keyboard and mouse, you hold a key (or selection of keys) to pick the variables to manipulate, and move the mouse to alter them.

The Pod has a row for the “horizontal” component of each variable, and a row for the “vertical” component. I’m not sure this is as elegant or as interesting as controlling both axes with a single input. I investigated non-spring-returning analogue joysticks, much like those found on the VCS3, but they all appear to be horrendously expensive as components. (Cheaper joysticks, which would need the spring removing, have an unpleasant deadzone that I don’t think would be ideal for Panoramical.)

And then I had an idea: rather than building a controller out of hardware, I could start by building one in TouchOSC.

TouchOSC is a programmable control surface for iOS and Android tablets that spits out MIDI or OSC data. It’s possible to design interfaces yourself, in an editor on a computer, before transferring the UI to your tablet. You can then connect the tablet TouchOSC app to a ‘bridge’ on your computer in order to spit out MIDI data to anything that can receive it.
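(For the curious, that’s really all a controller is, as far as Panoramical is concerned: a stream of numbers arriving over MIDI. Below is a minimal sketch in Python, using the mido library – nothing to do with TouchOSC’s internals – of a pretend controller emitting nine pairs of control-change values, one pair per two-axis variable. The output port and the CC numbers are arbitrary choices for illustration; Panoramical doesn’t mind, because you map whatever controls it sees.)

    # A toy stand-in for a MIDI controller: nine two-axis "pads", each sending a
    # pair of control-change messages. The port and CC numbers are arbitrary.
    import mido

    port = mido.open_output()  # or name a specific port, e.g. the TouchOSC Bridge

    def send_pad(pad_index, x, y):
        """Send one pad's horizontal and vertical values (floats from 0.0 to 1.0)."""
        cc_horizontal = pad_index * 2      # pads 0-8 use CCs 0/1, 2/3, ... 16/17
        cc_vertical = pad_index * 2 + 1
        port.send(mido.Message('control_change', control=cc_horizontal, value=int(x * 127)))
        port.send(mido.Message('control_change', control=cc_vertical, value=int(y * 127)))

    # e.g. nudge the centre pad's two axes:
    send_pad(4, 0.8, 0.25)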

One of the many UI components on offer is an X-Y pad.

[Image: my TouchOSC layout for Panoramical]

This is my controller for Panoramical. Nine X-Y pads, in a 3×3 grid – much like the lower left of the screen. It didn’t take too long to lay this out, although I went back and added some padding around the pads – otherwise, you’d slide off one into another, leading to unintentional lurches in values.

It also really didn’t help that TouchOSC doesn’t rotate its understanding of ‘vertical’ and ‘horizontal’ when you design a landscape-mode layout – that was confusing for a while.

[Image: the ‘rotaries’ page of the layout]

I also added a second ‘page’ with eighteen rotary encoders, as per the official Panoramical Pod. I don’t use this to play Panoramical, but it makes it much easier to configure the mapping.

Is it fun to play with?

I definitely think so. The multitouch UI gives you control over many parameters at once, and the coupling of the related variables (each X-Y pair basically controls similar elements, but has different effects) makes a lot of sense – the design of TouchOSC’s pads even mirrors the visualisation in the lower right of the screen. It’s also fun to be able to use all ten digits to play it, manipulating many things at once; it makes Panoramical feel even more like an instrument to be played, explored, and improvised with.

You can download my TouchOSC mapping here, if you’d like, and you can find out more about TouchOSC here. TouchOSC’s documentation is reasonably good, so I’m afraid I can’t provide support on getting it running. Suffice to say: you’ll need the layout on your touch device, and the Bridge running on your computer. In Panoramical, set the MIDI input to TouchOSC Bridge; I recommend mapping the controls from the ‘rotaries’ page on your device – it’s much easier to map from there – and then playing Panoramical from the ‘pads’ page.

I recently had a problem with my Sony RX100 mk3: it wouldn’t automatically swap between displaying in the viewfinder and on the LCD.

If turned on with the viewfinder extended, the viewfinder alone would work; if turned on with the viewfinder shut, the LCD would work. But with the viewfinder up, raising the camera to my eye and lowering it again wouldn’t swap between the two. I spent a while faffing with this, convinced it was broken, and failing to find anything on the internet that discussed it.

Anyhow, then I found this video which explains the problem, and takes a full two minutes to get to the point. So I’m re-iterating that point, in writing, for everybody using a search engine!

Long story short: if you’d guessed that the sensor that detects when the camera is raised to your eye is playing up, possibly because of dirt, you’re entirely right. What you might not have worked out is where that sensor is.

It’s here:

[Photo: the eye sensor on the RX100]

It’s not in the viewfinder; it’s to the right of it, on the lip above the LCD. Mine didn’t look dirty, but I wiped it down a few times and, sure enough, the problem went away: one of the most useful features of this little camera worked properly again.

Fresh Lick Of Paint

14 January 2015

Quick note for the RSS crew: I’ve overhauled the design of infovore.org the tiniest bit. I think everything works Well Enough™.

Why? Primarily, because every time I look at it, I realise my eyesight is tired of 12px typefaces, and it was all a bit cluttered. So: something simple, a bit more spread out, none of that two-column nonsense, and a face I find readable. All of which was a good excuse to practice my Sass and learn a few new tricks.

And now, back to your regular schedule of endless Pinboard links and the odd post about something or other.

Week 16

04 February 2013

Over at my home-for-work, I write a bit about Week 16.

Week 15

28 January 2013

A new location for weeknotes: I’ve overhauled http://tomarmitage.com and will use it as a professional portfolio and outlet (whilst continuing this site as my primary home and blog). To that end, weeknotes will get published there, and I’ll make sure I link to all developments from over here. But you might like to check it out.

And so: here’s week 15.

Last weekend, BERG invited a selection of friends to participate in their first Little Printer hackday. Over the course of a short Saturday, we were asked to explore the API for making “publications” for Little Printer, and test them out on sample devices.

I had a few ideas, but decided for expediency to return to my “Hello World” of connected things: Tower Bridge.

My publication would be something you could schedule at pretty much any time, on any day, and get a list of bridge lifts in the next 24 hours (if there were any).

I could have made this a very small, simple paragraph, to fit into a busy list of publications. Instead, I decided to explore the capacities of the Little Printer delivery as a medium.

I was interested in the visual capacity of the Printer: what could I communicate on a 2-inch wide strip of paper? All of BERG’s publications to date have been very beautiful, and the visual design of publications feels important – it’s one of the many things that distinguishes Little Printer, and I wanted, at the very least, to aspire to that.

So I built an Observer’s Guide to Tower Bridge, based on a childhood of Observer’s Guides and I-Spy books. As well as listing lifts for the next 24 hours, I’d show users pictures of the boat that would be going through, so they could identify it.

[Image: a delivery from the Observer’s Guide]

I also visually communicated which direction upstream and downstream are. I don’t think it’s immediately obvious to most people, and so the “downstream” icon shows that it’s towards Tower Bridge, whilst the offset of the “upstream” icon illustrates that it’s towards Big Ben and the Millennium Wheel. It felt like a natural way to make this clear visually, and was economical in the vertical dimension (which is one of Little Printer’s bigger constraints).

[Image: the upstream and downstream direction icons]

Early versions showed a photo for every lift, which turned out to make the publication too big: I needed to shrink that vertical axis. I did this by only including photos of an individual boat once per delivery – you don’t need multiple pictures of the same boat. The second time a boat passed through the bridge, I instead displayed a useful fact about it (if I knew one) – and otherwise, just the lift time.
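(In rough Python, the rule looked something like this – a sketch with invented field names, not the actual publication code:)

    # Sketch of the per-delivery rule: a photo the first time a boat appears,
    # a fact (if I know one) the second time, and just the lift time otherwise.
    from collections import Counter

    def plan_delivery(lifts, facts):
        """lifts: bridge lifts in the next 24 hours, in time order.
        facts: a dict mapping a vessel's name to a useful fact about it."""
        appearances = Counter()
        items = []
        for lift in lifts:
            boat = lift["vessel"]
            appearances[boat] += 1
            if appearances[boat] == 1:
                items.append((lift["time"], boat, "photo", lift["photo_url"]))
            elif appearances[boat] == 2 and boat in facts:
                items.append((lift["time"], boat, "fact", facts[boat]))
            else:
                items.append((lift["time"], boat, "time", None))
        return items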

[Image: grey smudging on a long delivery]

By making the publication shorter, I also avoided an interesting side-effect of running the printhead too hard. The grey smudging you see above is where the printhead is running very hot, having printed eight inches of bridge lifts and photos (it prints bottom-to-top, so the text comes out the “right way up”). Because I’d printed so much black up top, the head seemed to have a bit of residual heat left, which turns the paper grey. This is a side-effect of how thermal printers work. You don’t get it if you don’t go crazy with full-black in a long publication (and, indeed, none of the sample publications have any of these issues, owing to their careful design) – a constraint I discovered through making.

The Observer’s Guide was an interesting experiment, but it made me appreciate the BERG in-house publications even more: they’re short and punchy, making a morning delivery of several things – bridge lifts, calendar details, Foursquare notes, a quote of the day – packed with information in a relatively short space.

I was pleased with my publication as an exploration of the platform. It’s not open-source, because the LP API is very much a work in progress, but rest assured: this was a live demo of real working code on a server I control. If I were making a functional tool, to be included alongside several other publications, I’d definitely make something a lot shorter.

It was lovely to see Little Printer in the world and working away. It was also great to see so many other exciting publications, cranked out between 11am and 4:30pm. I think my favourite hack might have been Ben and Devin’s Paper Pets, but really, they were all charming.

Lots of fun then, and interesting to design against the physical constraint of a roll of thermal paper and a hot printhead.

So: a new season of Dan Harmon’s marvellous Community has begun in the US. It’s a very, very funny sitcom. It’s also a very funny sitcom that frequently plays on the expectation that the audience is deeply versed in pop culture, with entire episodes that pastiche movies and genres. You should watch it.

In S03E01, which aired last week, Abed – the TV geek inside the show – is distraught that his favourite show (Cougar Town) has been moved to mid-season – “never a good sign”. Afraid it’ll be cancelled, his friends try to find him a new favourite show. And, eventually, they stumble upon “a British sci-fi show that’s been on the air since 1962”:

Inspector Spacetime.

This is already a fairly brilliant joke – the phone box! The reboot-pastiching title card! And, you know, I hope it’ll return to haunt the rest of the series.

But: then, the internet worked its magic.

The thing that has been entertaining me beyond all measure this week is Inspector Spacetime Confessions.

This is a Tumblr account in a popular format: the “Confessions” format, in which fans of TV shows, books, movies, and so on post “secret” confessions about their take on characters, episodes, or arcs (sometimes, secret crushes) as text written across images. Amateur Photoshop at its best. It was huge on LiveJournal, and it’s ideally suited to Tumblr.

Except: there are, currently, about fifteen seconds of Inspector Spacetime in existence.

This, of course, does not matter when you’re TV-literate. What’s happened is: fans are just making it up. They’re back-extrapolating an entire chronology from fifteen seconds of “tone” and their entire knowledge of the Doctor Who canon.

So, they’re diving into gags about former Inspectors:

They’re torn about Stephen Fry:

The Steve Carell TV movie wasn’t well received:

And of course, they’re concerned about pocket fruit:

But there are more sophisticated jokes emerging. Like this one:

This presumes, in the form of a “fan confession”, that the showrunner of Inspector Spacetime is also running another show – Hercule, which appears to be a modern-day Poirot reboot – and that, because Benedict Cumberbatch is starring in Hercule, he’ll never be the Inspector.

This is sophisticated on a bunch of levels, but its elegance is in the way that entire gag is contained in one sentence and a photograph.

Or how about this:

which presumes Inspector Spacetime lives in that land of fictional TV shows, and thus a fictional actor (Alexander Dane) who starred in Galaxy Quest really ought, one day, to return to SF as the Inspector.

There’s a slowly emerging canon, thanks in part to the Inspector Spacetime forum. A lot of the canon is useful – the DARSIT feels better than the CHRONO box, everyone’s sold on Fee-Line – but it’s sometimes nice to see people buck it, or introduce new ideas (and Inspectors) in the most throwaway of Confessions. All this, from a fifteen-second joke that we don’t know will continue (or if it’ll introduce continuity we don’t know about yet).

And yes, Dan Harmon knows about it.

In the week between the two most recent episodes of Community, this has given me a vast amount of joy; I’ve been rattling the various configurations of Inspectors and Associates in my head, trying to remember my favourite episodes of a sci-fi show that never existed. And then giggling at the ingenuity and brilliance of some of the other confessions appearing – of the whole fictional history they bring to life, of Liam Neeson’s run in the 80s or the creepiness of the Laughing Buddas.

It’s really hard to explain the joy (especially as someone fascinated by the inner workings of serial drama) that this brings me. It’s a funny kind of magic – it’s unofficial, didn’t happen on TV, and relies on fans’ understandings of not only TV shows, but how telly itself works. The results are just brilliant.

I’m off to write my own confession now. There’s always room for one more.

Buried somewhere in the inbox I use for the Tower Bridge account was an email from Twitter Support. So, let’s get the apology out of the way: Twitter did contact me; it was buried in an old Gmail account. And, sure enough, on the first of June, here we go:

Twitter responds to reports from trademark holders regarding the use of trademarks that we determine is misleading or confusing with regard to brand or business affiliation. It has come to our attention that your Twitter account is in violation of Twitter’s trademark policy:

http://support.twitter.com/entries/18367

This account has been suspended.

Let’s see what they have to say at the URL:

Using a company or business name, logo, or other trademark-protected materials in a manner that may mislead or confuse others with regard to its brand or business affiliation may be considered a trademark policy violation.

OK, I can see the reasoning behind that. There’s not much space in the Bio field to explain that it’s not official, but their policy is clear: it doesn’t matter if you were attempting to mislead; if there’s any likelihood of confusion, you’re breaking their rules.

They go on:

  • When there is a clear intent to mislead others through the unauthorized use of a trademark, Twitter will suspend the account and notify the account holder.
  • When we determine that an account appears to be confusing users, but is not purposefully passing itself off as the trademarked good or service, we give the account holder an opportunity to clear up any potential confusion. We may also release a username for the trademark holder’s active use.

OK. So, what they did was the first thing.

There was no intent to mislead. Seriously, what else would you call a bot that did this? I can think of several alternatives, but in 2008, it seemed obvious.

Does it break the current terms of service? Perhaps.

What I’m really, really annoyed by is this: I have not been given an opportunity to clear up the potential confusion. I’ve just had the account suspended, the username taken, and, well, that’s it.

The account didn’t pass itself off as a trademark, or a registered company, or as anything related to the exhibition that runs within the edifice. If it passed itself off as anything, it was the structure itself. And everybody knows that really, bridges don’t talk, and certainly not that politely. There’s an interesting question – perhaps for a separate, less emotional post – about the relationship between the Instrumented City and the corporations that own the things that are Instrumented – but that’s not for now.

For now: I’m going to pursue this with Twitter, and at least resurrect the bot somewhere. When it comes back, it seems unlikely it’ll be at the original account name.

Slight Outage

02 March 2011

This site was down for about four days, from Saturday, owing to a server failure.

We’re now back in the world, after a hair-raising few days. Usual service now resumed.

The Story of a Lost Bomber

23 January 2011

It was History Hack Day this weekend. My friend Ben Griffiths scraped the Commonwealth War Graves Commission’s register to try to contextualise the death of his great-uncle in World War II.

Before you read on, please do read his story. It’s worth your time.

Ben’s hack is intelligent and, as ever, he explains it with precision and grace. But really, it wasn’t the hack I wanted to draw to your attention; it was the story he tells.

Like many hacks at such events, it begins with data, scraped or ingested; Ben has plotted it over time, marking the categories his great-uncle is represented by.

But data over time isn’t a story; it’s just data over time. A graph; or, if you like, a plot. What makes it a story? A storyteller; someone to intervene, to show you what lies between the points, what hangs off that skeleton. Someone to write narrative – or, in Ben’s case, to relate history, both world and personal.

I’m left, after all this, thinking of just how young these bomber boys were. Looking at this data has been a much more moving exercise than I was expecting.

I found it very affecting, too, but not just because I was looking at the data: I was looking at it through the lens that Ben offered me in the story he told. When you consider it’s the story of one tragic loss amid 12,395 others, you pause, reflect, and try to perhaps comprehend that.

In the end, I couldn’t, entirely, but I tried – and because somebody told me just one story, about one individual, his plane, and his colleagues, I perhaps came closer to an understanding than I otherwise might have. And, because of that, I’m very grateful Ben shared that single story. I’d call that a very worthwhile hack.