Greats

21 September 2019

Δεντρολίβανο says the packet on the table. And I, of course, know that this says DENTROLIBANO, pronounced in my head in a clear, southern English accent, every syllable delineated.

I do not know what Δεντρολίβανο is, and have to look further down the packet to realise that it is ROSEMARY.

I studied dead languages at school. (And, for reasons, a bit at University too).

Most of our peers didn’t understand why we’d do Greek. It seemed pointless, even more dead than Latin, and there was the hassle of a whole new alphabet to learn.

To me, it seemed obvious: someone gives you the chance to read words written over two thousand years ago. Wouldn’t you say yes? Wouldn’t you at least be curious?

 

Here is what I am left with:

  • ten years of Latin lets me stumble through gravestones and churches around the world, just enough vocabulary to decipher a decent amount (bar the eccentricities of Church Latin), and I could probably still scan poetry if I had to. It is exciting to look at stone, and see something come to life.
  • three years of Greek leaves me with a mere handful of words, practically no grammar, but I still know the alphabet.

What this translates to is: I can read road signs. It takes me longer than I’d like, which can be distracting when I’m driving, and there’s usually a romanisation underneath. But: I can read road signs!

I can read lots of other things too, speak them out loud, say them excitedly as we walk by or browse a menu.

I can speak the letters, and for every word that I recognise – either through old muscle memory of vocabulary or, more likely, because it’s pretty similar to something in another language – there are a hundred more that mean nothing to me. (Like the Latin in churches, I fare better at the ancient sites – a few words in the stone at Messene, but mainly names, gods, goddesses, and my favourite of all, the long list of all the wrestlers at the Palaestra. At the pace I read it, it sounds like a classroom register).

And I definitely, absolutely, cannot pronounce it, as shopkeepers and restaurant staff across the Peloponnese can attest.

 

It’s not really DENTROLIBANO; it’s ‘dentrolivano’, spoken softly, with that beta becoming more like a soft ‘v’ in modern Greek pronunciation.

In my head, Greek is pronounced with the lugubrious tenor of my classics teacher. “ζῷον”, he says: “zdaw-ohn”, that omega extended with the lips in a perfect oh. (Zoon, “animal”, and off into zoological and so forth we go).

Dead languages read like history, but they sound like your classics teacher; all these ancient men and women (but mainly men) thousands of miles away, speaking in a plummy classroom accent where you can hear every letter and especially the endings of the words to catch their declension.

This is not what Greek sounds like any more, because Greek is not a dead language.

 

I knew this in theory, but I was really not prepared for how pretty it would be: those same characters spoken by tripping, delicate, Mediterranean voices, breathy on the chis (but less on the breathings, which I can’t see any more), all manner of rough edges smoothed, all those syllables neatly danced around. “ευχαριστώ!”, “thank you”; we get the Eucharist, the giving of thanks, from this, but here it is “ef’hristo!”, an everyday word that I find myself saying a great deal, somewhat apologetic at my lack of the rest of the language.

(We go to a chemist for some eye drops, which we manage to acquire between us, the chemist, the people in the queue and the chemist’s friends who hang out in the shop. I hear the old lady grumble something about Ελληνικά, and I want to say “Yes, I know! I’m annoyed I don’t speak Greek, you’re annoyed I don’t speak Greek, we’re all annoyed I don’t speak Greek!”. What I really say is: “ευχαριστώ!”)

 

Betas have become soft vs, upsilons are somewhere between an English “f” and “v”, the etas I say like “air” are now “ee”. It all makes sense when you think about it, but it is upside down to me. (My partner’s Greek colleague at work sighs when she tells him I studied Ancient Greek – “we had to do that at school, I hated it – it’s all backwards!” So we both agree on that, then).

But it’s alive, floating, bubbling. I think back to Xenophon’s Persian Expedition (Anabasis IV), my set text at 16, written around 2400 years before I was taught it – and imagine all those men standing in the snow, marching on the spot in bare feet to keep warm (and in preference to the un-tanned sandals that froze to their colleagues’ feet), chattering in this rolling, living language. I have to admit, it makes more sense now.

I know better what their faces look like, and what their tongues sound like.

Peter Hoving on 16mm film

11 September 2019

I loved this ~45m documentary from Peter Hoving on shooting 16mm on a wind-up Bolex.

It delicately combines a technical overview of the Bolex camera (and, later, the editing process and sound sync systems)… with a look back at Hoving’s own first films from the sixties on it, the story of a life shooting moving images, a brief glimpse into the social history of America.

All at a delicate, leisurely pace, with time for the images to breathe. No rushed cuts, no heavy edits; quite a lot of Milt Jackson on the soundtrack. Practically no attention paid to the conventions of the YouTube era.

Just a gentle, thoughtful film about making moving images.

Don’t leave writing to writers. Don’t delegate your area of interest and knowledge to people with stronger rhetorical resources. You’ll find your voice as you make your way. There is, however, one thing to learn from writers that non-writers don’t always understand. Most writers don’t write to express what they think. They write to figure out what they think. Writing is a process of discovery. Blogging is an essential tool toward meditating over an extended period of time on a subject you consider to be important.

Marc Weidenbaum on the value of straight-up blogging, in a place you own yourself. All of this. I’ve been quiet here – less quiet at my work site – but not absent, and knowing that this is mine, and that what I’m thinking about was always present here – even in the Pinboard links – has value.

Panoramical

Panoramical is out, and it’s been lovely to play with: a series of visual and sonic canvases to manipulate and explore.

I’ve been keen to play it partly because of its support for interesting controllers not traditionally associated with videogames. An early version was played with the eight faders of the Korg Nanokontrol; Brendan Byrne built a limited edition controller, the Panoramical Pod, for the final release. What’s really interesting is that Panoramical also supports any controller that spits out MIDI – and so I started thinking about building my own controller(s) for it. I’ve long been interested in controller design, and have built a few unusual interfaces for music and computers in recent years. So I got to thinking about what I could do with Panoramical.

The world of Panoramical is manipulated via eighteen variables. These are described inside Panoramical as nine two-axis variables. With a keyboard and mouse, you hold a key (or selection of keys) to pick variables to manipulate, and move the mouse to alter them.
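As a sketch of that structure (not Panoramical’s actual code – the CC ordering and function names here are my assumptions), eighteen 7-bit MIDI controller values can be folded into nine normalised two-axis pairs:

```python
# Fold eighteen 7-bit MIDI CC values (0-127) into nine two-axis
# variables, with each axis normalised to 0.0-1.0.

def normalise(cc_value):
    """Scale a 7-bit MIDI value (0-127) to the 0.0-1.0 range."""
    return cc_value / 127.0

def fold_to_pairs(cc_values):
    """Turn 18 CC values into nine (x, y) tuples.

    Assumes values arrive interleaved: x1, y1, x2, y2, ...
    """
    if len(cc_values) != 18:
        raise ValueError("expected exactly 18 CC values")
    return [(normalise(cc_values[i]), normalise(cc_values[i + 1]))
            for i in range(0, 18, 2)]

# All controls at the centre detent (64) land at roughly (0.5, 0.5).
pairs = fold_to_pairs([64] * 18)
```

Any controller that can emit eighteen CCs – faders, encoders, or X-Y pads – can feed a mapping like this, which is why the “anything that spits out MIDI” design is so open-ended.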

The Pod has a row for the “horizontal” component of a variable, and a row for the “vertical” component of a variable. I’m not sure this is as elegant or interesting as controlling both axes with a single input. I investigated non-spring-returning analogue joysticks, much like those found on the VCS3, but they all appear to be horrendously expensive as components. (Cheaper joysticks, which would need the spring removing, have an unpleasant deadzone, which I don’t think would be ideal for Panoramical).

And then I had an idea: rather than building a controller out of hardware, I could start by building one in TouchOSC.

TouchOSC is a programmable control surface for iOS and Android tablets that spits out MIDI or OSC data. It’s possible to design interfaces yourself, in an editor on a computer, before transferring the UI to your tablet. You can then connect the tablet TouchOSC app to a ‘bridge’ on your computer in order to spit out MIDI data to anything that can receive it.

One of the many UI components on offer is an X-Y pad.

[Image: the TouchOSC layout – nine X-Y pads in a 3×3 grid]

This is my controller for Panoramical. Nine X-Y pads, in a 3×3 grid – much like the lower left of the screen. It didn’t take too long to lay this out, although I went back and added some padding around the pads – otherwise, you’d slide off one into another, leading to unintentional lurches in values.

It didn’t take long to lay out, although it really didn’t help that TouchOSC doesn’t rotate its understanding of ‘vertical’ and ‘horizontal’ when you design in landscape mode – that was confusing for a while.
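The padding is just a bit of grid arithmetic; here’s a minimal sketch with made-up pixel dimensions (TouchOSC’s editor does this by dragging, not code):

```python
def grid_rects(width, height, cols=3, rows=3, pad=10):
    """Return (x, y, w, h) rectangles for a padded grid of pads.

    `pad` pixels of gutter surround each pad, so a finger sliding
    off one pad lands in dead space rather than lurching the value
    of its neighbour.
    """
    cell_w, cell_h = width / cols, height / rows
    return [(c * cell_w + pad, r * cell_h + pad,
             cell_w - 2 * pad, cell_h - 2 * pad)
            for r in range(rows) for c in range(cols)]

# A 3x3 grid on a 960x720 canvas, 10px of padding around each pad.
rects = grid_rects(960, 720)
```

The trade-off is pad size against gutter size: more padding means fewer accidental lurches, but smaller targets for each finger.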

[Image: the second TouchOSC page of rotary encoders]

I also added a second ‘page’ with eighteen rotary encoders, as per the official Panoramical Pod. I don’t use this to play Panoramical, but it makes it much easier to configure the mapping.

Is it fun to play with?

I definitely think so. The multitouch UI gives you control over many parameters at once, and the coupling of the related variables (each XY pair basically controls similar elements, but to different effects) makes a lot of sense – the design of TouchOSC’s pads even mirrors the visualisation in the lower right of the screen. It’s also fun to be able to use ten digits to play it, manipulating many things at once; it makes Panoramical feel even more like an instrument to be played, explored, and improvised with.

You can download my TouchOSC mapping here, if you’d like, and you can find out more about TouchOSC here. TouchOSC’s documentation is reasonably good, so I’m afraid I can’t provide support on getting it running. Suffice to say: you’ll need the layout on your touch device, and the Bridge running on your computer. In Panoramical, set the MIDI input to TouchOSC Bridge, and then I recommend mapping the controls from the ‘rotaries’ page on your device – it’s much easier to map from there – then you can play Panoramical from the ‘pads’ page.

I recently had a problem with my Sony RX100 mk3: it wouldn’t automatically swap between displaying in the viewfinder and on the LCD.

If turned on with the viewfinder extended, the viewfinder alone would work; if turned on with the viewfinder shut, the LCD would work. But if the viewfinder was on, raising and lowering it to my eye wouldn’t swap between the two. I spent a while faffing with this, convinced it was broken, and failing to find anything on the internet to discuss this.

Anyhow, then I found this video, which explains the problem, and takes a full two minutes to get to the point. So I’m reiterating that point, in writing, for everybody using a search engine!

Long story short: if you’d guessed that the sensor that detects when the camera is raised to your eye is playing up, possibly because of dirt, you’re entirely right. What you might not have worked out is where that sensor is.

It’s here:

[Image: the location of the eye sensor on the Sony RX100]

It’s not in the viewfinder; it’s to the right of it, on the lip above the LCD. Mine didn’t look dirty, but I wiped it down a few times and sure enough, everything worked fine again. Problem solved, and one of the most useful features of this little camera worked properly again.

Fresh Lick Of Paint

14 January 2015

Quick note for the RSS crew: I’ve overhauled the design of infovore.org the tiniest bit. I think everything works Well Enough™.

Why? Primarily, because every time I look at it, I realise my eyesight is tired of 12px typefaces, and it was all a bit cluttered. So: something simple, a bit more spread out, none of that two-column nonsense, and a face I find readable. All of which was a good excuse to practice my Sass and learn a few new tricks.

And now, back to your regular schedule of endless Pinboard links and the odd post about something or other.

Week 16

04 February 2013

Over at my home-for-work, I write a bit about Week 16.

Week 15

28 January 2013

A new location for weeknotes: I’ve overhauled http://tomarmitage.com and will use it as a professional portfolio and outlet (whilst continuing this site as my primary home and blog). To that end, weeknotes will get published there, and I’ll make sure I link to all developments from over here. But you might like to check it out.

And so: here’s week 15.

Last weekend, BERG invited a selection of friends to participate in their first Little Printer hackday. Over the course of a short Saturday, we were asked to explore the API for making “publications” for Little Printer, and test them out on sample devices.

I had a few ideas, but decided for expediency to return to my “Hello World” of connected things: Tower Bridge.

My publication would be something you could schedule at pretty much any time, on any day, and get a list of bridge lifts in the next 24 hours (if there were any).
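The core of such a publication is a simple filter over upcoming lifts; here’s a sketch, assuming each lift record carries a datetime (the real lift data format may differ, and the boats and times are just examples):

```python
from datetime import datetime, timedelta

def lifts_in_next_24h(lifts, now):
    """Filter bridge-lift records to those within 24 hours of `now`.

    Each lift is assumed to be a dict with a datetime under "time".
    """
    cutoff = now + timedelta(hours=24)
    return [lift for lift in lifts if now <= lift["time"] < cutoff]

# Illustrative data: one lift this afternoon, one in two days.
now = datetime(2013, 1, 26, 9, 0)
lifts = [
    {"boat": "Dixie Queen", "time": datetime(2013, 1, 26, 14, 30)},
    {"boat": "Waverley", "time": datetime(2013, 1, 28, 18, 0)},
]
upcoming = lifts_in_next_24h(lifts, now)
```

Because the window is anchored to delivery time rather than to calendar days, the publication works whenever the subscriber schedules it.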

I could have made this a very small, simple paragraph, to fit into a busy list of publications. Instead, I decided to explore the capacities of the Little Printer delivery as a medium.

I was interested in the visual capacity of the Printer: what could I communicate on a 2-inch wide strip of paper? All of BERG’s publications to date have been very beautiful, and the visual design of publications feels important – it’s one of the many things that distinguishes Little Printer, and I wanted to try to aspire to it at the very least.

So I built an Observer’s Guide to Tower Bridge, based on a childhood of Observer’s Guides and I-Spy books. As well as listing lifts for the next 24 hours, I’d show users pictures of the boat that would be going through, so they could identify it.

[Image: an Observer’s Guide to Tower Bridge delivery]

I also visually communicated which direction upstream and downstream are. I don’t think it’s immediately obvious to most people, and so the “downstream” icon shows that it’s towards Tower Bridge, whilst the offset of the “upstream” icon illustrates that it’s towards Big Ben and the Millennium Wheel. It felt like a natural way to make this clear visually, and was economical in the vertical dimension (which is one of Little Printer’s bigger constraints).

[Image: the upstream and downstream icons]

Early versions showed a photo for every lift, which turned out to make the publication too big: I needed to shrink that vertical axis. I did this by only including photos of an individual boat once per delivery – you don’t need multiple pictures of the same boat. The second time a boat passed through the bridge, I instead displayed a useful fact about it (if I knew one) – and otherwise, just the lift time.
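That once-per-boat logic is a small dedupe pass over the delivery; here’s a sketch with illustrative field names and data, not the publication’s real code:

```python
def plan_delivery(lifts, facts):
    """Decide what to render for each lift in a delivery.

    Shows a boat's photo only the first time it appears; on repeat
    appearances, falls back to a known fact, or else just the time.
    """
    seen = set()
    plan = []
    for lift in lifts:
        boat = lift["boat"]
        if boat not in seen:
            seen.add(boat)
            plan.append((lift["time"], boat, "photo"))
        elif boat in facts:
            plan.append((lift["time"], boat, "fact"))
        else:
            plan.append((lift["time"], boat, "time-only"))
    return plan

# Illustrative data: the same boat passing through twice.
lifts = [
    {"time": "09:45", "boat": "Dixie Queen"},
    {"time": "14:30", "boat": "Dixie Queen"},
    {"time": "16:00", "boat": "Gladys"},
]
plan = plan_delivery(lifts, {"Dixie Queen": "Built as a ferry."})
```

Every repeat appearance swaps a tall photo for a single line of text, which is how the delivery got short enough to spare the printhead.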

[Image: grey smudging on a long delivery]

By making the publication shorter, I also avoided an interesting side-effect of running the printhead too hard. The grey smudging you see above is where the printhead was running very hot, having printed eight inches of bridge lifts and photos (it prints bottom-to-top, so the text is the “right way up”). Because I’d printed so much black up top, it seemed like the head had a bit of residual heat left that turned the paper grey. This is a side-effect of how thermal printers work. You don’t get this if you don’t go crazy with full-black in a long publication (and, indeed, none of the sample publications have any of these issues, owing to their careful design) – a constraint I discovered through making.

The Observer’s Guide was an interesting experiment, but it made me appreciate the BERG in-house publications even more: they’re short and punchy, making a morning delivery of several things – bridge lifts, calendar details, Foursquare notes, a quote of the day – packed with information in a relatively short space.

I was pleased with my publication as an exploration of the platform. It’s not open-source because the LP API is very much work in progress, but rest assured, this was very much a live demo of real working code on a server I control. If I were making a functional tool, to be included with several publications, I’d definitely make something a lot shorter.

It was lovely to see Little Printer in the world and working away. It was also great to see so many other exciting publications, cranked out between 11am and 4:30pm. I think my favourite hack might have been Ben and Devin’s Paper Pets, but really, they were all charming.

Lots of fun then, and interesting to design against the physical constraint of a roll of thermal paper and a hot printhead.

So: a new season of Dan Harmon’s marvellous Community has begun in the US. It’s a very, very funny sitcom. It’s also a very funny sitcom that frequently plays on the expectation that the audience is deeply versed in pop culture, with entire episodes that pastiche movies and genres. You should watch it.

In S03E01, which aired last week, Abed – the TV geek inside the show – is distraught that his favourite show (Cougar Town) has been moved to mid-season – “never a good sign“. Afraid it’ll be cancelled, his friends try to find him a new favourite show. And, eventually, they stumble upon “a British sci-fi show that’s been on the air since 1962“:

Inspector Spacetime.

This is already a fairly brilliant joke – the phone box! The reboot-pastiching title card! And, you know, I hope it’ll return to haunt the rest of the series.

But: then, the internet worked its magic.

The thing that has been entertaining me beyond all measure this week is Inspector Spacetime Confessions.

This is a Tumblr account in a popular format: the “Confessions” format, in which fans of TV shows, books, movies, etc post “secret” confessions about their take on characters, episodes, or arcs (sometimes, secret crushes) as text written across images. Amateur Photoshop at its best. It was huge on LiveJournal, and it’s ideally suited to Tumblr.

Except: there are, currently, about fifteen seconds of Inspector Spacetime in existence.

This, of course, does not matter when you’re TV literate. What’s happened is: fans are just making it up. They’re back-extrapolating an entire chronology based on fifteen seconds of “tone”, and their entire knowledge of the Doctor Who canon.

So, they’re diving into gags about former Inspectors:

They’re torn about Stephen Fry:

The Steve Carell TV movie wasn’t well received:

And of course, they’re concerned about pocket fruit:

But there are more sophisticated jokes emerging. Like this one:

This presumes, in the form of a “fan confession”, that: the showrunner of Inspector Spacetime is also running another show – Hercule – which appears to be a modern-day Poirot reboot, and of course, because Benedict Cumberbatch is starring in Hercule, he’ll never be the Inspector.

This is sophisticated on a bunch of levels, but its elegance is in the way that entire gag is contained in one sentence and a photograph.

Or how about this:

which presumes Inspector Spacetime lives in that land of fictional TV shows, and thus a fictional actor (Alexander Dane) who starred in Galaxy Quest really ought, one day, to return to SF as the Inspector.

There’s a slowly emerging canon, thanks in part to the Inspector Spacetime forum. A lot of the canon is useful – the DARSIT feels better than the CHRONO box, everyone’s sold on Fee-Line – but it’s sometimes nice to see people buck it, or introduce new ideas (and Inspectors) in the most throwaway of Confessions. All this, from a fifteen-second joke that we don’t know will continue (or if it’ll introduce continuity we don’t know about yet).

And yes, Dan Harmon knows about it.

In the week between the two most recent episodes of Community, this has given me a vast amount of joy; I’ve been rattling the various configurations of Inspectors and Associates in my head, trying to remember my favourite episodes of a sci-fi show that never existed. And then giggling at the ingenuity and brilliance of some of the other confessions appearing – of the whole fictional history they bring to life, of Liam Neeson’s run in the 80s or the creepiness of the Laughing Buddas.

It’s really hard to explain the joy (especially as someone fascinated by the inner workings of serial drama) that this brings me. It’s a funny kind of magic – it’s unofficial, didn’t happen on TV, and just relies on fans’ understanding of not only TV shows, but how telly itself works. The results are just brilliant.

I’m off to write my own confession now. There’s always room for one more.