A new year, and a new toy to begin it.

This all began when Tom started tweeting the prose from the back of a chocolate box.

[Screenshot: Tom's tweets]

One look at that and, having gagged a little on the truly purple prose, I could see only one obvious continuation: a machine to churn out chocolate descriptions endlessly.

Which was as good a time as any to play with Markov chains. Wikipedia will explain in more detail, but if you’ve never encountered them, a very rough explanation is: Markov chains are systems that model what the next item in a list will be based on the previous ones. The more previous items you have, the better they can predict the next one.

They’re often used in toy text generators. You give them source text to seed them, randomly pick a word from the source text, and then start choosing what should come next. What’s nice about this is that, with nothing other than a piece of maths and a tight corpus, we can produce things that usually read like English without having to teach a computer something as complex as grammar. Of course, sometimes you get grammatical-yet-nonsensical English out, but that’s hardly a problem in our case.
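To make that concrete, here’s a minimal sketch of the idea in Ruby (not the actual Rubyquiz code): an order-2 chain that maps each pair of consecutive words in the corpus to the words seen following that pair, then walks the mapping to generate new text. The corpus string below is just a stand-in for the real chocolate-box prose.

```ruby
# A minimal sketch of an order-2 Markov text generator.
# The corpus here is a stand-in, not the real Thorntons prose.
corpus = "rich dark chocolate with a smooth velvety truffle centre " \
         "and a hint of vanilla wrapped in rich dark chocolate"

words = corpus.split

# Build the chain: each pair of consecutive words maps to the list of
# words that were seen to follow that pair in the source text.
chain = Hash.new { |h, k| h[k] = [] }
words.each_cons(3) { |a, b, c| chain[[a, b]] << c }

# Generate: start from a random pair, then repeatedly pick one of the
# words that followed the current pair in the corpus.
key = chain.keys.sample
output = key.dup
20.times do
  nxt = chain[key].sample
  break if nxt.nil?
  output << nxt
  key = [key[1], nxt]
end

puts output.join(" ")
```

Because the next word is always sampled from words that genuinely followed the current pair somewhere in the corpus, the output tends to read like plausible English even though the generator knows nothing about grammar.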

So I took the full prose from the back of Tom’s chocolates (Thornton’s Premium selection, for reference), some Markov text-generation code from an illuminating instalment of Rubyquiz, and fiddled for a bit.

A short piece of work later and I had Markov Chocolates.

[Screenshot: Markov Chocolates]

Roughly once every four hours (but it varies), you’ll get a fresh, tasty new Markov Chocolate in your Twitter feed. It’s another of my daft toys, but it still makes me chuckle. I’m thinking of expanding the corpus soon, and I hear the Markov corporation are keen to branch out into new product lines. For now, you can get your chocolate fix here.

1 comment on this entry.

  • Tim | 6 Feb 2012

    This really is quite delightful. Thanks for the daftness.