One of the little sadnesses of my current job is that I don't have the chance to think about odd futures. Speculative powerpoint used to be part of my job but government work is too practical for that. Not a problem, but an occasional regret.
One of the things I used to wonder about was the 'internet of things' and how the obvious narratives might be subverted. I don't have any technical or design expertise to bring to these things but I know what large organisations are like and how they think. And I know what I like. And I've had the rare privilege of hanging out with Alex a lot.
I did a little talk about it for Radio 4 back in 2011 and two talks at Next in 2011 and 2012. They ended up being a presentation of two halves. The first was 'What Might Happen When We Put Data in Things', the second was 'There's a Walrus In My Fridge And It Won't Shut Up'. You can probably guess at the progression. You can watch them at those links.
So it's frustrating, when I read such splendid things as Tom's write-up from FOO, that I'm not in the conversation now it's taking off. Ah well.
If there were any ideas in my earlier talks they were probably these:
1. Before we get to a useful Internet of Things we'll have to get through a Geocities of Things (a term I stole from Andy Huntington) - a period of experimentation, exploration and much that is 'pointless'. Until we get to an internet of cheap nonsense we won't know what we really want.
2. We're not thinking enough about sound. If we want all these things to communicate with us, and we don't want to be staring at screens, and they're going to do more than flash a couple of lights, then we need to work with sound. Either 'sound effects' that mean something or devices that talk to us. Personally, I think it'll be the latter morphing into the former. And this is worth thinking about because it's already creeping up on us. Self-serve checkouts are talking at us, reversing trucks are beeping at us, trucks turning left are barking at us, incoherently - all with much less apparent thought and 'design' than we devote to screens.
So, now, my only contribution is - when I get a free evening and a spare neuron - to continue my dumb experiments with things that talk to us.
The parrot at the top was intended to show what a cheap talking thing might be like, but it's an illustration, not an experiment; it's not something you can live with. I wanted something that would hint at what it's like to actually live in a talking house. So, I took my very basic js skills and built a website that does things like this:
It does that at irregular intervals during the day, drawing on the People In Space API and sending text to Google Translate to read out the words. I'd point you at the site but I fall foul of some Access-Control-Allow-Origin problem that my coding powers are too weak to defeat.
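For the curious, the core of a thing like this is tiny. Here's a minimal sketch of the announcement logic - the endpoint and response shape assume the Open Notify "people in space" API, and the `speak` function uses the browser's Web Speech API as a stand-in for the Google Translate trick, so this is illustrative rather than the site's actual code:

```javascript
// Turn the API's response into a spoken sentence.
function buildAnnouncement(data) {
  const n = data.number;
  return n === 1
    ? "There is 1 person in space right now."
    : `There are ${n} people in space right now.`;
}

// Speak the text aloud if speech synthesis is available
// (it is in most browsers), otherwise just log it.
function speak(text) {
  if (typeof speechSynthesis !== "undefined") {
    speechSynthesis.speak(new SpeechSynthesisUtterance(text));
  } else {
    console.log(text); // fallback when running outside a browser
  }
}

// Fetch the current count and announce it. In the real thing
// this would be called on a timer, not just once.
function announcePeopleInSpace() {
  fetch("http://api.open-notify.org/astros.json")
    .then((res) => res.json())
    .then((data) => speak(buildAnnouncement(data)));
}
```

Note that fetching that endpoint from a page on another domain is exactly where the Access-Control-Allow-Origin problem bites: the browser blocks the response unless the API serves the right CORS header.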
It's running on an old laptop, sitting at the back of a shelf (one that also powers a dextr screen for more visible radiance). I wanted the absolute minimum of traditional tapping on a computer, so you can control the volume with the PowerMate, and I wanted the sound to come from 'the house' not 'the computer', so it's plugged into a speaker hanging from the ceiling.
The primary interface is the sound of the thing; most of the time the screen's gone to sleep and you don't see these words. But I'm showing you this video because, oddly, it seems the best way to get you to listen to stuff.
Every now and then it also does this:
And sometimes it does this (there's a Koubachi sensor in the rubber plant):
So, what's it like, living with it?
It's remarkably dull. In a good way. At the moment it's configured to deliver multiple announcements per hour but, most of the time, you don't notice them. They're in the background and you don't hear them (or you don't notice that you've heard them) until there's something interesting that snags your attention. Like when the number of people in space changed the other month.
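The 'irregular intervals' part can be as simple as a jittered timer. This is a guess at the mechanism, not the site's actual code, and the 10-30 minute range is my own assumption (the post above only says "multiple announcements per hour"):

```javascript
// Pick a random delay between two bounds, in milliseconds,
// so announcements never settle into a predictable rhythm.
function randomDelayMs(minMinutes, maxMinutes) {
  const minutes = minMinutes + Math.random() * (maxMinutes - minMinutes);
  return Math.round(minutes * 60 * 1000);
}

// Run the given announce function forever at irregular intervals:
// wait a random 10-30 minutes, announce, then queue the next one.
function scheduleNext(announce) {
  setTimeout(() => {
    announce();
    scheduleNext(announce);
  }, randomDelayMs(10, 30));
}
```

The point of the jitter, for a thing that lives in the background of a house, is exactly the dullness described above: a regular beep becomes a clock you resent, while an irregular voice stays ambient.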
I think you could pack it full of information; as soon as I can work out how to connect to other APIs I'll try to do that. You could make it into a continuous background burble, punctuated with news, like radio. You could put some music in there too.
And I think you could cope with more computer-generated text too. At the moment it's all very tightly written, by me, but I think you could stand to listen to more random text from the internet. It's perfectly possible to pull meaning out of ungrammatical nonsense, listening to World Cup commentary proves that - not being mean, just pointing out that natural human speech is far messier than most written things. The ear adapts.
I think, for instance, I could take much greater liberties with the language in Science Story Magic; you could cope with much more nonsense, more machine artefacts. Arguably, I think that's what you want. You don't want a fake person, you want a talking machine. There's also room for some interesting differentiation via the robot voice; I like, for instance, the personality Agogo have built with theirs.
Also, now Aaron's invented a Rothko-making robot, I want to work out what the sound equivalent of that is.