After the success of the Pyramind mixer event in San Francisco in October, the Manhattan Producers’ Alliance, which is based in New York City, decided that this might be a great thing to showcase over there. Since I was in town anyway for the Spectrum release events, and would have my BCMI equipment with me (I was due to perform the demonstration in the East Village at Spectrum over the weekend), it seemed like a great opportunity.
The format of this demo/presentation was always likely to be different from the Pyramind event, because we had arranged for it to take place at Dolby Labs in midtown Manhattan. Instead of a studio complex, this was a screening room, and as such was more geared towards a lecture-style event than an informal, hands-on walkthrough.
With this in mind, I decided to preface the demo with a broader discussion of ‘where are we going with this?’ It seemed appropriate to try to address the broader context of BCI/BCMI, especially in light of who the audience was. Since this was taking place as part of the Audio Engineering Society’s quarterly meeting, there were a lot of engineers, producers and musicians, and slightly fewer Bay Area tech-nerd types. I decided to alter my presentation accordingly, to discuss how brain computer interface technology might be applied in the future not just to games (which is, to pardon the pun, a no-brainer) but potentially to film and other linear media, as well as in a therapeutic context.
And so, after rambling on for some time about this, I handed over to my mad-scientist friend and colleague, Dr John Long, researcher at NYU and fanatical robot-builder. While he extolled the virtues of Brain Machine Interfaces and even delved into a little philosophical conjecture on the subject, I strapped myself into the Emotiv headset, ready to do some serious sound-shifting in this wonderful theater.
(Cut to post-performance) Although the demonstration went ‘well’, and certainly generated no end of questions from the audience (so much so that I think we kind of overstayed our welcome!) – there were a few very simple things that I feel I could have, and should have, done differently. Speaking to an audience member at length afterwards over cocktails confirmed this. “You should be the performer – the star,” he said, and he’s right. There I was, controlling this cool brain music system, and I was tucked away in the corner with no light on me, mumbling explanations of what I was doing that no one could hear.
Instead, I should have thrown off my ‘presenter’ jacket, marched up with confidence to the ‘sweet spot’ seat in the surround theater, and hooked myself up to a microphone. Sitting amongst the audience would have created a better connection, and the microphone would mean I could mumble instructions (too much projection or articulation would have made for noisy EEG signals) that could be heard by everyone.
I’m certainly planning on developing this two-part presentation/demo, and I think the ‘lecture’ part really got people thinking about the possibilities. That said, I fully recognize that the audience also needs to get turned on by the idea, and being a confident performer who knows how to put on a good show will likely get me 99% of the way there.