For the past year and a half or so, I have been quietly working with a few outstanding folks to create some mobile apps that aim to personalize the music experience in a way that hasn't been done before. We created some prototypes, tested them with the public, and learned a good deal about what works and what doesn't.
Our first mobile app, Frequencer, is a form of musical guided meditation that uses NeuroSky's brain-computer interface technology as its core sensing mechanism. The app detects a user's state of calm or focused attention and selects music tracks to play accordingly. Although very much a proof of concept at this stage, the app works, and it is actually pretty satisfying. We recently added accelerometer data as an input, so if you don't have a brain-computer interface handy, not to worry - fun can still be had...
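The selection logic can be sketched in a few lines of Python. To be clear, the thresholds, track names, and the exact accelerometer fallback below are my own illustrative assumptions, not the app's actual code:

```python
import math

# Hypothetical track buckets -- the app's real catalogue and thresholds aren't public
TRACKS = {"calm": "drifting", "neutral": "steady", "focused": "laser"}

def pick_track(attention=None, accel=(0.0, 0.0, 9.8)):
    """Choose a track from a 0-1 attention score, with an accelerometer fallback.

    With no headset attached (attention is None), the deviation of the
    accelerometer magnitude from gravity stands in for activity level.
    """
    if attention is None:
        magnitude = math.sqrt(sum(a * a for a in accel))
        attention = min(abs(magnitude - 9.8) / 9.8, 1.0)  # 0.0 when at rest
    if attention < 0.33:
        return TRACKS["calm"]
    if attention < 0.66:
        return TRACKS["neutral"]
    return TRACKS["focused"]
```

The point of the design is graceful degradation: the same 0-1 "state" axis drives the choice whether it comes from the headset or from motion.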
From these inauspicious beginnings we have developed the concept significantly, and have created a few desktop prototypes that are really quite mind-blowing (pun intended). Of particular note is a version of the app that allows the user to jam along, in key and in time, with their playlist just by thinking about it. More to come on that as we progress...
I have finally taken some time to put a reel together of the videogame and other music I have created over the past few years. There is a smattering of sound design in there as well, but mostly music. Much of this content is music that I wrote while at Leapfrog, but there are a few other snippets from other projects as well:
I don't write much of this kind of music any more, which I have mixed feelings about. It was always an intellectually stimulating challenge to create the right sound to fit the product, and I certainly miss this aspect. On the other hand I am now focusing on some broader goals that speak to the cognitive underpinnings of musical experience, so hopefully my work as a creator of fitting emotional correlates will help in this regard. More to come on that side of things in a future post...
Over the past few months, neuroscientist, math-art-rocker and all-round genial dude Robert Gibboni and I have been working on an installation for the latest incarnation of Mozart and the Mind in San Diego. I have found myself in an odd position whilst preparing for this: since the entire installation is coded in Python, and I am not (yet) wise in such ways, I have been assuming a different role in the creation/curation process - that of artistic director.
Simply put, 'Cocktail Part A' is an interactive installation that translates speech signals into music. The algorithm strips the semantic content out of the voice pattern, selecting samples based on harmonic or formant content and applying a range of filters and other DSP, such that the original linguistic material is discarded and replaced with pitch and rhythm correlates.
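To give a flavor of the pitch side of that pipeline, here is a minimal Python sketch: estimate a speech frame's fundamental frequency, then snap it to a target key. The sample rate, the choice of C major, and the autocorrelation method are my assumptions for illustration, not the installation's actual algorithm:

```python
import numpy as np

SR = 16000                         # assumed sample rate (Hz)
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]   # assumed target key, as pitch classes

def estimate_pitch(frame, sr=SR):
    """Rough fundamental-frequency estimate via the autocorrelation peak."""
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = sr // 500, sr // 80   # search a plausible speech range, 80-500 Hz
    lag = lo + int(np.argmax(corr[lo:hi]))
    return sr / lag

def quantize_to_key(freq):
    """Snap a frequency to the nearest MIDI note whose pitch class is in key."""
    midi = int(round(69 + 12 * np.log2(freq / 440.0)))
    pc = midi % 12
    nearest = min(C_MAJOR, key=lambda d: min((pc - d) % 12, (d - pc) % 12))
    return midi - pc + nearest

# Stand-in for one voiced speech frame: a 200 Hz tone
t = np.arange(int(0.05 * SR)) / SR
frame = np.sin(2 * np.pi * 200.0 * t)
f0 = estimate_pitch(frame)         # ~200.0 Hz
note = quantize_to_key(f0)         # MIDI 55 (G3), which is in C major
```

Once the voice is reduced to in-key pitch material like this, the original words are gone - only their melodic contour survives.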
Four participants engage in conversation, which is sampled using Android phones and streamed to the processing software. At the same time, a 'host' wearing a Muse EEG headset controls the mix of the voices and certain effects according to the levels of attention and meditation detected, which are processed in the cloud using Syntrogi's NeuroScale platform.
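The state-to-mix mapping might look something like the sketch below. I am assuming the cloud processing returns normalized attention and meditation scores in [0, 1]; the gain scheme and reverb send are my own illustration, not the installation's actual mixing logic:

```python
def mix_levels(attention, meditation, n_voices=4):
    """Map the host's mental state to per-voice gains and a reverb send.

    Attention pulls the mix toward a single foreground voice;
    meditation evens the voices out and opens up the reverb.
    Both inputs are assumed to be normalized scores in [0, 1].
    """
    a = min(max(attention, 0.0), 1.0)
    m = min(max(meditation, 0.0), 1.0)
    spotlight = [1.0] + [0.0] * (n_voices - 1)   # one voice foregrounded
    even = [1.0 / n_voices] * n_voices           # all voices equal
    gains = [a * s + (1.0 - a) * e for s, e in zip(spotlight, even)]
    return gains, m                              # (per-voice gains, reverb send)
```

A focused host thus spotlights one speaker; a meditative host dissolves everyone into an even, reverberant wash.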
It remains to be seen how 'musical' the output of this installation will be, but as a human social experiment it is bound to be fascinating.
Ms Cuckson is an astonishing violinist. Of this there is no doubt. Fearless and visceral, her music cuts through the air like a knife from whose wound you wish to slowly expire. I had the privilege of working on a second album with her, recently released by Urlicht Audiovisual. It has been very well received so far, with some really quite wonderful reviews. We should expect to hear more great things from Miranda as her career continues to blossom.
Recently I was invited by my good friend and colleague Tim Mullen, CEO of Syntrogi Labs in San Diego, to contribute to a chapter for a new book to be published by Springer next month, entitled "More Playful User Interfaces". This promises to be a deeply engaging rundown of the confluence of humans and technology, and the myriad forms that this interaction takes. For our part, we discuss brain-computer music interfaces, as presented at the esteemed Dr. Mullen's "Mozart and the Mind" events at the Scripps Institute at the University of California, San Diego.
This was my first book contribution, and it was a thrill to write about my experience blending traditional compositional forms with the emerging field of biomusic. My goal with these systems is very much 'musification' - going beyond the mere sonification of biological data - and, judging from my co-authors' contributions, I am clearly in good company.
More updates as the release goes to press in a few weeks...
Bringing together a surround ethos from the worlds of games, film and contemporary classical music, I was honored to be included in this masterclass roundup of sound techniques in Electronic Musician magazine last month. Interesting reading, to be sure. The upcoming June release from Urlicht of a new CD by Miranda Cuckson will be in dual format - both stereo and Blu-ray 5.1 - and is an astonishing work by this amazingly accomplished musician.
At the end of last year, KQED San Francisco ran a local story featuring our NeuroDisco project - it was a thoughtful, timely piece that starts to poke at the question of how we in the 21st century are increasingly applying technology to our innermost thoughts and emotions, as well as using it to fly drones with our minds. Heady stuff, to be sure - and a few weeks ago NPR picked up the story, which you can listen to here.
NeuroDisco has had a good run - we have presented at a bunch of conferences, exhibitions, evening events and even holiday parties. But now it is time to move on to new and decidedly more complicated things. Even so, I am glad to see this relatively recently minted technology making its way into the mainstream. My current headset manufacturer of choice, Emotiv Systems, is in the final stages of production for its second-generation EEG device, the Insight. At a recent meeting with them in San Francisco, CEO Tan Le informed me that their Kickstarter campaign for the new product had not only smashed all fundraising expectations (1643% of target), but that the Insight occupies the #2 position in the Product Design category. That is to say, the second-best design of ALL the designs of anything anywhere. That's pretty impressive, and speaks to the potential of this technology to establish a significant installed base.
Neuroscientists will tell you that these headsets are basically toys that you can have some fun with, and they are right. I for one am making no grand claims to scientific progress. What I can say, though, is that - blessed or cursed as I am with a lack of scientific rigor - my creative vision for how this technology can be employed in a way that has tangible meaning and resonance for those who experience it knows no bounds.
And so on to the next phase - I believe in the next few months I am going to understand more about coding, modeling and synthesizing than I ever wanted to, but I will be the better, and possibly calmer, for it.
This is the latest on my duet for Disklavier and Brain - tentatively titled 'Spukhafte Fernwirkung'. I am using the Emotiv EPOC 14-channel EEG headset and running the brain signal via my interface in Max/MSP to the Disklavier. Part of the piece is composed, and the brain signal kicks in at 2 minutes, from which point I am improvising with the material.
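The piece itself runs in Max/MSP, but the core idea of the brain-to-piano mapping can be sketched in a few lines of Python. The note pool, ranges, and velocity curve below are illustrative assumptions of mine, not the piece's actual material:

```python
# Assumed note pool: A-minor pentatonic over about two octaves (MIDI numbers)
PENTATONIC = [57, 60, 62, 64, 67, 69, 72, 74]

def band_to_note(band_power):
    """Map a normalized EEG band-power value in [0, 1] to (MIDI note, velocity).

    Rising activity in the chosen frequency band walks up the note pool
    and plays louder; the Disklavier receives the result as ordinary MIDI.
    """
    x = min(max(band_power, 0.0), 1.0)
    idx = min(int(x * len(PENTATONIC)), len(PENTATONIC) - 1)
    velocity = 40 + int(x * 80)   # more band activity -> louder notes
    return PENTATONIC[idx], velocity
```

Constraining the output to a fixed scale is what keeps an inherently noisy signal sounding intentional rather than random.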
Ok. Let me explain. It is certainly not out of a particular love for the German language that I chose this appellation. It is more the idea of 'spooky action at a distance', a term once used by Einstein to describe quantum entanglement (albeit somewhat disparagingly in that context). My firing neurons are contributing directly to the musical shape of this piece, still very much in development but coming along nicely. The action is certainly spooky (and yes, the mechanical pun as related to the piano keyboard is intentional - I blame my punning wisdom on my father), not least because it seems to be following, mimicking or otherwise commenting on my internal thought process, and in a very visceral and intuitive way.
I gained two takeaways from this mini-performance/demo - one was a pounding headache (interestingly, somewhere around my temporal lobe) and the other was a profound sense of the physicality of sound. While building my interface for the brain signal to be turned into music, I was using my electronic keyboard. In that scenario, everything about the interaction between musical instrument and brain was calm, serene, almost scientific. But when faced with the beast that is the Yamaha Disklavier, with all its associated resonances and physical sound projection, not to mention the tactile phenomenon of the keys depressing themselves at my cognitive whim, I became rather discombobulated. Something about the physical feedback experience, coupled with the extra dimension of biofeedback, gives rise to an intimate, singular sense of time and space: an awareness of the push and pull of the visceral, and a breaking down of the boundaries of that mind/body duality we have become so accustomed to.
Last Monday, my wife and I went to the offices of KQED for an interview and demo with Amy Standen, their news reporter for the Science Mondays segment. We had planned to have Amy wear the Emotiv and play around with the music application we had developed called NeuroDisco. Alas, too much hair prevented Amy from rocking out. So it was down to me. There is nothing quite like being in a radio station recording studio and being asked to be calm and meditate in order for the reporter to discern the change in musical output. The meditation detection is pretty hard to achieve, but I just about managed it for long enough for it to be statistically significant.