Final Video: https://www.youtube.com/watch?v=4cSU0eZ_1-8
Chung Wan Choi: Composition, Frame Drum
Tyler Harper: Bassoon, Bassoon Reed
Julia Wong: Face
John Mars: Face Tracking App, Lighting
Steve Chab: Audio Engineer
Chung, Tyler, John, Steve, and Julia were a group with a unique mix of skills and interests. Since most of us play an instrument in one way or another, our initial ideas started with the concept of juxtaposing live acoustic performance with electronic sound effects. Originally we wanted to make a piece in which contact mics on cardboard or random objects would trigger MIDI instruments in Ableton Live. Then John came up with the idea of controlling Live effects with OSC data sent from a facial-recognition app. So in the end we decided to have Tyler on the bassoon, Chung improvising on the hand drum, and Julia responding to both performances and controlling the Live effects by making faces at the face-tracking app.
We decided to have Tyler play the solo without any effects the first time through, with the lights on his face only. After the lights went off, Chung and Julia came in with their improvisations, and Tyler switched to playing just his bassoon reed to create loud, cacophonous sounds. That's when we all went wild and all hell broke loose.
Since Tyler loves octatonic scales on the contrabassoon, Chung composed a short solo for it. Here’s the score and the audio track she exported from Finale.
Audio track exported from Finale:
Steve set up all the instruments and effects in Ableton Live, then used Livegrabber to receive the OSC data coming from John's facial-recognition app.
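Livegrabber listens for incoming OSC over UDP and routes each message to whatever Live parameter it has been mapped to. As a rough illustration (this is not code from our project), the sketch below fakes a couple of face-gesture values with python-osc and streams them to Live, which is handy for testing the mapping without a face in front of the camera. The address patterns, port number, and update rate are all assumptions; use whatever port the Livegrabber receiver device in your set is actually listening on.

```python
# Minimal sketch, not our actual app: simulate normalized (0..1) face-gesture
# values and send them as OSC so the Livegrabber mapping can be tested.
# Assumptions: localhost, port 9000, and FaceOSC-style address patterns.
import math
import time

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # assumed Livegrabber listen port

t = 0.0
while t < 10.0:
    mouth = 0.5 + 0.5 * math.sin(t * 2.0)   # fake "mouth width"
    brow = 0.5 + 0.5 * math.sin(t * 0.7)    # fake "eyebrow raise"
    client.send_message("/gesture/mouth/width", mouth)
    client.send_message("/gesture/eyebrows", brow)
    time.sleep(0.02)                         # roughly 50 updates per second
    t += 0.02
```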
Notes & Peaks: makes MIDI notes from audio peaks
Live plugins: vocoder, grain delay, reverb, limiters
Max for Live plugins: threshold, notes
John was the programmer for our face-detection app. Facestrument is an app for OS X (heavily based on kylemcdonald/ofxFaceTracker and FaceOSC) that outputs OSC data according to facial gestures seen by a webcam. The app tracks a single face with computer vision, analyzes it, fits a mesh to it, then sends out data for each facial feature, for example the width of the mouth or the amount of nostril flaring. We also mapped the data so that all the values fall between 0 and 1, which makes it easier to control the Live effects. The face is represented on screen as a caricature with a simple bi-colored, thick-lined outline.
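The 0-to-1 mapping is just a linear rescale of each raw measurement (say, mouth width in pixels) against a calibrated minimum and maximum, clamped at the edges. The snippet below is only an illustration of that idea, not Facestrument source; the function name and the calibration range are made up.

```python
# Illustrative normalization, not Facestrument source: rescale a raw feature
# measurement into 0..1 so every facial gesture maps cleanly onto Live effects.
def normalize(value, in_min, in_max):
    """Linearly map value from [in_min, in_max] into [0, 1], clamped."""
    if in_max == in_min:
        return 0.0
    t = (value - in_min) / (in_max - in_min)
    return max(0.0, min(1.0, t))

# Example: mouth width measured in pixels off the face mesh (made-up range)
mouth_norm = normalize(42.0, in_min=10.0, in_max=60.0)
print(mouth_norm)  # 0.64
```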
This is what we see:
(gif – click on it)
Here's Steve's test of turning audio peaks into MIDI notes. He hooked those notes up to a drum machine and played a beat on his forehead with guitar plugs.
Audio peaks/threshold to MIDI note test
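For the curious, the idea behind the threshold/notes devices can be sketched in a few lines: watch the input level, fire a MIDI note when it jumps past a threshold, and re-arm once the level falls back down. This is only an approximation of what the Max for Live devices do, not our patch; the sounddevice and mido libraries, default port, threshold, and note number are all assumptions.

```python
# Rough sketch of the peaks-to-notes idea, not the Max for Live device we used.
# Assumptions: default audio input, default MIDI output, made-up threshold.
import numpy as np
import sounddevice as sd
import mido

THRESHOLD = 0.2               # normalized peak level that counts as a "hit"
NOTE = 36                     # kick drum in General MIDI drum maps
outport = mido.open_output()  # default MIDI output port

armed = True  # only retrigger after the level falls back below the threshold

def callback(indata, frames, time_info, status):
    global armed
    peak = float(np.max(np.abs(indata)))
    if armed and peak > THRESHOLD:
        outport.send(mido.Message('note_on', note=NOTE, velocity=100))
        outport.send(mido.Message('note_off', note=NOTE, velocity=0))
        armed = False
    elif peak < THRESHOLD * 0.5:
        armed = True

with sd.InputStream(channels=1, callback=callback):
    sd.sleep(10_000)  # listen for ten seconds
```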
Before we got the face-detection app working, we tested the setup with TouchOSC.
Bassoon controlled by TouchOSC on Steve’s phone