All posts by Julia Wong

Hello Nature

Hello Nature – a poetic exploration of what nature is communicating to us, in the form of an audio-visual installation.

By Joseph Mallonee and Julia Wong


 

Idea development

We started off being interested in how we could turn nature sounds (birds, crickets, trees…) into short and long dashes, so that we could begin to deconstruct nature’s language through Morse code. We were also curious about the fact that some people find it easier to compose a song after the lyrics are written, and others the other way around – we wanted to take these melodic nature sounds and see if we could create meaningful, poetic lyrics from them.

Final concept and the making of

We eventually came to the idea of creating an installation piece, in which we analysed the pitch and brightness of the nature sounds and created a 2D map using Max MSP. We used the map to assign each of the 26 letters of the alphabet to the sound, so that a specific range of brightness and pitch matches a specific letter. We then show those letters on screen (as flickering images) by connecting the audio analysis to Jitter. We used mathematical expressions and the scale object to create float outputs, which control which letter image Jitter displays. There are five tracks, each linked to its own video; every video reacts to its respective track, and all five play at the same time.
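The letter selection happens inside our Max patch, but the idea behind the 2D map can be sketched in a few lines of Python. The pitch and brightness ranges below, and the 13 × 2 grid that carves the map into 26 cells, are illustrative assumptions rather than the exact values in our patch.

# Sketch of the pitch/brightness-to-letter mapping (assumed ranges and grid size).
import string

PITCH_RANGE = (100.0, 2000.0)        # Hz, assumed pitch-analysis range
BRIGHTNESS_RANGE = (500.0, 8000.0)   # spectral centroid in Hz, assumed range
GRID = (13, 2)                       # 13 x 2 = 26 cells, one per letter

def normalize(value, low, high):
    # Clamp and rescale into 0..1, like Max's scale object.
    value = min(max(value, low), high)
    return (value - low) / (high - low)

def letter_for(pitch_hz, brightness_hz):
    # Quantize the 2D point (pitch, brightness) into one of 26 letters.
    x = int(normalize(pitch_hz, *PITCH_RANGE) * (GRID[0] - 1))
    y = int(normalize(brightness_hz, *BRIGHTNESS_RANGE) * (GRID[1] - 1))
    return string.ascii_uppercase[y * GRID[0] + x]

print(letter_for(440.0, 3000.0))   # e.g. a bird call around 440 Hz

In the actual patch, a float output like this drives which of the 26 letter images Jitter loads.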

Fabrication and Interactive elements

The whole installation includes a visor made out of many white-painted branches. The audience can see the video of letters through the hole in the middle of the visor. Apart from just watching the video, we want the audience to be able to play with the tracks and see the ‘lyrics’ from nature manifest on screen. So on the branch visor there are five little golden branch knobs, one for each track. When you turn a knob, we aim to have the volume of the respective track increase or decrease, and to have the letter video reacting to that track change in scale. Joe used Maxuino to send messages from the knobs to our Max patch, and we managed to change the volume, but we still have some trouble with the scaling of the video.
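Maxuino handles the Arduino-to-Max plumbing for us, so the remaining logic is just scaling each knob reading into a track volume and a video scale. A rough sketch of that scaling, assuming a standard 10-bit analog knob reading (0–1023) and made-up output ranges:

# Assumed: each knob arrives as a 0-1023 reading (10-bit analog input via Maxuino).
def knob_to_params(raw, min_scale=0.5, max_scale=2.0):
    # Map one knob reading to (track_volume, video_scale).
    t = max(0, min(raw, 1023)) / 1023.0                  # normalize to 0..1
    volume = t                                           # 0 = muted, 1 = full volume
    scale = min_scale + t * (max_scale - min_scale)      # letter video grows with volume
    return volume, scale

# Five knobs, one per track:
for track, raw in enumerate([0, 256, 512, 768, 1023]):
    vol, scale = knob_to_params(raw)
    print(f"track {track}: volume={vol:.2f}, video scale={scale:.2f}")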

Result

We still have some issues to resolve, but the resulting installation is very beautiful. There are times when the five videos show letters that spell a word. If we manage to randomize the positions of the letters, we will be able to see clearly what the images are, and the connection with the audio will be a lot stronger.

Video:

Max patch:

jitter video set up


audio analysis


 

final maxpatch

 


 

 

Alphabet characters created using objects collected from nature. Photos taken by Julia.

 

 


All hell breaks loose

Final Video: https://www.youtube.com/watch?v=4cSU0eZ_1-8

Performers

Chung Wan Choi: Composition, Frame Drum

Tyler Harper: Bassoon, Bassoon Reed

Julia Wong: Face

 

Technical

John Mars: Face Tracking App, Lighting

Steve Chab: Audio Engineer

 

Chung, Tyler, John, Steve and Julia were a group with a unique mix of skills and interests. Since most of us play an instrument in one way or another, our initial ideas started with the concept of juxtaposing and contrasting live acoustic performance with electronic sound effects. Originally we wanted to make a piece where contact mics on cardboard or random objects trigger MIDI instruments in Ableton Live. Then John came up with the idea of controlling Live effects with OSC data triggered by a facial recognition app. So in the end we decided to have Tyler on the bassoon, Chung improvising on the hand drum, and Julia responding to both performances and controlling the Live effects by making faces in front of the face-tracking app.

We decided to have Tyler play the solo without any effects the first time, with the lights only on his face. After the lights went off in the first round, Chung and Julia came in with their improvisations. Tyler switched to using his bassoon reed to create loud cacophonous sounds. This is when we all went wild and all hell broke loose.

Since Tyler loves octatonic scales on the contrabassoon, Chung composed a short solo for it. Here’s the score and the audio track she exported from Finale.

Score: Contra

Audio track exported from Finale:

 

Steve set up all the instruments and effects in Ableton Live, then used Livegrabber to retrieve OSC data from John’s facial recognition app.

Notes & Peaks: makes MIDI notes from audio peaks

Live plugins: vocoder, grain delay, reverb, limiters

Max for Live plugins: threshold, notes

Screenshots:


John was the programmer for our face-detection app. Facestrument is an app for OS X (heavily based on kylemcdonald/ofxFaceTracker and FaceOSC) that outputs OSC data according to facial gestures as seen by a webcam. The app tracks a single face with computer vision, analyzes it, applies a mesh, then sends out data for each feature on the face – for example the width of the mouth, or the amount of nostril flaring. We also mapped the data so that all the values lie between 0 and 1, which makes it easier to control the Live effects. The face is represented on screen as a caricature with a simple bi-colored, thick-lined outline.
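Facestrument itself is an openFrameworks/C++ app, but the normalize-and-send step it performs can be illustrated with a small Python sketch using python-osc. The OSC addresses, port, and raw feature ranges below are illustrative assumptions, not the app’s actual values.

# Illustrative only: scale face-feature readings to 0..1 and send them as OSC messages.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)   # assumed host/port that the receiver listens on

# Assumed raw ranges (in tracker units) for the features we care about.
FEATURE_RANGES = {
    "/face/mouth/width":  (10.0, 30.0),
    "/face/mouth/height": (0.0, 12.0),
    "/face/nostrils":     (4.0, 10.0),
}

def send_features(raw_values):
    # raw_values maps an OSC address to a raw tracker value; send each scaled to 0..1.
    for address, raw in raw_values.items():
        low, high = FEATURE_RANGES[address]
        norm = (min(max(raw, low), high) - low) / (high - low)
        client.send_message(address, norm)

send_features({"/face/mouth/width": 22.0, "/face/mouth/height": 3.0, "/face/nostrils": 7.5})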

This is what we see:


 

Here’s Steve’s test of turning audio peaks into MIDI notes. He hooked those notes up to a drum machine and played a beat on his forehead with guitar plugs.

Audio peaks/threshold to MIDI note test
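The actual peaks-to-MIDI conversion was done with the Max for Live threshold and notes devices, but the core idea – fire a note whenever the audio envelope crosses a threshold – fits in a short Python sketch. The threshold, note number, and the synthetic input signal here are assumptions for illustration.

# Sketch: turn amplitude peaks above a threshold into MIDI-style note events.
import numpy as np

SR = 44100            # sample rate (Hz)
THRESHOLD = 0.5       # assumed amplitude threshold
NOTE = 36             # assumed MIDI note (a kick drum on many drum machines)

# Fake input: one second of quiet noise with three loud taps, standing in for forehead hits.
signal = 0.05 * np.random.randn(SR)
for tap in (0.2, 0.5, 0.8):
    i = int(tap * SR)
    signal[i:i + 200] += 0.9

envelope = np.abs(signal)
above = envelope > THRESHOLD
# A note fires only on the rising edge of each threshold crossing.
onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1

for sample in onsets:
    velocity = int(min(envelope[sample], 1.0) * 127)
    print(f"note_on  note={NOTE} velocity={velocity} at t={sample / SR:.3f}s")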

Before we got the face-detection app to work, we tried the setup out with TouchOSC.

Bassoon controlled by TouchOSC on Steve’s phone


 

Arctic Aquatic

 

Diving deep down to the sea to see ratcheting skeleton fish, cold undersides of icebergs, and impenetrable darker and darker blue: submergence.

View our video at: https://vimeo.com/118933345

Our sound collage piece explored transforming the interior sounds of a library, a symbol of human knowledge and comfort, into sounds that conveyed an unknown, cold, natural environment. As we manipulated our recordings in Max MSP and Alexander refined the edits in FL Studio, we uncovered beautiful, haunting sounds that conveyed the experience of descending as a lone diver into the ice-covered waters of a frigid, dark ocean.

 


Download Max 7 Patch & Audio Files: https://www.dropbox.com/s/vitzuglb126l04l/Arctic%20Aquatic.zip?dl=0

 

When we were scouting for sounds on the first day of this project, we followed our ears rather than a concept. Searching organically enabled a level of play that planning from the beginning would have stifled, and it also got us acquainted with the personalities and tastes that made up our group. We first wandered into Hunt Library’s cafe, where we were lucky to catch the ringing of a running espresso machine, then turned the corner to record the sounds of a printer. We then went up to the fourth floor and experimented with running bookends against the built-in slot shelving, giving us strong scraping and clicking sounds. Afterwards we mashed keyboards as a group.

As we started reviewing the files and editing them, both individually and as a group, somber and cold sounds began to emerge. We also came to realize we could expand the interest of our piece by adding some sounds to the mix, like Tyler playing his bassoon and producing multiphonic sounds with the bassoon reed. Alexander composed a short piano piece that could add a haunting but beautiful tone to the otherwise dark and raw sounds; but because we had to derive our sounds from field recordings, he instead manipulated the sound of a pair of scissors struck against a table leg, creating a beautiful chime. All of the members of our group took turns editing files in Max MSP, and the final tweaks were completed by Alexander in FL Studio.

Once we had all of our sounds together, Tyler brought them into Max MSP and acted as the operator, directing the sounds as well as building the real functionality of the patch. Every group member contributed to the composition as we built the piece organically, but Tyler and Alexander’s ears for sound were tuned to the details and made it sonically spectacular. We all took part in arranging the direction and movement of the sounds, and Alexander continued editing and cleaning up sounds as we worked and discovered new possibilities.

 

“Affected Coffee” – Hunt Library Cafe Espresso Machine

Created by Alexander Panos

 

“Ambient Typing Pad” – Keyboard Typing

Created by Alexander Panos

 

“Processed Door Ambience” – Hunt Library front doors and typing

Created by Julia Wong

 

“Reversed Bassoon” – Multiphonic sounds on bassoon, reversed 

Created by Tyler Harper

 

“Metal Melody” – Scissors against table-leg 

Composed by Alexander Panos

 

“Reversed Metal” – Scissors against table-leg, reversed for conclusion.

Created by Alexander Panos

 


While Tyler finished the final patch arrangements in Max MSP, Julia and Joe started to brainstorm concepts for how to share the themes of the piece in a presentation. Joe suggested hanging simple black-and-white prints on the speaker cords and playing with the physicality of the space. Eventually we decided on directly incorporating snow and ice. Originally we planned to have the audience simply view melting ice, but we quickly moved towards more individual, tactile experiences for the audience. Julia and Joe grabbed some snow from outside and tested how long it would take to melt in one’s hands relative to the length of the piece. We felt this complemented the collage in an almost uncanny way: the physical sensation matched the feeling of the sound directly. We planned to incorporate a petal into the middle of the snowballs; when we moved to ice, we thought about adding mints to the centre.

Unfortunately there were some issues with holding the ice. It caused the floor to get wet and had unpredictable melting times, and we needed more control over when it would melt. After discussing alternatives, the conversation led to Joe’s suggestion of placing ice cubes on the tongue. This kept both the physical and sonic sensations in the head, and we agreed as a group that it was a good route to go. We tested it a couple of times and found ice cubes of the perfect size, which melted at the end of the first part of the piece. The sensation stayed with the listener and set the tone for the rest of the sound collage.


Figuring out the perfect set-up for an optimal experience


 

Tyler as our Max operator

 


 

It was really late at night and we were having fun

After our performance of the piece on Monday, this is some of the feedback we got:
“the ice was a brilliant idea, and helps color the whole experience even after the ice has melted”
“allowed for more imagination”
“kept the feelings in the head”
The audience felt that the sound “was cinematic,” and they wondered how we had created the ambience of the piece.

Alexander Panos and Tyler Harper: Sound editor and sound designer
Joe Mallonee and Julia Wong: Director and Scribe

We all worked together and shared ideas, but our respective skills came in handy in our different roles.