
Hello Nature

Hello Nature – a poetic exploration of what nature is communicating to us, in the form of an audio-visual installation.

By Joseph Mallonee and Julia Wong


 

Idea development

We started off interested in how we could turn nature sounds (birds, crickets, trees…) into short and long dashes, so that we could begin to deconstruct nature’s language through Morse code. We were also curious about the fact that some people find it easier to compose songs after the lyrics are written, and others vice versa – we wanted to take the melodic nature sounds and see whether we could create any meaningful, poetic lyrics from them.

Final concept and the making of

We eventually arrived at the idea of creating an installation piece in which we analysed the pitch and brightness of the nature sounds and created a 2D map in Max MSP. We used this map to assign each of the 26 letters of the alphabet to the sound, so that a specific range of brightness and pitch would match a specific letter. We then show those letters on screen (as flickering images) by connecting the sound analysis to Jitter. We used mathematical expressions and the scale object to produce float outputs, which control which letter image Jitter displays. There are five tracks, linked to five different videos that react to their respective track and play at the same time.
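Stripped of the Max objects, the mapping itself is simple: divide the pitch range and the brightness range into bins and treat the pair of bin indices as a lookup into a 26-cell grid of letters. Here is a rough sketch of that logic in C++ (the ranges and the 13 x 2 grid layout are placeholders for illustration, not the exact values from our patch):

```cpp
#include <algorithm>
#include <iostream>

// Placeholder analysis ranges -- the real patch derives these from the
// audio analysis in Max, not from fixed constants.
const float PITCH_MIN = 80.0f,  PITCH_MAX = 2000.0f;   // Hz
const float BRIGHT_MIN = 0.0f,  BRIGHT_MAX = 1.0f;     // normalized brightness

// Map a (pitch, brightness) pair onto a 26-cell grid and return a letter.
// A 13 x 2 grid is used purely for illustration: 13 pitch bins times
// 2 brightness bins = 26 cells, one per letter.
char letterFor(float pitchHz, float brightness) {
    auto norm = [](float v, float lo, float hi) {
        float t = (v - lo) / (hi - lo);
        return std::min(std::max(t, 0.0f), 1.0f);   // clamp to [0, 1]
    };
    int pitchBin  = std::min(12, static_cast<int>(norm(pitchHz, PITCH_MIN, PITCH_MAX) * 13.0f));
    int brightBin = std::min(1,  static_cast<int>(norm(brightness, BRIGHT_MIN, BRIGHT_MAX) * 2.0f));
    int index = brightBin * 13 + pitchBin;          // 0..25
    return static_cast<char>('A' + index);
}

int main() {
    // A bird chirp: high pitch, bright -> lands near the end of the grid.
    std::cout << letterFor(1800.0f, 0.9f) << "\n";
    // A low creaking tree: low pitch, dull -> lands near the start.
    std::cout << letterFor(120.0f, 0.1f) << "\n";
}
```

In the patch itself, the same clamping and rescaling is done with scale objects and mathematical expressions, and the resulting index selects which letter image Jitter shows.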

Fabrication and Interactive elements

The installation includes a visor made out of many white-painted branches. The audience can see the video of letters through the hole in the middle of the visor. Beyond just watching the video, we want the audience to be able to play with the tracks and see the ‘lyrics’ from nature manifest on screen. So on the branch visor there are five little golden branch knobs, one for each track. When you turn one, the volume of the respective track increases or decreases, and the video of letters reacting to that track changes in scale. Joe used Maxuino to send messages from the knobs to our Max patch; we managed to change the volume, but still have some trouble with the scaling of the video.
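Maxuino talks to a board running the standard Firmata firmware, so no custom Arduino code was needed on our side. For anyone who would rather skip Firmata, the same knob-reading step can be done with a plain sketch that streams the five knob positions over serial for Max to parse (the pin assignments here are placeholders):

```cpp
// Plain-Arduino alternative to Firmata/Maxuino: read five potentiometer
// "branch knobs" and stream their positions over serial, one line per frame.
// A [serial] object in Max can then parse these values to set track volume
// and video scale. Pin assignments here are placeholders.

const int KNOB_PINS[5] = {A0, A1, A2, A3, A4};

void setup() {
  Serial.begin(115200);
}

void loop() {
  for (int i = 0; i < 5; i++) {
    int raw = analogRead(KNOB_PINS[i]);          // 0..1023
    float level = raw / 1023.0;                  // 0.0..1.0, handy for volume
    Serial.print(level, 3);
    Serial.print(i < 4 ? ' ' : '\n');            // space-separated, newline-terminated
  }
  delay(20);                                     // ~50 updates per second
}
```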

Result

We still have some issues to resolve, but the resulting installation is very beautiful. There are times when the five videos show letters that spell a word. If we manage to randomize the positions of the letters, we will be able to see clearly what the images are, and the connection with the audio will be a lot stronger.

Video:

Max patch:

jitter video set up


audio analysis


 

final maxpatch

 


Alphabet characters created using objects collected from nature. Photos taken by Julia.

 

 


A Singularity

For my final project, I wanted to find a way to combine many of the field recordings I had made, along with other, much longer sound recordings and experiments from over the semester, into one final piece. These recordings consisted of sounds in my studio (an hour-long recording of the radiator banging), moments from A Brief History of Time (mostly recordings of Stephen Hawking), recordings from the Bayernhof Museum, and live and altered recordings from the robot project I was a part of (recorded afterwards and independent of the scope of the initial project). I had been playing with these sounds for several weeks in Ableton Live and had become interested in the mood and message created when I combined them.

While I had initially planned to build another robot to play along with the recordings (triggered by motion or movement) and housed within several sculptural containers, once I started adding the recordings into Ableton Live and assigning them to the Alias 8 controller, I realized I wanted to have more control over the sounds. After going over some of the past posts Jesse made related to the Kinect, I recalled that when I first came here, one of the initial projects I had wanted to create was a body-controlled sound piece. I had done some explorations with the Kinect before, but hadn’t used it with Max, and I was excited about the possibilities with Ableton Live through Synapse. Unfortunately, Synapse can only be run with the first-generation Kinect (which I thought I had), so the computer in the sound lab was showing missing areas in Synapse when I tried to run it. The program that offered another way to use Synapse was this one, and it seemed fairly easy to get up and running. Unfortunately, I had issues with that one as well.

Shifting gears, I instead used the dp.kinect external that I had already installed and reworked the patch. I then added some of the new Vizzie effects in Max 7, along with a subpatch that allows changes in amplitude to alter both the delay and the zoom of the visuals. After much trial and error, I set the parameters of the Kinect so that the delay allowed for a jump in time and place of the captured body, corresponding with the mood and message of the samples and effects I had created in Ableton Live.
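The amplitude-to-parameter part of that subpatch is essentially an envelope follower feeding two scaled outputs. Sketched outside Max in C++, with a made-up smoothing factor and output ranges, the idea looks like this:

```cpp
#include <cmath>
#include <cstdio>

// Rough stand-in for the Max subpatch: follow the amplitude of the incoming
// audio and map the smoothed level onto a video delay (in frames) and a zoom
// factor. The smoothing coefficient and output ranges are illustrative only.
struct EnvelopeMapper {
    float env = 0.0f;                 // smoothed amplitude, 0..1
    float smoothing = 0.95f;          // closer to 1 = slower response

    void feed(float sample) {
        float level = std::fabs(sample);
        env = smoothing * env + (1.0f - smoothing) * level;
    }
    float delayFrames() const { return env * 30.0f; }        // 0..30 frame delay
    float zoom()        const { return 1.0f + env * 2.0f; }  // 1x..3x zoom
};

int main() {
    EnvelopeMapper m;
    // Feed a burst of loud samples followed by silence and watch the
    // parameters rise and fall.
    for (int i = 0; i < 200; i++) {
        m.feed(i < 100 ? 0.8f : 0.0f);
        if (i % 50 == 49)
            std::printf("delay=%.1f frames  zoom=%.2fx\n", m.delayFrames(), m.zoom());
    }
}
```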

singularity ableton live patch

singularity max7 patch screenshot

Version of the final performance/installation:

Hopefully I can replace that with a better version tomorrow.

time and space and a test-site

a collaboration between three fine art MFA students, a violinist, a percussionist, two composers, and a drama student.

two new film scores were composed and performed live amidst an enclosing projection of broken and repeated narrative.

the 45-minute concert began with a slow, uncertain, humming duet of Also sprach Zarathustra by Richard Strauss (used as the theme of 2001: A Space Odyssey), intercut with the reading of a letter written in case the Apollo astronauts were forever lost in space.

 

Journey to the moon was re-enacted through a cut-up quilt of projection while a new film score was performed live on violin.

 

Marimba accompanied a performed text piece, where I projected footage from the moon landings alongside a projection of my live transcription of the astronauts’ dialogue.

 

 

deep listening inside of a Richard Serra sculpture. It poured that day.

a human-made bell, we sustained this for 7 minutes.


serious moments of harmony/discord/pain/tiredness/resonance

Sound in Motion


Maya Kaisth and Clair Chin

PROJECT DESCRIPTION: This project comprises four interactive components: a tree-supported rope swing, a sound recorder connected to headphones, an LED light strip, and infrared proximity sensors. We set up the proximity sensors to output readings to an Arduino, which then controlled when the LED strip would light. Once a user of our installation was close enough to the sensors, the lights turned on. Users of the installation began to swing while listening to the sound of wind rushing through the headphones connected to the swing.
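The Arduino logic is short: poll the proximity sensor and switch the strip whenever someone is close enough. Below is a minimal sketch of that loop, assuming an analog IR distance sensor and a single-color strip driven through a transistor on a PWM pin; the pins and threshold are placeholders, not our exact wiring:

```cpp
// Minimal sketch of the sensing/lighting logic: read an analog IR proximity
// sensor and turn the LED strip on when a visitor is close enough.
// Pin numbers and the threshold below are placeholders.

const int SENSOR_PIN = A0;      // analog IR proximity sensor
const int STRIP_PIN  = 9;       // PWM pin driving the strip via a transistor
const int NEAR_THRESHOLD = 400; // higher analog reading = closer (tune by hand)

void setup() {
  pinMode(STRIP_PIN, OUTPUT);
  Serial.begin(9600);           // handy for finding a good threshold
}

void loop() {
  int reading = analogRead(SENSOR_PIN);
  Serial.println(reading);

  if (reading > NEAR_THRESHOLD) {
    analogWrite(STRIP_PIN, 255);   // someone is near: full brightness
  } else {
    analogWrite(STRIP_PIN, 0);     // nobody close: lights off
  }
  delay(50);
}
```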

PROJECT OBJECTIVES: Our hope in making this installation was to create a new experience out of a familiar action. The installation combines auditory and visual experience with movement made by the user. By amplifying sound, we were able to heighten the user’s experience of swinging. The interactive lighting included in our installation was meant to visually attract and appeal to curious users approaching the piece. We were pleased that we were able to integrate technology into this commonplace and nostalgic action in a way that is meaningful to it.


From the Middle Outwards

From the Middle Outwards is a live performance based on the theme of discovery by Amber Jones, Joe Mallonee, Jack Taylor, and Alexander Panos.

Amber Jones performed on violin and also did sound editing. In addition to sound editing, Amber worked with Jack to compose a melody to play on top of Alexander’s backing track.

Joe Mallonee created and performed the live visuals, wrote the lyrics, and gave feedback.

Jack Taylor played guitar, performed the lyrics, and created the main motif that was played throughout the performance.

Alexander Panos created the backing track, edited sounds, and mapped out the overall flow of the performance.

The piece was composed in seven parts, with layers gradually added. Breakdown sections were scattered throughout until the seventh section, where the totality of the sound is distorted, builds in intensity, and ends.


Amber hooked up contact microphones to the bridge of her violin and to the sound post of Jack’s guitar. These inputs were run into Ableton Live where Amber placed distortion effects on Jack’s guitar and an EQ, reverb, and a grain delay filter on her violin.


Alexander Panos created and edited recordings along with presets in FL Studio in order to achieve the backing track and connect it with the theme.

Nature Sounds Interacted

Team Members: Yury, Zhiwan, Brittany, Gwen

Our performance was based on field recordings of nature (animals, trees, etc.) combined with interactive visuals and a live performance of poetry. The goal was to create a free-flowing, avant-garde performance that would never be the same twice.

Roles:

Yury Merman: I found a myriad of high-quality nature sounds online (mainly on SoundCloud) and chopped up the samples. I also processed many sounds, either with electronic/synthetic-sounding effects or with mixing effects such as EQ and filtering.

I used Logic Pro for the sound design and editing, and then used Ableton to perform and trigger the sounds.

I had various nature sounds on differing tracks, and they would play at random on their own. I could also control the triggering of the sounds, switching up parts much as a DJ/producer would with a standard electronic track.

I had an outline of the audio structure where it would start slow, build up, have many differing sounds, and then vary throughout until the end of the performance, during which the sounds became more minimal.

Ableton was also connected to Max for Live, which allowed the audio to trigger effects on the visuals we displayed during the performance. Based on things such as frequency and intensity, the visuals would change in ways such as frame rate and color/filters.
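The analysis behind that mapping can be sketched as: split the signal into rough low and high energy, then scale those energies onto visual parameters. The toy C++ version below shows the idea; the one-pole filter and the output ranges are illustrative stand-ins, not what the actual Max for Live patch does:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Toy version of the audio-to-visuals mapping: split the signal into rough
// low and high bands with a one-pole low-pass, measure the energy of each,
// and scale those energies onto a frame rate and a hue shift.
struct BandMapper {
    float lp = 0.0f;                       // one-pole low-pass state
    float alpha = 0.05f;                   // small alpha ~= low cutoff

    void analyze(const std::vector<float>& block, float& lowEnergy, float& highEnergy) {
        float lowSum = 0.0f, highSum = 0.0f;
        for (float s : block) {
            lp += alpha * (s - lp);        // low band
            float hp = s - lp;             // everything else
            lowSum  += lp * lp;
            highSum += hp * hp;
        }
        lowEnergy  = std::sqrt(lowSum  / block.size());
        highEnergy = std::sqrt(highSum / block.size());
    }
};

int main() {
    // Fake audio block: a slow wobble plus some faster jitter.
    std::vector<float> block(512);
    for (int i = 0; i < 512; i++)
        block[i] = 0.5f * std::sin(i * 0.01f) + 0.2f * std::sin(i * 0.9f);

    BandMapper m;
    float low, high;
    m.analyze(block, low, high);

    float frameRate = 5.0f + low * 50.0f;    // more bass -> faster playback
    float hueShift  = high * 360.0f;         // more highs -> bigger color shift
    std::printf("low=%.3f high=%.3f -> frameRate=%.1f fps, hueShift=%.0f deg\n",
                low, high, frameRate, hueShift);
}
```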

 

Zhiwan: Created a Max patch that allowed the audio to trigger the visuals. He could also control all the visuals manually, changing them in real time during the performance.

 

Brittany: Provided video that we used for the visuals, and also performed poetry, samples of which I’ve provided below:

a blindman’s eyebrows
condensing the autumn fog
into beads of light

squeezing his eyes shut,
the cat yawns as if about
to eat the spring world.

black winter hills
nibbling the sinking sun
with stark stumpy teeth.

All the haikus are by Richard Wright, an American poet, who wrote them during his last months of life.

Gwen: Also provided visuals and performed the last poem. She also lit matches, which gave our performance a more primal aesthetic (fire = nature).

Buried by Shlohmo (Maya and Clair)

Maya Kaisth and Clair Chin

The concept for this video started from the idea of creating a visual representation of motion within music. We envisioned using the body to create abstractions that would respond to music, and decided to make a piece that we could potentially play in relation to music. In the future, we hope to take the footage we created and relate it to different pieces in projected live performance.

All hell breaks loose

Final Video: https://www.youtube.com/watch?v=4cSU0eZ_1-8

Performers

Chung Wan Choi: Composition, Frame Drum

Tyler Harper: Bassoon, Bassoon Reed

Julia Wong: Face

 

Technical

John Mars: Face Tracking App, Lighting

Steve Chab: Audio Engineer

 

Chung, Tyler, John, Steve, and Julia formed a group with a unique mix of skills and interests. Since most of us play an instrument in one way or another, our initial ideas started with the concept of juxtaposing and contrasting live acoustic performance with electronic sound effects. Originally we wanted to make a piece where contact mics on cardboard or random objects would trigger MIDI instruments in Ableton Live. Then John came up with the idea of controlling Live effects with OSC data triggered from a facial recognition app. So in the end we decided to have Tyler on the bassoon, Chung improvising on the hand drum, and Julia responding to both performances and controlling the Live effects by making faces in front of the face-tracking app.

We decided to have Tyler play the solo without any effects the first time, with the lights only on his face. After the lights went off in the first round, Chung and Julia came in with their improvisations. Tyler switched to using his bassoon reed to create loud cacophonous sounds. This is when we all went wild and all hell broke loose.

Since Tyler loves octatonic scales on the contrabassoon, Chung composed a short solo for it. Here’s the score and the audio track she exported from Finale.

Score: Contra

Audio track exported from Finale:

 

Steve set up all the instruments and effects in Ableton Live, then used Livegrabber to retrieve OSC data from John’s facial recognition app.

Notes & Peaks: makes MIDI notes from audio peaks

Live plugins: vocoder, grain delay, reverb, limiters

Max for Live plugins: threshold, notes

Screenshots:


John was the programmer for our face-detection app. Facestrument is an app for OS X (heavily based on kylemcdonald/ofxFaceTracker and FaceOSC) that outputs OSC data according to facial gestures as seen by a webcam. The app tracks a single face with computer vision, analyzes it, applies a mesh, then sends out data for each feature of the face, for example the width of the mouth or the amount of nostril flaring. We also mapped the data so that the values are all between 0 and 1, which makes it easier to control the Live effects. The face is represented on screen as a caricature with a simple bi-colored, thick-lined outline.
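That 0-to-1 mapping is just a clamped linear rescale of each raw gesture value between a calibrated minimum and maximum. A standalone sketch of that step in C++ (the calibration range and the OSC-style address below are placeholders for illustration):

```cpp
#include <algorithm>
#include <cstdio>
#include <string>

// Standalone sketch of the normalization step: rescale a raw gesture
// measurement (e.g. mouth width from the face tracker) into 0..1 between a
// calibrated min and max, then pair it with an OSC-style address.
// The calibration range and address below are placeholders.
struct Gesture {
    std::string address;   // e.g. "/gesture/mouth/width"
    float rawMin, rawMax;  // calibrated range of the raw tracker value
};

float normalize(const Gesture& g, float raw) {
    float t = (raw - g.rawMin) / (g.rawMax - g.rawMin);
    return std::min(std::max(t, 0.0f), 1.0f);   // clamp so Live effects stay in range
}

int main() {
    Gesture mouthWidth{"/gesture/mouth/width", 10.0f, 18.0f};
    // A half-open mouth lands somewhere in the middle of the 0..1 range.
    float value = normalize(mouthWidth, 14.0f);
    std::printf("%s %.2f\n", mouthWidth.address.c_str(), value);  // -> 0.50
}
```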

This is what we see:

(animated gif of the face detection)

 

Here’s Steve’s test of taking audio peaks and turning them into MIDI notes. He hooked those notes up to a drum machine and played a beat on his forehead with guitar plugs.

Audio peaks/threshold to MIDI note test
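The peak-to-MIDI idea boils down to an envelope follower plus a threshold with a bit of hysteresis: every time the level crosses the threshold on the way up, fire a note. A rough C++ sketch of that logic (the threshold, note number, and velocity scaling are placeholders, not the settings from Steve’s actual setup):

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Rough sketch of the peaks-to-MIDI idea: follow the input level and fire a
// MIDI note each time it crosses a threshold on the way up.
struct PeakToMidi {
    float env = 0.0f;
    bool above = false;          // are we currently above the threshold?
    float threshold = 0.3f;
    int note = 36;               // kick drum on many drum machines

    // Returns velocity 1..127 when a new peak is detected, 0 otherwise.
    int process(float sample) {
        env = 0.9f * env + 0.1f * std::fabs(sample);
        if (!above && env > threshold) {
            above = true;
            return std::min(127, 1 + static_cast<int>(env * 127.0f));
        }
        if (above && env < threshold * 0.5f)   // hysteresis: re-arm well below threshold
            above = false;
        return 0;
    }
};

int main() {
    PeakToMidi p;
    // Two taps separated by silence should produce two note-on events.
    std::vector<float> input;
    for (int tap = 0; tap < 2; tap++) {
        for (int i = 0; i < 50; i++)  input.push_back(0.9f);
        for (int i = 0; i < 200; i++) input.push_back(0.0f);
    }
    for (float s : input)
        if (int vel = p.process(s))
            std::printf("note on: pitch %d, velocity %d\n", p.note, vel);
}
```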

Before we got the face detection app to work, we were trying it out on TouchOSC.

Bassoon controlled by TouchOSC on Steve’s phone


 

Musical Simon

Maya Kaisth, John Mars, Tyler Harper

The concept for our robotic instrument started with us considering creating an instrument in response to another instrument. In the case of Musical Simon, the responding instrument could be whatever instrument the player uses, including their voice. Since Simon is a memory game, we decided that our instrument should be a game as well, so we created an application.

In terms of design, we decided to use the pentatonic scale – the notes C, D, E, G, and A.


We decided the application should have two versions: one where the player hears the notes and sees their letters, and another where no letters appear, creating a harder version of the game.
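The game loop itself is the classic Simon pattern: append a random pentatonic note each round, play the whole sequence back, then check the player’s answer. The real app’s code is linked at the bottom of this post; the console-only C++ sketch below stands in for it, with typed letters replacing pitch detection and playback:

```cpp
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <string>
#include <vector>

// Console-only sketch of the Musical Simon loop: the sequence grows by one
// random pentatonic note each round, is "played" by printing the letters
// (or hidden, for the harder version), and the player answers by typing.
// In the real app, playback and pitch detection replace the console I/O.
const std::string NOTES = "CDEGA";   // pentatonic scale used in the app

int main(int argc, char** argv) {
    bool showLetters = !(argc > 1 && std::string(argv[1]) == "--hard");
    std::srand(static_cast<unsigned>(std::time(nullptr)));

    std::vector<char> sequence;
    while (true) {
        sequence.push_back(NOTES[std::rand() % NOTES.size()]);  // add one note per round

        std::cout << "Round " << sequence.size() << ": ";
        for (char n : sequence)
            std::cout << (showLetters ? std::string(1, n) : std::string("*")) << ' ';
        std::cout << "\nYour answer (e.g. CDG): ";

        std::string answer;
        std::cin >> answer;
        if (answer != std::string(sequence.begin(), sequence.end())) {
            std::cout << "Wrong! You reached round " << sequence.size() << ".\n";
            break;
        }
    }
}
```

The hypothetical --hard flag in the sketch hides the letters, mirroring the second, harder version of the game.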


We ran into a few different interface problems after completing this design, so we ended up changing it.

(https://vimeo.com/120982477)

In the future, we would like to fine-tune the way the app interprets and plays back sound.

Code for app available here: https://github.com/marsman12019/simon