Sound in Motion


Maya Kaisth and Clair Chin

PROJECT DESCRIPTION: This project is composed of four interactive components: a tree-supported rope swing, a sound recorder connected to headphones, an LED light strip, and infrared proximity sensors. We set up the proximity sensors to send their readings to an Arduino, which controlled when the LED strip would light. Once a user of the installation was close enough to the sensors, the lights turned on. Users then began to swing while listening to the sounds of wind rushing through the headphones connected to the swing.
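For reference, the control logic can be sketched roughly like this in an Arduino-style program. The pin numbers, threshold value, and simple on/off strip control below are illustrative assumptions, not our exact wiring:

// Illustrative sketch: turn an LED strip on when an IR proximity
// sensor reports a nearby object. Pin assignments and the threshold
// are placeholders, not the installation's actual wiring.

const int SENSOR_PIN = A0;   // analog-output IR proximity sensor
const int STRIP_PIN  = 9;    // LED strip switched through a transistor
const int THRESHOLD  = 400;  // raw ADC value meaning "someone is close"

void setup() {
  pinMode(STRIP_PIN, OUTPUT);
  Serial.begin(9600);        // handy for finding a good threshold
}

void loop() {
  int reading = analogRead(SENSOR_PIN);  // 0-1023
  Serial.println(reading);

  // Many IR proximity sensors read higher when an object is closer,
  // so a reading above the threshold turns the strip on.
  if (reading > THRESHOLD) {
    digitalWrite(STRIP_PIN, HIGH);
  } else {
    digitalWrite(STRIP_PIN, LOW);
  }

  delay(50);  // simple rate limit / debounce
}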

PROJECT OBJECTIVES: Our hope in making this installation was to create a new experience out of a familiar action. The installation combines auditory and visual experience with movement made by the user. By amplifying sound we were able to heighten the user's experience of swinging, and the interactive lighting was meant to visually attract and appeal to curious users approaching the piece. We were pleased that we were able to integrate technology into this commonplace and nostalgic action in a way that is meaningful to it.


Mushroom synthesis

I heard you like mushroom synthesizers in your audio-visual synthesis engine, so I put a mushroom synthesis engine in your engine so you can synthesize mushrooms while you synthesize. Based on this.


----------begin_max5_patcher----------
2359.3oc4ck0aaarE9Y6eECXKtnMMVgyJIySoEE2GtOTTTf9TQgAkzXKFHRp
RR4kaQyu8xYgZwQb7PZwwVTAIxhKl7b93Y4aNyYX96KuvaZ9C7ROvGA+A3hK
96Ku3B4tD63B81W3kF+vrkwkxSyaVdZJOqx68piUwenRt+eJtjOGjmApVvA2
G+3U4qq.UqqxKRhWB9Ofe8mAqhqls.L8Qv+iWjOkGmB9u7r47hx7rOBVTUsp
7ie3COV+6sdxT9GJp9MF9J9re+WxatYKSx3yxWmIuiD8Nmtd5zk7U4IpcCm3
qOP15zjrk7JobC06TJCIY2dcAeVkRuCY0+J.bf3SXDV7i.zDevet2MnLYNWb
9X8dSlKU67oe9JD1a6srV7atm96cADHUZbwsIYhiEoOVY0iKkWWOu8NaoLK1
w+b4khOdukOdVlbGexswIYeo45cWbQVbJukitGHgZGjvHAvBXHIXoPIb3Nnz
N3QzggC5lqdQs7TwKtlmE2npMvwh76aj1F3q1Bp1fKtJIOa2SUckqdbEWIgd
kI2lEuz6869Mw+tYYdbk3KKSJq0lFAdUAur45tih5KzL4GT4mHxNJYY7c74W
GWUUjLccEe62J0ORzOSDn9x077aZ1cy92U6Wlmca6OY16TqgkhJKO2FDw+.G
KMUY8cUf+jCe7XowE6fGMIKop1Y9qercnyRglWAoMf2dmz55SZioOQcbokdy
O5nYeF+9ZKuuJnTEXJ3FuNFOfFPkw.hj13nIzZi.zgszosXp23HcSd11Ga+n
HRnWaVuSiytcq05VKtuJ9f3ZVl7+U3esb0CzpsX32kvuuFK.0hR8MsqvFiIf
JruLPAT6832BvEZNjY6.mSAj763EwKWBjW9NhGXVzjfZ.gBo1.HAuo.jZqvZ
4nypLjHCbpzXsB2lqieeccNT9C+1bqDtTRWKidTEowRUfM7lTkq374cEXYJN
JDYHoPrICIB5jvy5a.oqKWTjmmV1UvfDgrELvj2TfQKYopi3OOkWVBPcFJHR
nP41EfL4ugXGFJfcOU0.mcpET5ceolVFoq7VQQxzQTpxAxLDAOZPjlA5oDPg
BrGnPiBfZieWczCXmo6DEr00KLxHb4ep350uz9ZnPGP1HT.iF2Y8MLNjjdFb
2lwg.IG8wgHKmxv5UtNcJunmnhxXyLESH7Mjw1fGDC562UrDoXRoJ8UH0DVF
dhydf+vpBv2dC7Ce22dC5c0bB992U+M76pMfn+.76OHxgedhW5w1EPLgcAm3
X2UBBEHZWYTDDJAEpuECFjMZXd0OTBSQVTsI5n.klsLYkvhxO..6peGjH8zv
PzyOhGJYrXTEI9SWMrzPEJTBRTizxn3QATICTE1WfREhx7HeniiQ9LKu7Kck
rfNbtJUGlYDkfip.UAztGnRiVP1yGmhDc1l7SS.kfnOe1OB87ElPL6goSrrd
8rjC9pvPpIaxL4RB9rrlC8qLf3HkoFVBo0A6MYpAGKrFf8DlvQgV3Q5OVBbg
5KLYynavimzfSXcEnHnPI1DZAGTb3HAn5KHQrnhn3fQAHsZQbYdgXDfS5b8W
ZpQEUFiJDZzlhc1NtFMLoopadfM3wAGze3Kcmntbd1Ig1DHGetBRpoP0NPBc
NOWyMy6fEy0LFd1FYROruFzxbjI+wSE759TMzfTLhxAz3rwGcNL.YsQSfESS
JJ3bc7wsMkoOenKMaAhQeRD87MzkFlXVTsX33X7LyimUaQ0iwHqWYE54oIvH
TEbJzOiMCZo+tW5wrDXLPN7DiBkXwhrNouqFglAyYtTm81URHbZu8k4EuVMf
P+fHFiolhSnMPTvKFh3Ewk7qeUApOmTMY08IYyyuuygl09VHI0.nenpQVZwG
iYlavAyxuU22.jqwHhuue.l4c7hyH.gaWNIkWt.T9XV0hGAeR9TA.mr8ueZd
Q78WmlOmCjKRvC2u4QOK4x.UmMhgFKjBDez5DuZ065Z1PEIOrOn5xP4Ef7pE
szjd9FlRce0J.QMYwDicVFz+Du8nDlgyDXPOJPttjA502AyX9N1nxxRfZp.X
a7ckK8Mw5Oq6Misp4EYpIZGxLZvQO58J6qCBVZv2z3ZIR4ahT9lF4adRr7Yz
oAJjqkbswTeMfvjcRJ1ZsVFIFPhtReJPxlom3EQUVXDz75e3v3Et+3UyOUx9
qB38Wo7phb.B7o3YUI2w69bH2fgrcak3VyRdzRR5fVUuez3I93NLRG1KlF+F
dgGa9DxeK4Kqhm7l0Ppzh8uO1UlutXVy8T+BT.rUqmyKqyJt48eversT.6bR
KRlOmmsav44IkhZxIgrC+PrKhCzFwA5DwIzBzA4NzIvBwQL8pNRbX1X67FSb
vNSbn13Z8b1NoIy0ijTGGW1kTPeUIgoZJD6skZPiDJTdl6MZ6irxAsP4PcP4
Tc3jlQDymoT0vgQArwZQzbwtxZgZg7H5KUiQ9dBhFFHfPhxh.qx5SFJKBqT.
xKPAzMiyfo.DarHnNK2CEai7PbmEJxF4wgwWg197xQxiuMjEPty9wJ4A5L4g
XS9Pp6nKXm7fbm7DdL3ubXBCApY1Agoa1Z.hfRrhep6.T1..nDHRAgr8.T1v
.n1jSE6PD0lTjXG5xXSJRBrabNnPr7kKl9MMASQdRr0P7D1FM.2SM.qadYxl
sFBM.ZqMZGz.huzuhnBag0c.yPE1x2VqH2XUisIwzSD5AUdBs0quKOgwpkyh
1FU0cS6uEgs6p5Wt0.7zGaURqP2g11j0B6NZsXqxA4tRxgw1Rq0Q3CxVZ+NR
df1VHAGIO91VW.2HOHahthcWzUTvwuLgh0pJ0UkBzJE.5NG.jUz9cW.BjMAP
ehU4qt7.cWBOjMAPidaINtqJEHnknC7si3vbm3XSxEnCMdrXVQHNUZd6TNFz
dyFT+jlmjmKHRNzVppS1o6r0.jmCZU8BcmsFLz1RG3H4gb7IxnotDEsCOlng
oNFPhkynYWFjqp6.0SPKsoMnGlhEBswcmD0KljH0K+D0zwI230RAvuHEPWWo
WWMH3EoApt38IaoKPFd6aBpWKsC4tQVBsgKBzcoag9uoXNZC5z2QkoZ6r3Uq
tiWTpujRAwKM9y4xddK38xMSxTaJuhdE76RZNeY6J5EWLaQREeV05BUSu8Pn
pc67DsKWQ15D8+8DUem+mK+W.QKGM7A
-----------end_max5_patcher-----------

From the Middle Outwards

From the Middle Outwards is a live performance based on the theme of discovery by Amber Jones, Joe Mallonee, Jack Taylor, and Alexander Panos.

Amber Jones performed on violin and did the sound editing. She also worked with Jack to compose a melody to play on top of Alexander's backing track.

Joe Mallonee created and performed the live visuals, wrote the lyrics, and gave feedback.

Jack Taylor performed the guitar and lyrics and also created the main motif that was played throughout the performance.

Alexander Panos created the backing track, edited sounds, and mapped out the overall flow of the performance.

The piece was composed in seven parts, with layers gradually added. Breakdown sections were scattered throughout until the seventh section, where the totality of the sound is distorted, builds in intensity, and ends.


Amber hooked up contact microphones to the bridge of her violin and to the sound post of Jack’s guitar. These inputs were run into Ableton Live where Amber placed distortion effects on Jack’s guitar and an EQ, reverb, and a grain delay filter on her violin.


Alexander Panos created and edited recordings and presets in FL Studio to build the backing track and connect it with the theme.

Nature Sounds Interacted

Team Members: Yury, Zhiwan, Brittany, Gwen

Our performance was based on field recordings of nature (animals, trees, etc.) combined with interactive visuals and a live performance of poetry. The goal was to create a free-flowing, avant-garde performance that was never the same in any two performances.

Roles:

Yury Merman: I found a myriad of high-quality nature sounds online (mainly on SoundCloud) and chopped up the samples. I also processed many of the sounds, either with electronic, synthetic-sounding effects or with mixing effects such as EQ and filtering.

I used Logic Pro for the sound design and editing, and then used Ableton to perform and trigger the sounds.

I had various nature sounds on different tracks, and they would play at random on their own. I could also control the triggering of the sounds, switching up parts much as a DJ or producer would with a standard electronic track.

I had an outline of the audio structure where it would start slow, build up, have many differing sounds, and then vary throughout until the end of the performance, during which the sounds became more minimal.

Ableton was also connected to Max for Live, which allowed the audio to trigger effects on the visuals we displayed during the performance. Based on features such as frequency content and intensity, the visuals would change in ways such as frame rate and color/filters.
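The mapping itself is just a rescaling of audio features into visual parameters. As a rough illustration of the idea (this is not the actual Max for Live patch; the feature names, ranges, and scaling are made up for the example):

// Illustration of mapping audio features to visual parameters.
// Feature ranges and scaling factors are assumptions for the example.
#include <algorithm>
#include <cstdio>

struct VisualParams {
    double frameRate;  // playback frame rate of the video
    double filterMix;  // 0 = untouched color, 1 = fully filtered
};

// loudness and brightness are assumed to arrive already normalized
// to 0..1 (e.g. RMS level and relative high-frequency energy).
VisualParams mapAudioToVisuals(double loudness, double brightness) {
    loudness   = std::clamp(loudness, 0.0, 1.0);
    brightness = std::clamp(brightness, 0.0, 1.0);

    VisualParams p;
    p.frameRate = 5.0 + loudness * 25.0;  // quiet = 5 fps, loud = 30 fps
    p.filterMix = brightness;             // brighter sound = heavier color filter
    return p;
}

int main() {
    VisualParams p = mapAudioToVisuals(0.8, 0.3);
    std::printf("frame rate %.1f fps, filter mix %.2f\n", p.frameRate, p.filterMix);
}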

 

Zhiwan: Created a Max patch that allowed the audio to trigger the visuals. He also had manual control of all the visuals, changing them in real time during the performance.

 

Brittany: Provided video that we used for the visuals, and also performed poetry, samples of which I’ve provided below:

a blindman's eyebrows
condensing the autumn fog
into beads of light

squeezing his eyes shut,
the cat yawns as if about
to eat the spring world.

black winter hills
nibbling the sinking sun
with stark stumpy teeth.

All the haiku are by Richard Wright, an American poet, who wrote them during his last months of life.
Gwen: Also provided visuals and performed the last poem. She also lit matches, which gave our performance a more primal aesthetic (fire = nature).

Deux regards perdus vers l’horizon

Deux regards perdus vers l’horizon is a live performance for amplified cello, amplified sitar, and electronics performed by Jake Bernsten, Jean-Patrick Besingrand, Caitlin Quinlan, and Kristian Tchetechko.

After a quick brainstorming session, the idea for a piece mixing instruments from different traditions (classical, Indian, electronic) came naturally. The poetic idea of the piece was the starting point of its composition. After a few sketches, we set up a fixed formal structure that allowed us to improvise inside of it; proceeding this way made the performance more coherent. The formal structure is close to a perfect arch form. The material is derived from the night-time raga Yaman Kalyan, with the note C sharp as a polar reference. After a first section based on this raga, a noisy element is introduced little by little, leading to a middle section based on noisy sounds. After this section, the raga reappears progressively.

 

First beautifully handwritten sketch by Caitlin

 

Final score

 

The sitar and the cello are both amplified and routed into a Max patch. This patch, conceived by Kristian, includes a sample recorder and shuffler for the cello and a randomized pitch delay for the sitar. Both instruments benefit from a strong reverberation.
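Kristian built this in Max, but the shuffler idea is easy to describe in code: record the instrument into a buffer, then play it back as randomly chosen slices. A rough C++ sketch of that behaviour (the sample rate, slice length, and sine-wave stand-in for the cello are invented for the example):

// Rough sketch of a record-and-shuffle buffer, the idea behind the
// cello treatment. All constants and the test signal are placeholders;
// the real version is a Max patch.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int sampleRate = 44100;
    const int sliceLen   = sampleRate / 4;        // 250 ms slices
    const double twoPi   = 6.283185307179586;

    // Stand-in for four seconds of recorded cello: a 220 Hz sine.
    std::vector<float> recorded(sampleRate * 4);
    for (std::size_t i = 0; i < recorded.size(); ++i)
        recorded[i] = std::sin(twoPi * 220.0 * i / sampleRate);

    // Shuffled playback: keep picking random slice start points
    // and appending that slice to the output.
    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<std::size_t> pick(0, recorded.size() - sliceLen);

    std::vector<float> output;
    while (output.size() < recorded.size()) {
        std::size_t start = pick(rng);
        output.insert(output.end(),
                      recorded.begin() + start,
                      recorded.begin() + start + sliceLen);
    }
    std::printf("produced %zu shuffled samples\n", output.size());
}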

Jake, the central element of the piece, controls some samples in Ableton through a MIDI keyboard, allowing him to interact with multiple parameters of the sound.

An important part of the performance resides in the visual element generated by the Max patch. The acoustic instruments, as well as the samples controlled by Jake, are connected into an x-y matrix that visually represents the changing stereo field. This visual element is a concrete representation of the poetic idea of the piece.
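One way to picture that mapping (a guess at the general idea, not taken from Kristian's patch): each source's left/right balance gives its horizontal position on the display and its overall level gives its vertical position.

// Guess at how a stereo signal could be placed on an x-y display:
// x from the left/right balance, y from the overall level.
// Purely illustrative; not the actual patch.
#include <algorithm>
#include <cstdio>

struct Point { double x, y; };  // display coordinates in 0..1

Point stereoToXY(double left, double right) {
    double total = left + right;
    Point p;
    p.x = (total > 0.0) ? right / total : 0.5;  // 0 = hard left, 1 = hard right
    p.y = std::clamp(total / 2.0, 0.0, 1.0);    // louder = higher on screen
    return p;
}

int main() {
    Point p = stereoToXY(0.2, 0.6);  // mostly panned right, moderate level
    std::printf("x = %.2f, y = %.2f\n", p.x, p.y);
}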

Program note:

Deux regards perdus vers l’horizon represents the perturbations experienced by two people who little by little drift apart and then find themselves back together. The piece depicts different moments of this process. The acoustic instruments are the representation of these two people. The electronic part symbolizes the main interest the two have in common, which is the basis of the relationship and the basis of the piece.

Video of the performance edited by Kristian:

 

 

The Internet Aesthetic

Team Members: Amanda Marano, Chelsea Lane, Mutian Fu, Jaime Dickerson

Our performance was based on the performance that Jesse showed us in class using the MIDI controller, as well as other electronic pop artists we listened to online, such as Madeon. Our goal was to create a fun upbeat dance piece that was unique every time it was played.

Tools/Roles:

We used three different controllers during our performance. Jaime used a MIDI keyboard of her own, connected directly to the mixing board and output to the speakers. The effects were generated by different settings on her keyboard, as well as by a voice modifier and a microphone attached to the keyboard. Jaime played this keyboard during the final performance.

Matty was in charge of a MIDI controller with knobs that was connected to a Max patch, which directly controlled a video she had edited together from YouTube clips of My Keepon dancing. Three knobs controlled the different RGB color levels in the video; if the levels were all set to 1, the video was restored to its original color levels. She used this board during the final presentation.
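The color control itself comes down to multiplying each channel of every video frame by its knob value, so that three knobs at 1.0 leave the image untouched. A small sketch of that scaling (illustrative only; the real version lives in the Max patch):

// Sketch of the knob-to-color mapping: each knob (0..1) scales one
// RGB channel, so knobs at 1, 1, 1 leave the frame at its original colors.
#include <algorithm>
#include <cstdint>
#include <cstdio>

struct Pixel { std::uint8_t r, g, b; };

Pixel applyKnobs(Pixel in, double kr, double kg, double kb) {
    auto scale = [](std::uint8_t v, double k) {
        return static_cast<std::uint8_t>(std::clamp(v * k, 0.0, 255.0));
    };
    return { scale(in.r, kr), scale(in.g, kg), scale(in.b, kb) };
}

int main() {
    Pixel p{200, 120, 60};
    Pixel out = applyKnobs(p, 1.0, 0.5, 0.0);  // full red, half green, no blue
    std::printf("%u %u %u\n", (unsigned)out.r, (unsigned)out.g, (unsigned)out.b);
}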

Max patch screenshot

 

This is the original version of the video Matty edited that was used in the Max patch:

Chelsea and Matty created an Ableton Live file that included sound bites the group got from SoundCloud, including dubstep drum beats and Latin rhythms. Matty mapped each group of sounds onto a different page of a third MIDI controller, along with different sound effects. One sound can be played from each group, or page, and any or all of the effects on that page can be activated or deactivated for that sound.


Chelsea wrote a rough composition for using this Ableton Live file during performance. Amanda performed using this board in the final presentation, and she used the starting sounds from the composition but deviated from there until the end of the piece.

The composition used the format page-clipNumber and initially looked like this:

Intro:

10-1

9-3 (Add soon after 10-1)

8-1 (Add only once music builds in intensity)

We rehearsed with Chelsea using the Ableton MIDI controller before we added the audio effects, and with Amanda using the controller afterwards for the final presentation. Each run-through was ultimately completely improvised, with both Jaime and Matty reacting and playing in response to whatever Chelsea or Amanda was playing with the Ableton sound clips and effects.

Here’s the final presentation:

 

 

Buried by Shlohmo (Maya and Clair)

Maya Kaisth and Clair Chin

The concept for this video started from the idea of creating a visual representation of motion within music. We envisioned using the body to create abstractions that would respond to music, and decided to make a piece that we could potentially play in relation to music. In the future, we hope to take the footage we created and relate it to different pieces in projected live performance.

All hell breaks loose

Final Video: https://www.youtube.com/watch?v=4cSU0eZ_1-8

Performers

Chung Wan Choi: Composition, Frame Drum

Tyler Harper: Bassoon, Bassoon Reed

Julia Wong: Face

 

Technical

John Mars: Face Tracking App, Lighting

Steve Chab: Audio Engineer

 

Chung, Tyler, John, Steve, and Julia made up a group with a unique mix of skills and interests. Since most of us play an instrument in one way or another, our initial ideas started with the concept of juxtaposing and contrasting live acoustic performance with electronic sound effects. Originally we wanted to make a piece where contact mics on cardboard or random objects trigger MIDI instruments in Ableton Live. Then John came up with the idea of controlling Live effects with OSC data triggered by a facial recognition app. So in the end we decided to have Tyler on the bassoon, Chung improvising on the hand drum, and Julia responding to both performances and controlling the Live effects by making faces in front of the face-tracking app.

We decided to have Tyler play the solo without any effects the first time, with the lights only on his face. After the lights went off in the first round, Chung and Julia came in with their improvisations. Tyler switched to using his bassoon reed to create loud cacophonous sounds. This is when we all went wild and all hell broke loose.

Since Tyler loves octatonic scales on the contrabassoon, Chung composed a short solo for it. Here’s the score and the audio track she exported from Finale.

Score: Contra

Audio track exported from Finale:

 

Steve set up all the instruments and effects in Ableton Live, then used Livegrabber to retrieve OSC data from John’s facial recognition app.

Notes & Peaks: makes MIDI notes from audio peaks

Live plugins: vocoder, grain delay, reverb, limiters

Max for Live plugins: threshold, notes

Screenshots:


John was the programmer for our face-detection app. Facestrument is an app for OS X (heavily based on kylemcdonald/ofxFaceTracker and FaceOSC) that outputs OSC data according to facial gestures as seen by a webcam. The app tracks a single face with computer vision, analyzes it, applies a mesh, then sends out data for each feature on the face, for example the width of the mouth or the amount of nostril flaring. We also mapped the data so that the values are all between 0 and 1, which makes it easier to control the Live effects. The face is represented on screen as a caricature with a simple bi-colored, thick-lined outline.
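That 0-to-1 mapping is just a rescaling of each raw measurement against an expected range. A tiny sketch of the idea (the mouth-width numbers here are invented, not the app's real calibration):

// Sketch of normalizing a raw face-tracker measurement into 0..1
// before sending it out as OSC. The example range is invented.
#include <algorithm>
#include <cstdio>

double normalize(double raw, double minExpected, double maxExpected) {
    double t = (raw - minExpected) / (maxExpected - minExpected);
    return std::clamp(t, 0.0, 1.0);
}

int main() {
    // Pretend the tracker reports mouth width in arbitrary mesh units,
    // typically between 10 (closed) and 18 (wide open).
    double mouthWidth = 14.2;
    std::printf("mouth = %.2f\n", normalize(mouthWidth, 10.0, 18.0));  // ~0.53
}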

This is what we see:


 

Here’s Steve’s test of turning audio peaks into MIDI notes. He hooked those notes up to a drum machine and played a beat on his forehead with guitar plugs.

Audio peaks/threshold to MIDI note test
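The idea behind the threshold-to-notes chain is simple: watch the input level and fire a MIDI note each time it crosses the threshold from below. A rough sketch (the threshold, note number, and fake input are placeholders, not the settings in Steve's Live rack):

// Sketch of peak-to-note triggering: emit a note-on whenever the
// absolute sample level rises above a threshold.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const double threshold = 0.5;
    const int    note      = 36;  // e.g. the kick pad on a drum rack
    bool above = false;

    // Stand-in for audio input: a few taps of varying strength.
    std::vector<double> input = {0.1, 0.2, 0.7, 0.8, 0.3, 0.1, 0.6, 0.2};

    for (std::size_t i = 0; i < input.size(); ++i) {
        bool nowAbove = std::fabs(input[i]) > threshold;
        if (nowAbove && !above) {
            // Rising edge: a real version would send a MIDI note-on here.
            std::printf("sample %zu: note-on %d\n", i, note);
        }
        above = nowAbove;
    }
}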

Before we got the face detection app to work, we were trying it out on TouchOSC.

Bassoon controlled by TouchOSC on Steve’s phone
