Category Archives: Performance

Sonic Paint Brush

This piece evolved out of the work of three School of Music students. “Sonic Paint Brush” explores both the auditory and visual applications of synthesized sound. In a new take on incidental music, the work creates itself through the improvisation and collaboration of all three musicians, each of whom has their own “instrument” to augment and utilize.

Roles
Alex Panos – Performer, “Waveform” painter
Tyler Harper – Live video, programming
Chung Wan Choi – Performer, percussion

The whole idea was to compose visually rather than sonically, so the manipulation of the waveforms was a major aspect of this piece. Using a customizable XY oscilloscope VST called “Wave Candy”, we were able to display audio signals in a very unique way. Alexander built various sounds in FL Studio using plugins such as NI Massive and IL Harmor that would respond visually more than sonically. Starting with a single sine wave and gradually introducing different frequencies and harmonics, he was able to create very beautiful shapes. What is so interesting is that the images being displayed were not random: they all responded to the laws of signal processing and to how sound behaves in nature. Using an assortment of effects such as phasers, filters of different types, bitcrushers and downsamplers, unison detune, frequency modulation, and frequency shifting, he was able to morph the sounds into different shapes to continue the progression of the piece.
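An XY oscilloscope like Wave Candy plots the left channel on the horizontal axis against the right channel on the vertical axis, so the shapes follow directly from the math of the two signals. A minimal NumPy sketch (not the plugin itself; `xy_trace` is a hypothetical helper) of why a 90-degree phase offset draws a circle, and why changing the frequency ratio bends it into a Lissajous figure:

```python
import numpy as np

def xy_trace(freq_ratio, phase=np.pi / 2, n=1000):
    """Return (x, y) points as an XY oscilloscope would draw them:
    left channel on x, right channel on y."""
    t = np.linspace(0, 1, n, endpoint=False)
    left = np.sin(2 * np.pi * t)                       # one cycle of a sine
    right = np.sin(2 * np.pi * freq_ratio * t + phase)
    return left, right

# Equal frequencies, 90 degrees apart: the trace is a circle.
x, y = xy_trace(1.0)
print(np.allclose(np.hypot(x, y), 1.0))  # every point sits on the unit circle

# A 3:2 frequency ratio bends the same trace into a Lissajous figure.
x, y = xy_trace(1.5)
```

Feeding the two channels to a line plot reproduces the oscilloscope view; layering harmonics and effects, as in the piece, progressively deforms these base shapes.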

Working in Max, Tyler created a piece of art that uses sound waves as a paintbrush. The software allows a performer to play with the size, shape, and color of predefined objects. As the piece progresses, the transforming shapes reflect the mood of the sound being created by the other two performers. While the visuals are driven by sound waves, they do not add to the sound of the piece. This allows the objects to become as wild as possible without detracting from the beauty of the rest of the piece.


Chung wanted to use her DIY drum. To incorporate it with Ableton Live, she added a piezo element under the drumhead. The signal is transformed into melodic chordal sounds through a resonator plug-in, which amplifies specific frequencies that serve as the root of a chord. Two audio tracks were set up, one of them gated so that it only sounds when the drum is hit loudly. This setup provides a choice between two different chords, while the fundamental is manipulated through a MIDI keyboard.
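The two-track idea can be sketched as a simple threshold gate: the second chord only sounds on hard hits. A toy illustration (not the Ableton Gate device; the function name and threshold are made up):

```python
def gated(samples, threshold=0.5):
    """Crude gate: pass a sample through only when it is loud enough,
    so the gated track's chord sounds only on hard hits."""
    return [s if abs(s) >= threshold else 0.0 for s in samples]

hits = [0.1, 0.9, 0.2, -0.7, 0.05]  # piezo signal, one peak value per hit
print(gated(hits))  # → [0.0, 0.9, 0.0, -0.7, 0.0]
```

In the real setup both tracks receive the same piezo signal; the gate simply decides which resonator (and therefore which chord) responds to a given hit.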



€!r€u!+ Br3@k3r$

“Circuit Breakers”, created by Amber Jones, Amanda Marano, Caitlin Quinlan, and Jack Taylor, is a piece that explores the modern representation of data and an attempt to break it. In our project, we explored the art of datamoshing, circuit bending, and vocoding, all while projecting our visuals onto a pile of mirror balls to expand beyond the area that visuals are typically confined to.

FINAL VIDEO:

 


ROLES

Circuit Bending – Amanda Marano

Datamoshing/Visuals – Jack Taylor

Composition – Caitlin Quinlan

Sound Design – Amber Jones


PROCESSES

Circuit Bending

To circuit-bend our children’s toy, we decided to modify it to do a basic pitch-bend: that is, to create the ability to adjust the pitch and speed of the toy in real time. To do that, we first had to find the resistor that controlled the clock speed of the simple printed circuit board inside the toy. A clock in digital logic is a signal used to synchronize all of the signals sent throughout a digital circuit, so that everything runs in the order it is supposed to and nothing breaks. By changing the clock speed, either increasing or decreasing it, we change the speed of the entire circuit, and with it the pitch of the sounds (low pitch for low speeds, high pitch for high speeds).

To find this resistor, we opened the back of the toy and located all of the larger components connected with wires (in our case, two resistors, a capacitor, and the speaker). While playing the instrument, we touched the components’ leads with our fingers (a licked finger creates a short in the circuit) and noted how the sound quality changed with the short circuit. When we located the correct resistor, we snipped it out and soldered a potentiometer in its place. A potentiometer is a variable resistor that can be adjusted in real time by turning the dial or knob on top, either with your fingers or a screwdriver. After drilling a hole in the toy’s chassis and sticking the potentiometer through, we were able to change the speed and pitch of the sounds by spinning the knob while playing notes. During our performance, we used a microphone by the speaker, but didn’t otherwise alter the original sounds in any way. – Amanda
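Since every frequency the toy produces scales with its clock, the musical effect of turning the potentiometer can be estimated directly: the pitch shift in semitones follows the logarithm of the clock-speed ratio. A small sketch of that relationship (illustrative, not measured from the actual toy):

```python
import math

def pitch_shift_semitones(clock_ratio):
    """Semitone change when the toy's clock runs at clock_ratio times its
    normal speed. Doubling the clock doubles every frequency the circuit
    produces, which is +12 semitones (one octave)."""
    return 12 * math.log2(clock_ratio)

print(pitch_shift_semitones(2.0))  # → 12.0  (clock doubled: up an octave)
print(pitch_shift_semitones(0.5))  # → -12.0 (clock halved: down an octave)
```

This is why slowing the clock down, as described in the sound-design section below, drags both the tempo and the pitch of the toy downward together.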

Video of how the circuitbent piano functions:

Datamoshing

For our final project it was my job to create the visuals using a databending technique called datamoshing. Datamoshing is a process that removes certain frames from a video, which then allows it to glitch. For the visuals I wanted to play with the idea of nature’s relationship to technology. I thought that videos of flowers blooming would be interesting to datamosh and would contribute to our overarching theme of glitch. I converted the videos into a different format using ffmpegX, then imported those clips into a program called Avidemux. With this program I was able to remove the I-frames (the full keyframes) from the clips, so that the remaining frames, which only record the differences in movement from one frame to the next, get applied to the wrong image. I was also able to copy and paste frames, resulting in explosions of color and distorted movement. – Jack
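Conceptually, a datamoshed clip is just delta-coded video decoded on top of the wrong keyframe. A toy model in NumPy (a stand-in for what Avidemux does on real codecs; the frame arrays here are tiny stand-ins for images):

```python
import numpy as np

def encode(frames):
    """Toy codec: an I-frame (full image) followed by delta (P) frames."""
    return [frames[0]] + [frames[i] - frames[i - 1] for i in range(1, len(frames))]

def decode(deltas, start=None):
    """Rebuild frames by accumulating deltas. Passing `start` substitutes a
    different image for the I-frame, which is what removing a keyframe at a
    cut effectively does."""
    img = deltas[0] if start is None else start
    out = [img]
    for d in deltas[1:]:
        img = img + d
        out.append(img)
    return out

clip = [np.full((2, 2), v, dtype=float) for v in (0, 1, 2)]
deltas = encode(clip)
# Normal decode reproduces the clip exactly...
assert all(np.array_equal(a, b) for a, b in zip(decode(deltas), clip))
# ...but decoding the same deltas on top of the *wrong* starting image
# smears that image forward under another shot's motion: the glitch.
moshed = decode(deltas, start=np.full((2, 2), 9.0))
print(moshed[-1])
```

Copying and pasting delta frames, as described above, piles the same motion onto an image repeatedly, which is where the "explosions of color" come from.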

ORIGINAL VIDEO:

Composition

For our project I was interested in using the vocoder. After Amber created the background music I composed a melody that could be sung with the vocoder over it. The toy piano had a kind of funny and sad tone to it when it was played so I paid homage to that in my lyrics and wrote:

“My circuits are bending

I can’t sing in tune

But maybe I can dance”

I imagined the vocoded voice as the voice of the toy piano; the lyrics are meant to express the “Frankenstein”-like nature of the piano, but in a sardonic way. These lyrics are repeated throughout the song. In the live performance I included the singing based on what other elements were involved. – Caitlin

Sound Design 

All of the post-processing effects and instruments used in our performance (besides the raw sound of the toy piano) were created in Ableton Live. Our original idea was a fun, upbeat, high-pitched and slightly overwhelming dance track; however, after bending the toy piano, I realized the sounds were completely different than intended. It turned out that the ‘broken’ sound of the toy piano was more apparent when its clock was slowed down. Paired with the visuals, the song quickly changed from 160 bpm to 48 bpm. The song itself consisted of seven tracks: four instrument tracks, a drum kit, the carrier for the vocoder track, and the vocoder. Every instrument rack/preset I made was built with the ‘Analog’ instrument. I wanted the sounds to be obviously and unapologetically digital, but slightly detuned to make them feel a bit more organic. All of the sounds were made with either two saw/square oscillators or two saw oscillators.
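The detuned two-oscillator idea can be sketched outside Ableton. A naive NumPy version (not the Analog device; aliasing is ignored) of a saw pair a few cents apart, which produces the slow beating that makes a digital patch feel more organic:

```python
import numpy as np

SR = 44100  # sample rate, Hz

def saw(freq, seconds=1.0):
    """Naive (non-band-limited) sawtooth in [-1, 1)."""
    t = np.arange(int(SR * seconds)) / SR
    return 2.0 * (t * freq % 1.0) - 1.0

def detuned_pair(freq, cents=7.0):
    """Two saws a few cents apart, like a two-oscillator Analog patch.
    100 cents = 1 semitone, so the ratio is 2 ** (cents / 1200)."""
    ratio = 2.0 ** (cents / 1200.0)
    return 0.5 * (saw(freq) + saw(freq * ratio))

mix = detuned_pair(110.0)  # one second of a detuned A2 saw pair
```

Writing `mix` to a WAV file (or listening to it any other way) reveals the characteristic slow phasing between the two oscillators; widening `cents` speeds the beating up.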

Patch screenshots (Analog presets): guitar-like chords; melody; a spacey grain-delayed saw/square patch; the guitar chords LPF’d and detuned further; a bit-reduced 808; the vocoder carrier; the vocoder.

Amour, suppôts et supplications

Amour, suppôts et supplications is a representation of the different phases in a social and/or amorous relationship that we enter into all throughout our lives. Articulated around different movements connected by combinations of recurring timbres, this piece brings about numerous contrasts and emotions.

The inspiration for the piece comes from Chelsea. It is a universal subject everybody is confronted with at some point. To bring this idea to life, we decided to work in two groups: electronic and acoustic. We agreed on a scale from which to build all our material: an acoustic mode on B. The separation of acoustic and electronic is a representation of the duality in a couple.

Draft of the introduction

 

Harmonic material

The first part, purely acoustic, represents the time of the meeting and the birth of emotions. Indecision and mystery are the main elements, transcribed in the music by imitation between the instruments, blended in a deep reverberation. The following movements represent the battle for power, the sharing of power, the engagement, and (in our case) the distance between the couple with the presence of a perturbing element.

The formal structure of the piece is fixed. During our last working session, we finally put the different elements together: the electronic part (loops composed by Yury and Kristian using Logic Pro, Ableton, and Max), and the acoustic part (Chelsea on electric harp and Jean-Patrick on prepared electric guitar). At this point we also took into consideration the importance of staging the piece and including a theatrical aspect to illustrate our inspiration.

Hairy trio

 

Harp jail

 

Working hard!

The lighting is focused on the instruments in order to depersonalize the musicians, who are the actors of the performance. This depersonalization is a way to universalize the subject. The last movement of the performance illustrates the distance and the moment of doubt in the couple. The presence of Yury and Kristian at the end of the piece symbolizes the external factors which can create tension, doubt, and distance in a couple. The simultaneous presence of the two characters involved in the relationship and the two perturbing elements makes us question ourselves: Where are we going? What part do external factors play in our happiness? What is really important?

 

Video edited by Chelsea Lane

A Singularity

For my final project, I wanted to find a way to combine many of the field recordings I had made, along with other, much longer sound recordings and experiments from over the semester, into one final piece. These recordings consisted of sounds in my studio (an hour-long recording of the radiator banging), moments from A Brief History of Time (mostly recordings of Stephen Hawking), recordings from the Bayernhof Museum, and live and altered recordings from the robot project I was a part of (recorded afterwards and independent of the scope of the initial project). I had been playing with these sounds for several weeks in Ableton Live and had become interested in the mood and message created when I combined them.

While I had initially planned to build another robot to play along with the recordings (triggered by motion or movement) and held within several sculptural containers, once I started adding the recordings into Ableton Live and mapping them to the Alias 8 controller, I realized I wanted more control over the sounds. After going over some of Jesse’s past posts related to the Kinect, I recalled that one of the initial projects I had wanted to create when I first came here was a body-controlled sound piece. I had done some explorations with the Kinect before, but hadn’t used it with Max, and I was excited about the possibilities with Ableton Live through Synapse. Unfortunately, Synapse can only be run with the first generation of Kinect (which I thought I had), so Synapse showed missing areas when I tried to run it on the computer in the sound lab. Another program offered a way around this and seemed fairly easy to get up and running, but unfortunately I had issues with that one as well.

Shifting gears, I instead used the dp.kinect external that I had already installed (and reworked the patch). I then added some of the new Vizzie effects in Max 7, along with a subpatch that allows changes in amplitude to alter both the delay and zoom of the visuals. After much trial and error, I set the parameters of the Kinect so that the delay allowed for a jump in the time and place of the captured body, corresponding with the mood and message of the samples and effects I had created in Ableton Live.

singularity ableton live patch

singularity max7 patch screenshot

Version of the final performance/installation:

Hopefully I can replace that with a better version tomorrow.

Good Vibrations

The setting is unremarkable: the beds are made, the lights are on, the bathroom is clean, the floors are swept. The room is empty, except for the invasive electrical wires and the sustained vibrating hum that pervades the space.

The installation took place at a Days Inn. Surface transducer speakers were connected to every object in the room, giving “life” to each object and making the ordinary visible. This is documentation of my final project.

From the Middle Outwards

From the Middle Outwards is a live performance based on the theme of discovery by Amber Jones, Joe Mallonee, Jack Taylor, and Alexander Panos.

Amber Jones performed with a violin and also did sound editing. In addition, Amber worked with Jack to compose a melody to play on top of Alexander’s background track.

Joe Mallonee created and performed the live visuals, wrote the lyrics, and gave feedback.

Jack Taylor played guitar, performed the lyrics, and created the main motif that was played throughout the performance.

Alexander Panos created the backing track, edited sounds, and mapped out the overall flow of the performance.

The piece was composed in seven parts, with layers gradually added. Breakdown sections were scattered in until the seventh section, where the totality of sound is distorted, builds in intensity, and ends.


Amber hooked up contact microphones to the bridge of her violin and to the sound post of Jack’s guitar. These inputs were run into Ableton Live where Amber placed distortion effects on Jack’s guitar and an EQ, reverb, and a grain delay filter on her violin.


Alexander Panos created and edited recordings along with presets in FL Studio in order to achieve the backing track and connect it with the theme.

Nature Sounds Interacted

Team Members: Yury, Zhiwan, Brittany, Gwen

Our performance was based on field recordings of nature (animals, trees, etc.) combined with interactive visuals and a live performance of poetry. The goal was to create a free-flowing, avant-garde performance that was never the same any two times it was performed.

Roles:

Yury Merman: I found a myriad of high-quality nature sounds online (mainly on SoundCloud) and chopped up the samples. I also processed many of the sounds, either with electronic, synthetic-sounding effects or with mixing effects such as EQ and filtering.

I used Logic Pro for the sound design and editing, and then used Ableton to perform and trigger the sounds.

I had various nature sounds on different tracks, and they would play at random on their own. I could also control the triggering of the sounds, switching up parts sort of like a DJ/producer would with a standard electronic track.

I had an outline of the audio structure where it would start slow, build up, have many differing sounds, and then vary throughout until the end of the performance, during which the sounds became more minimal.

Ableton was also connected to Max for Live, which allowed the audio to trigger effects on the visuals we displayed during the performance. Based on features such as frequency and intensity, the visuals would change in ways such as frame rate and color/filters.
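That audio-to-visuals mapping can be illustrated with a toy function: loudness and spectral content in, visual parameters out. The feature names and ranges here are illustrative, not taken from the actual Max for Live patch:

```python
def visual_params(amplitude, centroid_hz):
    """Map audio features to visual controls (ranges are made up):
    louder audio -> faster frame rate; higher spectral centroid -> hue
    further around the colour wheel."""
    amp = min(max(amplitude, 0.0), 1.0)       # clamp to 0..1
    frame_rate = 10.0 + 50.0 * amp            # 10..60 fps
    hue = min(centroid_hz / 8000.0, 1.0)      # 0..1 colour-wheel position
    return frame_rate, hue

print(visual_params(0.5, 4000.0))  # → (35.0, 0.5)
```

In the real patch these values would be computed per audio block and streamed to the visual engine, so the imagery breathes with the sound.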

 

Zhiwan: Created a Max patch that allowed the audio to trigger the visuals. He could also control all the visuals manually, changing them in real time during the performance.

 

Brittany: Provided video that we used for the visuals, and also performed poetry, samples of which I’ve provided below:

a blind man’s eyebrows
condensing the autumn fog
into beads of light

squeezing his eyes shut,
the cat yawns as if about
to eat the spring world.

black winter hills
nibbling the sinking sun
with stark stumpy teeth.

All the haikus are by Richard Wright, an American poet, who wrote them during his last months of life.
Gwen: Also provided visuals and performed the last poem. She also lit matches, which gave our performance a more primal aesthetic (fire = nature).


Deux regards perdus vers l’horizon

Deux regards perdus vers l’horizon is a live performance for amplified cello, amplified sitar, and electronics performed by Jake Bernsten, Jean-Patrick Besingrand, Caitlin Quinlan, and Kristian Tchetechko.

After a quick brainstorming session, the idea for a piece mixing instruments from different traditions (classical, Indian, electronic) came naturally. The poetic idea of the piece was the starting point of its composition. After a few sketches, we set up a fixed formal structure which allowed us to improvise inside of it. Proceeding this way made the performance more coherent. The formal structure is close to a perfect arch form. The material is derived from the night-time raga Yaman Kalyan, with the note C sharp as a polar reference. After a first section based on this raga, a noisy element is introduced little by little, leading to a middle section based on noisy sounds. After this section, the raga reappears progressively.

 

First beautifully handwritten sketch by Caitlin

 

Final score

 

The sitar and the cello are both amplified and fed into a Max patch. This patch, conceived by Kristian, includes a sample recorder and shuffler for the cello and a randomized pitch delay for the sitar. Both instruments benefit from a strong reverberation.
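The cello sample shuffler amounts to: record a buffer, chop it into slices, replay the slices in random order. A rough Python analogy of that behavior (function names hypothetical; real audio would use sample buffers, not integers):

```python
import random

def shuffle_buffer(samples, slice_len, seed=0):
    """Chop a recorded buffer into fixed-length slices and replay them in
    random order, loosely analogous to the cello shuffler in the Max patch."""
    rng = random.Random(seed)  # seeded so the sketch is reproducible
    slices = [samples[i:i + slice_len] for i in range(0, len(samples), slice_len)]
    rng.shuffle(slices)
    return [s for sl in slices for s in sl]

buf = list(range(8))           # stand-in for a recorded cello phrase
out = shuffle_buffer(buf, 2)
print(out)                     # the same samples, slices in shuffled order
```

No material is lost or added, only reordered, which is why the shuffled cello still sounds like the cello while losing its original phrasing.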

Jake, the central element of the piece, controls samples in Ableton through a MIDI keyboard, allowing him to interact with multiple parameters of the sound.

An important element of the performance resides in the visual element coming from the Max patch. The acoustic instruments as well as the samples controlled by Jake are connected into an x-y matrix that visually represents the changing stereo field. This visual element is a concrete representation of the poetic idea of the piece.

Program note:

Deux regards perdus vers l’horizon represents the perturbations experienced by two people who little by little distance themselves from each other and then find themselves back together. The piece depicts different moments of this process. The acoustic instruments represent these two people. The electronic part symbolizes their main common interest, which is the basis of the relationship and the basis of the piece.

Video of the performance edited by Kristian: