For our final project, we continued exploring interactions with spatialized audio in hopes of developing new paradigms for thinking about space, sound, and the human body.
Using a Kinect, we developed a system that simultaneously records the sounds and gestures of a user. An individual can start a recording at any point in space, then traverse a path with a specific body part (here, the right hand). This path is recorded in step with the audio, which allows us to trace back the precise location at which a sound was recorded. As soon as the user ends a recording, it begins to loop over both time and space, following the path the recording took. Audio is rendered at the position of another body part (here, again the right hand), which allows the user to “play” space.
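The core mechanism can be sketched as a path recorder that pairs each skeleton frame's joint position with its time offset into the audio, then wraps playback time around the recorded duration so the loop retraces the path. This is a minimal illustrative sketch in plain C++, not the project's actual openFrameworks/Max implementation; the `PathRecorder` class, `Vec3` struct, and linear interpolation scheme are all assumptions made for clarity.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sketch (not the project's actual code): pair each
// audio-timeline timestamp with the 3D position of the tracked joint,
// then loop the path so playback can be spatialized along it.
struct Vec3 { float x, y, z; };

class PathRecorder {
public:
    // Called once per skeleton frame while recording: store the
    // right-hand position alongside its time offset into the audio.
    void addSample(float tSeconds, Vec3 handPos) {
        times.push_back(tSeconds);
        points.push_back(handPos);
    }

    // After recording ends, playback loops: wrap an absolute playback
    // time onto the recorded path and interpolate between samples.
    Vec3 positionAt(float tSeconds) const {
        float duration = times.back();
        float t = std::fmod(tSeconds, duration);
        for (std::size_t i = 1; i < times.size(); ++i) {
            if (t <= times[i]) {
                float a = (t - times[i - 1]) / (times[i] - times[i - 1]);
                return lerp(points[i - 1], points[i], a);
            }
        }
        return points.back();
    }

private:
    static Vec3 lerp(Vec3 p, Vec3 q, float a) {
        return { p.x + (q.x - p.x) * a,
                 p.y + (q.y - p.y) * a,
                 p.z + (q.z - p.z) * a };
    }
    std::vector<float> times;
    std::vector<Vec3>  points;
};
```

In use, the spatializer would query `positionAt` each audio block and render the looped sound from that point in space, so the sound literally travels the gesture that recorded it.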
In developing this tool, we discovered how important a frame of reference is for "sculpting" spatialized audio. Here we used a wooden box, on top of which we created forms. At larger scales, richer physical environments would likely be needed to keep track of where sounds sit in space.
Questions we hope to continue exploring include:
- How does audition influence proprioception and vice versa?
- How can the realm of traditional media (e.g. sculpture) inform the dimensionalization of audio?
Developed in openFrameworks and Max, with Synapse used to read Kinect skeleton data.
All source code, including the Max patches, is available online at: https://github.com/bensnell/Soundscaping_v2