Test pattern is a system that converts any type of data (text, sounds, photos and movies) into barcode patterns and binary patterns of 0s and 1s. Through its application, the project aims to examine the relationship between critical points of device performance and the threshold of human perception. In this first edition of the project, an audiovisual installation, test pattern involves a sequence of tests for machines and humans, comprising visual patterns converted and generated from sound waveforms in real time. The installation consists of 8 computer monitors and 16 loudspeakers aligned on the floor in a dark space. The 8 rectangular surfaces of the screens flicker intensely with black and white images, floating and convulsing in the darkness. 16-channel sound signals are mapped as a grid matrix, passing through and slicing the space sharply. Via a real-time computer program, the signal patterns are converted into 8 tightly synchronized barcode patterns. The velocity of the moving images is ultra-fast, reaching some hundreds of frames per second at certain points, providing a performance test for the devices and a response test for visitors’ perceptions.
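Ikeda’s actual software isn’t public, but the core idea — turning an audio waveform into a 1-bit barcode frame — can be sketched in a few lines. This is purely my own guess at the mechanism: slice the buffer into columns and paint each column white or black depending on whether the local signal energy crosses a threshold (the window width and the 0.5 threshold are arbitrary choices, not Ikeda’s).

```python
import math

def waveform_to_barcode(samples, width=64, threshold=0.5):
    """Quantize an audio buffer into a 1-bit barcode row: each column is
    white (1) if the local RMS energy exceeds the threshold, black (0)
    otherwise. A hypothetical sketch, not Ikeda's actual algorithm."""
    chunk = max(1, len(samples) // width)
    bars = []
    for i in range(width):
        window = samples[i * chunk:(i + 1) * chunk]
        if not window:
            bars.append(0)
            continue
        rms = math.sqrt(sum(s * s for s in window) / len(window))
        bars.append(1 if rms > threshold else 0)
    return bars

# a test tone: full-scale 440 Hz for the first half, quiet for the second
tone = ([math.sin(2 * math.pi * 440 * t / 8000) for t in range(4000)] +
        [0.1 * math.sin(2 * math.pi * 440 * t / 8000) for t in range(4000)])
pattern = waveform_to_barcode(tone)  # white bars, then black bars
```

Rendered at hundreds of frames per second, even this trivial mapping produces the kind of strobing black-and-white flicker the installation is built on.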
See more on Ryoji Ikeda’s website: http://www.ryojiikeda.com
The performer’s voice is processed in eight independent channels and fed to the speakers; the movement of each speaker is directly connected to the frequency and amplitude of the generated sound.
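The piece doesn’t document its control mapping, but a plausible per-channel scheme is easy to sketch: estimate each channel’s amplitude (RMS) and rough pitch (zero crossings), then map those to how far and how fast the speaker swings. Every constant and name here is my assumption, not the artist’s.

```python
import math

def channel_drive(samples, sample_rate=8000, max_angle=30.0):
    """Hypothetical mapping from one voice channel to speaker motion:
    RMS amplitude sets how far the speaker swings (degrees),
    a zero-crossing count gives a rough frequency estimate (Hz)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    # count negative-to-nonnegative transitions: ~one per waveform cycle
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)
    freq = crossings * sample_rate / len(samples)
    swing = max_angle * min(1.0, rms / 0.7)
    return swing, freq

# one second of a full-scale 440 Hz tone on one channel
voice = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
swing, freq = channel_drive(voice)  # full swing, pitch estimate near 440 Hz
```

A real installation would smooth these values over time before driving motors, but the amplitude-to-travel, frequency-to-speed idea is the same.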
I stumbled upon another good example of drawing audio waves, similar to the example we did in class. It’s by an artist named David Letellier, and the forms he creates are absolutely spectacular. I’m not entirely sure whether all or only parts of the audio track in the background are creating the forms, but the variety of forms is still pretty amazing.
I would also recommend checking out some of his other work, most of which has to do with sound and space (architecture):
So, I just recently stumbled on the “Black MIDI” subculture on YouTube, which is essentially a music sub-genre of fast-paced anime/gaming-style songs with literally millions of notes, playing hundreds of MIDI notes simultaneously on a digital visualization application in a crazy-fast, borderline seizure-inducing frenzy of audio-visual madness.
It’s honestly terrifying but fascinating and I thought I’d share.
This one is a bit more abstract in an audio sense and focuses more on visuals.
There are a million more videos on YouTube if you just search “Black MIDI”.
Since we were on the topic of ambisonics in class, talking about the different varieties of surround sound and whatnot, I thought I’d share an interesting project that Ableton did about a year ago with minimalist/house artist Stimming, in collaboration with a hardware company that makes an enormous, room-sized speaker setup. Pretty neat stuff.
And if you liked what you heard, here’s his entire set mixed down to binaural on Soundcloud:
There’s an ingenious game for kids called Compose Yourself that allows anyone to create beautiful musical compositions with their very own orchestra. Of course, everything’s been pre-recorded, but there’s a (close to) infinite number of musical compositions you could make.
It works in very much the same way as (and is likely modeled after) John Cage’s Fontana Mix. However, instead of layering multiple transparencies on top of each other and interpreting the result, each transparency card is a discrete compositional element—one that can be played in four orientations (front or back flip and top or bottom rotation). Here’s a video explaining it in more detail:
Fine. Maybe it’s not the most original compositional tool, but for kids who’ve never touched a piece of music before? It’s magical! It gets you thinking about the components of music, and how these rhythmic and melodic components differ depending on how they are spatially and temporally oriented. It truly is a masterful teaching tool, and a fun one at that. Check out their website here.
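The four-orientations idea is easy to make concrete. If you model a card as a list of (beat, pitch) events on a fixed-length strip — my representation, not the game’s — then flipping the transparency reverses time and turning it upside down inverts pitch, giving exactly four readings of the same card:

```python
def card_orientations(card, length=8, top=10):
    """A card as (beat, pitch) events on a strip of `length` beats with
    pitches 0..top (assumed values). Flipping reverses time; rotating
    inverts pitch; doing both gives the fourth orientation."""
    normal = list(card)
    flipped = sorted((length - 1 - b, p) for b, p in card)            # time-reversed
    inverted = sorted((b, top - p) for b, p in card)                  # pitch-inverted
    both = sorted((length - 1 - b, top - p) for b, p in card)         # retrograde inversion
    return [normal, flipped, inverted, both]

melody = [(0, 2), (3, 5), (7, 9)]   # a rising three-note card
variants = card_orientations(melody)
```

Music theorists would call these the prime, retrograde, inversion, and retrograde inversion of the fragment — the same moves serialist composers use, hiding in a kids’ game.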
Hatsune Miku (初音ミク) is a humanoid persona voiced by a singing synthesizer application, portrayed as a 16-year-old pop star with long turquoise twintails. She was developed by Crypton Future Media using Yamaha’s Vocaloid 2 and Vocaloid 3 engines; her voice was created by taking vocal samples from voice actress Saki Fujita at a controlled pitch and tone. Each of those samples contains a single Japanese or English phone which, when strung together, form full lyrics and phrases.
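The “strung together” part is the essence of concatenative synthesis. Vocaloid does far more than this (pitch shaping, timing, smooth joins between phones), but the basic lookup-and-join step can be sketched in a toy form — the phone names and sample values below are made up for illustration:

```python
def sing(lyric_phones, phone_bank):
    """Toy concatenative synthesis: look up a recorded sample for each
    phone in the lyric and join them end-to-end into one waveform.
    Real systems also crossfade and pitch-shift at the joins."""
    voice = []
    for phone in lyric_phones:
        voice.extend(phone_bank[phone])
    return voice

# hypothetical two-phone bank and lyric
bank = {"mi": [0.1, 0.2], "ku": [0.3, 0.4]}
phrase = sing(["mi", "ku"], bank)  # [0.1, 0.2, 0.3, 0.4]
```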
To aid in the production of 3D animations, MikuMikuDance was developed as an independent program. The free software enabled a boom in fan-made and derivative characters, and also helped promote the Vocaloid songs themselves.
By August 2010, over 22,000 original songs had been written for Hatsune Miku, and later reports confirmed that by 2011 she had 100,000 songs to her name. She’s played a number of “live” concerts at venues all over the world.
The promo for the 2015 YouTube Music Awards, made by a group of artists and designers led by Tarik Abdel-Gawad, features video shot entirely in-camera of near-holographic laser light forms, using nothing but the aliasing between the lasers and the camera shutter. What’s so incredible about the work is that Tarik and his team devised a way to practically exploit inherent qualities of the system that most others would find non-ideal and detrimental to the image-making process. To be clear, the forms are not visible to the actors but can be seen and captured on video.
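The aliasing they’re exploiting is the same “wagon-wheel effect” you see when film makes spinning wheels appear to crawl or run backward: a pattern repeating faster than half the frame rate gets folded down to a slow apparent motion. A minimal sketch of that folding (my formula for the standard aliasing relation, not the team’s actual rig):

```python
import math

def apparent_frequency(true_hz, frame_rate):
    """Fold a repetition rate into the aliased band
    [-frame_rate/2, frame_rate/2]: what the camera appears to see.
    Negative values mean apparent backward motion."""
    f = math.fmod(true_hz, frame_rate)
    if f > frame_rate / 2:
        f -= frame_rate
    return f

# a pattern repeating 1001 times/sec, filmed at 25 fps, drifts at ~1 Hz
slow_drift = apparent_frequency(1001, 25)
# 24 repetitions/sec at 25 fps appears to creep backward at 1 Hz
backward = apparent_frequency(24, 25)
```

Tune the laser’s scan rate to sit just off a multiple of the shutter rate and the folded-down remainder becomes a slowly evolving form — visible to the camera, invisible to anyone standing in the room.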