Color Interface

Abstract: Color Interface was developed to sonify the total amount, and the changes, of a user-specified RGB color. The program reads the data stream from the integrated webcam and outputs real-time soundscapes based on music-generative algorithms. Color Interface was developed for use in a performance art project.

Categorization: Interaction Design, Algorithm Design


Context and Project Goals

The project was commissioned by Alessandro Damiano, an artist whose practice centers on performance art. In one of his pieces, blue acrylic paint was poured over a model for an extended period of time on a large structure covered with white fabric.

The objective of the project was to create a system that would track the color, quantify its amount, and apply data-sonification algorithms to it. The resulting experience would present viewers with a clear one-to-one relationship between the visual and auditory elements: the color itself would be perceived to drive all sonic processes.

Interaction Design

The system was designed around the physical process portrayed in the video performance above. The model at the center of the stage is regarded as the de facto user of the system, while the acrylic paint acts as the medium of interaction. The constituents of the physical space set the boundaries of the overall interaction.

Since the user both physically interacts with the paint and has freedom of movement within the space, two main modes of interaction exist: purely visual and physical. The two modes, which pragmatically overlap during the course of a performance, are mapped to a single auditory modality.


Design and Technical Briefing

The software was developed with Max/MSP, which handles the color tracking, and Csound, which handles the music-generative algorithms and synthesis. The integrated webcam stream is used, and an RGB value is picked by clicking on a pixel within the frame. Pixels falling within specified threshold values around that color are counted, yielding the total tracked value. The user sets minimum and maximum values to define the lower and upper bounds.
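The counting and normalization step can be sketched as follows. This is a minimal Python/NumPy illustration of the idea, not the actual Max/MSP patch; the function name, the per-channel threshold, and the calibration bounds are illustrative assumptions.

```python
import numpy as np

def tracked_percentage(frame, target_rgb, threshold, lower_bound, upper_bound):
    """Count pixels near target_rgb and normalize against user-set bounds.

    frame: H x W x 3 uint8 array (one webcam frame).
    threshold: per-channel tolerance around the picked color.
    lower_bound / upper_bound: user-calibrated pixel counts mapped to 0 and 1.
    """
    target = np.asarray(target_rgb, dtype=np.int16)
    # A pixel matches when every channel lies within the threshold window.
    diff = np.abs(frame.astype(np.int16) - target)
    matches = np.all(diff <= threshold, axis=-1)
    count = int(matches.sum())
    # Normalize to 0..1 with respect to the calibrated bounds, then clamp.
    span = max(upper_bound - lower_bound, 1)
    return min(max((count - lower_bound) / span, 0.0), 1.0)
```

In the real system this percentage is recomputed per frame and streamed to the synthesis engine.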


The value, in its percentage form (tracked / total, with respect to the specified bounds), is routed to Csound, which contains the music-generative algorithms. A pitch cluster of 24 notes is generated semi-linearly as the color percentage increases, with the maximum pitch count coinciding with the maximum color percentage. Dynamics (volume) for each pitch are linearly mapped. Sound synthesis is additive, based on sawtooth and sine waves. Small microtonal variations around each pitch are implemented to model the small fluctuations of the tracked color.
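The mapping from color percentage to the pitch cluster can be sketched as below. This is a hedged Python approximation of the Csound logic: the base frequency, semitone spacing, and the ±15-cent detune range are assumptions chosen for illustration, not values from the actual orchestra.

```python
import random

def pitch_cluster(pct, base_freq=220.0, max_pitches=24, detune_cents=15.0, rng=None):
    """Map a color percentage (0..1) to a cluster of detuned pitches.

    The number of sounding pitches grows with pct, reaching 24 at pct = 1.0.
    Each pitch receives a small random microtonal offset (in cents) standing in
    for the fluctuations of the tracked color. Dynamics are mapped linearly.
    Returns a list of (frequency_hz, amplitude) pairs.
    """
    rng = rng or random.Random(0)
    n = max(1, round(pct * max_pitches)) if pct > 0 else 0
    notes = []
    for i in range(n):
        # Semitone steps (100 cents) above the base frequency form the cluster,
        # plus a small microtonal deviation per note.
        cents = i * 100 + rng.uniform(-detune_cents, detune_cents)
        freq = base_freq * 2 ** (cents / 1200)
        amp = pct  # linear dynamics: louder as more color is tracked
        notes.append((freq, amp))
    return notes
```

Each (frequency, amplitude) pair would then drive one additive-synthesis voice built from sawtooth and sine partials.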