This is [Not] a Line
Context and Motivations
The initial motivation for the work came from a technical standpoint, specifically to explore the possibilities that the Web Audio API and Processing.js, coupled with the state of the art of present-day browsers, would open up in a development context: I had previously worked with motion tracking via integrated sensor systems, and the idea of moving it purely to software, and specifically to the web, was a natural (and exciting) one.
When it came to the design, the goal was to create a seamless experience for the user, at once intuitive and immersive. My main design objectives can be summarized as follows:
1. User motion must be the only catalyst of all processes, guiding all of the compositional elements of the experience, auditory and visual.
2. The user must feel they are the co-creator of their experience and be presented with ‘enough’ complexity (heuristically speaking) to stay engaged while exploring the interaction.
3. Each experience must be unique with regard to the audio-visual generative processes involved, yet consistent as a whole.
Each of the three current experiences consists simply of a white page to be ‘filled in’ interactively by the generative lines based on the user’s motion. The absence of any other visual element to interact with leaves the user completely immersed in the generated graphics. The user is the interface.
The visual algorithms were constructed to ‘prime’ directional spatial exploration and to give a visual guide to the auditory controls associated with the resulting motion. I’ll illustrate with a case-by-case design breakdown.
In the first experience the detected (x, y) coordinates trigger the generation of both vertical and horizontal lines. The user is naturally guided to explore vertical and horizontal motions, so the associated auditory controls have been chosen to map movements in these two directions. Specifically, an increase in the y coordinate is associated with an increase in the total number of auditory constituents. As for the x coordinate, proximity to the left or right side of the screen adds constituents to the left or right audio channel respectively.
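The first experience’s mapping can be sketched as a pure function; the names (`mapMotionToAudio`, `active`, the 640×480 canvas in the usage note) are illustrative assumptions, not taken from the source:

```javascript
// Hypothetical sketch of the first experience's audio mapping.
// y (0..height, 0 at the top as in Processing) controls how many auditory
// constituents are active; x (0..width) weights the left/right channels.
function mapMotionToAudio(x, y, width, height, totalConstituents) {
  // More constituents as y grows toward the bottom of the canvas.
  const active = Math.min(
    totalConstituents,
    Math.floor((y / height) * totalConstituents) + 1
  );
  // Proximity to an edge weights the corresponding channel.
  const right = x / width; // 0 at the left edge, 1 at the right edge
  const left = 1 - right;
  return { active, left, right };
}
```

In the actual application these values would be consumed by the volume functions described in the technical briefing below; here they are just returned for clarity.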
The second experience presents the user with the opportunity to draw polygonal shapes. No particular direction is primed and the user is naturally drawn to explore spatially. A collection of auditory constituents, placed in a grid, is toggled on and off according to the detected position. Free auditory exploration follows a free visual one.
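A minimal sketch of the grid mapping, assuming a rectangular grid of constituents; the grid dimensions and function names are illustrative, not from the source:

```javascript
// Map the detected position to a cell index in a cols x rows grid.
function gridCell(x, y, width, height, cols, rows) {
  const col = Math.min(cols - 1, Math.floor((x / width) * cols));
  const row = Math.min(rows - 1, Math.floor((y / height) * rows));
  return row * cols + col;
}

// Toggle the auditory constituent assigned to the cell the user moved into.
function toggleConstituent(states, cell) {
  states[cell] = !states[cell];
  return states;
}
```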
In the third experience, lines are drawn to form a ‘focal point’ centered on the detected position. The user is naturally drawn to explore the radial distance from the center of the screen, so the auditory constituents have been mapped to be affected by the distance from that point.
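The radial mapping can be sketched as a normalized distance from the screen center; the normalization choice (farthest corner maps to 1) is an assumption for illustration:

```javascript
// Return a value in [0, 1]: 0 at the center of the screen,
// 1 at the farthest possible point (a corner).
function radialAmount(x, y, width, height) {
  const cx = width / 2;
  const cy = height / 2;
  const d = Math.hypot(x - cx, y - cy);
  const dMax = Math.hypot(cx, cy); // distance from center to a corner
  return d / dMax;
}
```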
The following technical briefing covers the most important points in the development of the application.
The webcam stream is accessed via the getUserMedia specification and handled within the Processing context. The user’s motion is computed as follows: every two consecutive frames are stored into arrays (PImage objects, in Processing lingo). For each pixel, if the variation in total color amount, taken as the sum of the RGB values, is greater than a specified threshold, the point is treated as a 'moving' pixel. Moving pixels are drawn as black points into a new image array, and the average of the x and y components of all such points is calculated to obtain the approximate (x, y) coordinate of the movement.
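The frame-differencing step above can be sketched on plain RGB arrays rather than Processing PImages; the flat `[r, g, b, r, g, b, …]` layout and the threshold value are assumptions for illustration:

```javascript
// prev and curr are flat RGB arrays of length width * height * 3.
// Returns the average (x, y) of all 'moving' pixels, or null if none moved.
function motionCentroid(prev, curr, width, height, threshold) {
  let sumX = 0, sumY = 0, count = 0;
  for (let i = 0; i < width * height; i++) {
    // Total color amount = R + G + B for each pixel.
    const a = prev[3 * i] + prev[3 * i + 1] + prev[3 * i + 2];
    const b = curr[3 * i] + curr[3 * i + 1] + curr[3 * i + 2];
    if (Math.abs(a - b) > threshold) {
      // A 'moving' pixel: accumulate its coordinates.
      sumX += i % width;
      sumY += Math.floor(i / width);
      count++;
    }
  }
  return count > 0 ? { x: sumX / count, y: sumY / count } : null;
}
```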
The (x, y) coordinates are handled both within the Processing context, to draw the generative visuals, and exposed as global variables read externally by the audio context.
The visual algorithms, written in the Processing language, are quite simple in construction. Their main design aspects are: (i) they make use of “line elements”; (ii) they draw 30 such lines at a time, sequentially, stored in an array; (iii) each line has a randomized alpha channel, for variety in intensity. The other design aspects of the algorithms can be readily inferred from their visual appearance, showcased in the previous section.
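The line-storage pattern can be sketched as follows; the object shape and the `addLine` name are illustrative assumptions (the original is written in Processing syntax, whereas this is plain JavaScript):

```javascript
// Keep at most 30 line elements at a time, each with a randomized alpha.
const MAX_LINES = 30;

function addLine(lines, x1, y1, x2, y2) {
  lines.push({
    x1, y1, x2, y2,
    alpha: Math.floor(Math.random() * 256) // randomized intensity, 0..255
  });
  // Drop the oldest line once the array exceeds 30 elements.
  if (lines.length > MAX_LINES) lines.shift();
  return lines;
}
```

Each draw cycle would then render every element in `lines` with its stored alpha.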
An audio context is created via the Web Audio API and multiple .mp3 files are loaded into buffers. All the files are played synchronously at page load and looped; playback is controlled by API functions affecting their volume. The (x, y) coordinates drive experience-specific 'volume functions', which affect the playback of the collection of audio files in different ways. The volume functions for each experience follow the design outlined in the previous section.
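A hedged sketch of this setup is below. The browser-only Web Audio calls are wrapped in `setupAudio()` (which runs only in a browser); the file URLs, function names, and the per-buffer `GainNode` wiring are my assumptions about the structure, not taken verbatim from the source:

```javascript
// Gain values are kept in [0, 1] before being applied by a volume function.
function clampGain(v) {
  return Math.min(1, Math.max(0, v));
}

// Load each file into a buffer, start all of them looping at once,
// and return one GainNode per file for the volume functions to drive.
async function setupAudio(urls) {
  const ctx = new AudioContext();
  const gains = [];
  for (const url of urls) {
    const data = await fetch(url).then(r => r.arrayBuffer());
    const buffer = await ctx.decodeAudioData(data);
    const source = ctx.createBufferSource();
    source.buffer = buffer;
    source.loop = true; // every file loops from page load
    const gain = ctx.createGain();
    gain.gain.value = 0; // silent until a volume function raises it
    source.connect(gain).connect(ctx.destination);
    source.start(0); // all files start synchronously
    gains.push(gain);
  }
  return gains; // later: gains[i].gain.value = clampGain(volumeFn(x, y))
}
```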
The project took about a month of conceptualization, research and design, and about a month of development.
The overall architecture can also be improved for better program execution. New chapters have been sketched, with regard to both the visual and the audio algorithms.