This is [Not] a Line

Abstract: This Is Not a Line is a web application consisting of a collection of unique interactive audio-visual experiences. The application employs a real-time webcam stream to approximate the user’s motion: the computed (x, y) coordinates act as the input to all algorithmic audio-visual processes, which run exclusively on the client side. The application is written in JavaScript and was developed for the Chrome browser; it was released in 2018, reaching approximately 200 unique users. Currently, This Is Not a Line consists of three unique experiences and can be perused at www.thisisnotaline.com. Future implementations are planned.

Classification: Experience Design, Interaction Design, Front-End Web Development, JavaScript Programming

Context and Motivations

The initial motivation for the work came from a technical standpoint, specifically to explore the possibilities that the Web Audio API and Processing.js, coupled with the state of the art of present-day browsers, would open up in a development context: I had previously worked with motion tracking via integrated sensor systems, and the idea of moving it purely to software, and specifically to the web, was a natural (and exciting) one.

When it came to the design, the goal was to create a seamless experience for the user, at once intuitive and immersive. My main design objectives can be summarized as follows:

1. User motion must be the only catalyst of all processes, guiding all of the compositional elements of the experience - auditory and visual.

2. The user must feel that they are the co-creator of the experience and be presented with ‘enough’ complexity (heuristically speaking) to remain engaged while exploring the interaction.

3. Each experience must be unique with regard to the audio-visual generative processes involved, yet the collection must remain consistent as a whole.

User Experience

Each of the three current experiences consists simply of a white page to be ‘filled in’ interactively by generative lines driven by the user’s motion. The absence of any other visual element to interact with allows the user to become completely immersed in the generated graphics. The user is the interface.

The visual algorithms were constructed to ‘prime’ directional spatial exploration and to give a visual guide to the auditory controls associated with the resulting motion. I’ll illustrate with a case-by-case design breakdown.

In the first experience, the detected (x, y) coordinates trigger the generation of both vertical and horizontal lines, so the user is naturally guided to explore vertical and horizontal motions. The associated auditory controls have therefore been chosen to map movements in these two directions. Specifically, an increase in the y coordinate is associated with an increase in the total number of auditory constituents, while proximity to the left or right side of the screen (the x coordinate) adds constituents to the left or right audio channel, respectively.
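
As a rough illustration, this mapping can be sketched as a function from the detected position to a level for each constituent; the number of constituents and the split into left- and right-panned halves are assumptions, not the actual parameters of the piece.

    // Hedged sketch of the first experience's mapping: y adds constituents,
    // x weights the left/right-panned halves. All parameters are illustrative.
    function levelsExperienceOne(x, y, width, height, n) {
      const active = Math.floor((y / height) * n);  // more constituents as y grows
      const left = 1 - x / width;                   // proximity to the left edge
      const right = x / width;                      // proximity to the right edge
      return Array.from({ length: n }, (_, i) =>
        i >= active ? 0 : (i < n / 2 ? left : right)
      );
    }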

The second experience presents the user with the opportunity to draw polygonal shapes. No particular direction is primed, and the user is naturally drawn to explore the space freely. A collection of auditory constituents arranged in a grid is triggered on and off according to the detected position: free auditory exploration follows free visual exploration.
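
A minimal sketch of the grid mapping, assuming the screen is divided evenly into cells and each cell owns one constituent (the grid size and the on/off behaviour shown here are illustrative):

    // Hedged sketch: the constituent whose cell contains the detected position
    // is switched on, the others off. 'rows' x 'cols' is an assumed grid size.
    function levelsExperienceTwo(x, y, width, height, rows, cols) {
      const col = Math.min(Math.floor((x / width) * cols), cols - 1);
      const row = Math.min(Math.floor((y / height) * rows), rows - 1);
      const levels = [];
      for (let r = 0; r < rows; r++) {
        for (let c = 0; c < cols; c++) {
          levels.push(r === row && c === col ? 1 : 0);
        }
      }
      return levels; // one level per grid cell / constituent
    }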

In the third experience, lines are drawn to form a ‘focal point’ centered on the detected position. The user is naturally drawn to explore the radial distance from the center of the screen, so the auditory constituents have been mapped to the distance from that point.
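
The corresponding sketch for the radial mapping, with the fade-in ordering of the constituents assumed rather than taken from the actual piece:

    // Hedged sketch: the normalized distance from the screen center drives the
    // levels, with constituents fading in one after another as distance grows.
    function levelsExperienceThree(x, y, width, height, n) {
      const maxDist = Math.hypot(width / 2, height / 2);
      const distance = Math.hypot(x - width / 2, y - height / 2) / maxDist; // 0..1
      return Array.from({ length: n }, (_, i) =>
        Math.max(0, Math.min(1, distance * n - i))
      );
    }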

Program Structure

The following technical briefing covers the most important points in the development of the application.

The program has been written in JavaScript. I have used Processing.js (the JavaScript port of the visual programming language Processing) to handle all of the visuals, and the Web Audio API for all audio.

The webcam stream is accessed via the getUserMedia specification and handled within the Processing context. The user’s motion is computed as follows: every two consecutive frames are stored into arrays - PImage objects in Processing lingo. For each pixel, if the variation in total color amount, taken as the sum of the RGB values, is greater than a specified threshold, the pixel is treated as a 'moving' pixel. Moving pixels are drawn as black points into a new image array; the average of both the x and y components of all such points is then calculated to obtain the approximated (x, y) coordinate of the movement.
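
A minimal sketch of this frame-differencing step, written here against raw canvas ImageData rather than Processing's PImage (the threshold value and function names are illustrative):

    // prev and curr are ImageData objects of identical dimensions, e.g. obtained
    // by drawing the <video> element (fed by getUserMedia) onto a canvas and
    // calling getImageData() on two consecutive frames.
    const MOTION_THRESHOLD = 60; // assumed difference in summed RGB values

    function approximateMotion(prev, curr) {
      const { width, height } = curr;
      let sumX = 0, sumY = 0, count = 0;
      for (let y = 0; y < height; y++) {
        for (let x = 0; x < width; x++) {
          const i = (y * width + x) * 4;
          const prevSum = prev.data[i] + prev.data[i + 1] + prev.data[i + 2];
          const currSum = curr.data[i] + curr.data[i + 1] + curr.data[i + 2];
          if (Math.abs(currSum - prevSum) > MOTION_THRESHOLD) {
            sumX += x; // 'moving' pixel: accumulate its position
            sumY += y;
            count++;
          }
        }
      }
      // The average position of all moving pixels approximates the motion coordinate.
      return count > 0 ? { x: sumX / count, y: sumY / count } : null;
    }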

[Figure: This Is Not a Line, slide1.jpg]

The (x, y) coordinates are handled both within the Processing context, to draw the generative visuals, and as global variables called externally by an audio context.
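
In sketch form, and with all names assumed rather than taken from the actual code, the glue between the two contexts is nothing more than a shared global that the visual loop writes and the audio loop reads:

    // Assumed structure: the Processing draw loop updates window.motion, and an
    // independent audio loop polls it to drive the per-experience volume functions.
    window.motion = { x: 0, y: 0 };

    function onMotionDetected(x, y) {   // called from the visual (Processing) side
      window.motion.x = x;
      window.motion.y = y;
    }

    function audioLoop(applyVolumes) {  // applyVolumes: a per-experience volume function
      applyVolumes(window.motion.x, window.motion.y);
      requestAnimationFrame(() => audioLoop(applyVolumes));
    }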

The visual algorithms, written in the Processing language, are quite simple in construction. Their main design aspects are: (i) they make use of “line elements”; (ii) they draw 30 consecutive such lines at a time, sequentially, storing them in an array; (iii) each line has a randomized alpha channel, for variety in intensity. The other design aspects of the algorithms can be readily inferred from their actual visual appearance, showcased in the previous section.
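
A rough canvas-2D sketch of these three aspects (the actual visuals are written in the Processing language, and everything here beyond the count of 30 and the randomized alpha is an assumption):

    const LINES_PER_STEP = 30;   // 30 consecutive line elements per detected position
    const lines = [];            // all line elements drawn so far, stored in an array

    function drawLines(ctx, x, y) {
      for (let i = 0; i < LINES_PER_STEP; i++) {
        lines.push({
          x1: x + i, y1: 0, x2: x + i, y2: ctx.canvas.height, // e.g. vertical lines through x
          alpha: Math.random()                                // randomized alpha for varied intensity
        });
      }
      for (const l of lines) {
        ctx.strokeStyle = 'rgba(0, 0, 0, ' + l.alpha + ')';
        ctx.beginPath();
        ctx.moveTo(l.x1, l.y1);
        ctx.lineTo(l.x2, l.y2);
        ctx.stroke();
      }
    }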

An audio context is created via the Web Audio API and multiple .mp3 files are loaded into buffers. All the files are played synchronously at page load and looped; playback is controlled by API functions affecting their volume. The (x, y) coordinates drive experience-specific 'volume functions', which affect the playback of the collection of audio files in different ways, following the design outlined in the previous section.
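
A hedged sketch of this setup, assuming one GainNode per file as the handle that the volume functions drive (file URLs and function names are placeholders):

    const audioCtx = new AudioContext();

    async function loadLoop(url) {
      const arrayBuffer = await (await fetch(url)).arrayBuffer();
      const buffer = await audioCtx.decodeAudioData(arrayBuffer);
      const source = audioCtx.createBufferSource();
      const gain = audioCtx.createGain();
      source.buffer = buffer;
      source.loop = true;
      source.connect(gain).connect(audioCtx.destination);
      return { source, gain };
    }

    async function startAll(urls) {
      const tracks = await Promise.all(urls.map(loadLoop));
      const t0 = audioCtx.currentTime + 0.1;
      tracks.forEach(t => t.source.start(t0));   // synchronous, looped playback
      return tracks;                             // each gain is driven by a volume function
    }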

A more in-depth presentation can be made available to interested parties: the HTML/CSS/JavaScript code has not been 'uglified' and can therefore be perused in its entirety on any web browser.

Future Implementations

The project took about a month of conceptualization, research, and design, and about a month of development.

Planned future implementations include better motion-tracking algorithms, motivated by the recent web-based JavaScript motion-tracking libraries that make use of machine-learning techniques.

The overall architecture can also be improved for better program execution. New chapters have been sketched, with regard to both the visual and audio algorithms.