
GETTING SUBSUMED IN THE SPACE

A report on what actually happened during Transient-II at net.congestion

by Tim Boykett


For those who came in late: net.congestion was a festival put on by various interesting people clustered around the r a d i o q u a l i a memeset, over three days in Amsterdam. I am sure there are other reports in this volume about the event. Saturday night was streaming performance night -- many performances were made, attempted, hacked or otherwise undertaken, dealing with streaming and various side issues. Time's Up, with a crew of international collaborators, presented some work based upon their ideas of "Closing the Loop". Those in the space heard a sound start near the stage and shift slowly to the left along a four-speaker system forming a square in the space. After the sound had moved this far, a second sound started near the stage -- it was similar to, but not the same as, the first sound. This moved off at the same rate as the first sound, which was continuing its movement. When the first sound was two thirds of its way around the room, the second sound was one third of its way and a third sound started up near the stage. Once again, this was similar to but different from the previous sound. These three sources continued to rotate in the space for twenty-five minutes or so, then faded to a scratchy nothingness. On a chat screen near the back of the stage, the words "what's going on" appeared -- this text is an attempt to reply (a little late) to that question, and hopefully to get some more collaborators into this little lab.

For years, people have been streaming sound here and there. The ideas of taking streams from elsewhere, manipulating them, playing with or over them, then passing them onwards have recurred with startling regularity. Unfortunately, these projects have succumbed to the weight of history by leaving little documentation of their existence other than happy musical memories or hazy scattered images of technological breakdown.

The presentation of these streamed performances, often very much based upon the iterative manipulation of the streams, strange improvisations and reactions to strange network effects, has also been problematic. The people involved know that what they're getting is coming from a long way away, but for the local listeners/viewers there is often no apparent difference between what they hear this way and what they would hear if some people had sent a few battered cassettes of their electronica backwards and forwards before the gig and jammed with them. At least the chance of technological breakdown would be a little less!

The performance at net.congestion marks the end of a sequence of work called Closing the Loop 2000 (CTL2000), which started from discussions between Time's Up and r a d i o q u a l i a in September 1999. The chosen field for this program of research was network collaboration, a phrase meant to help us move from a more solid existing background in network sound collaborations into the possibly hairier yet enticing realms of more intricate networking collaborations -- visuals and spatial work. The two heads of the problem indicated above can be labelled as "what has been/is being done" and "how do we present this work to the less than 100% technologically aware?" These are probably the two most important questions in any area of research: avoiding repetition and duplication of effort, and explaining yourself both to colleagues and the uninvolved.

I do not wish to touch upon the first question here. Suffice to say that there are more projects and products and far less analysis of these things than any of us had imagined. We are commencing our documentation and would love to hear from people with comparative writings or opinions of various kinds about products and procedures for network collaboration, whether it be long distance game playing, Rocket-like products, streaming networks or such things as the so-called "midi-over-IP" protocols.

The presentation issue is possibly the harder one -- at least it's the more interesting one! What can we do to convey what is actually going on in these collaborations and move it into the public realm?

A simple solution, and as I understand it the more common one, is to take various streams in, to mix them with locally produced sound and to whack the whole lot onto two channels of a mixing deck for the front of house. Sounds like rock and roll to me. The mixer gets to stand on a stage, fiddle knobs and make some cool soundscapes with what comes in. Optimally, there is a screen above the stage where all sorts of vaguely relevant or possibly totally irrelevant pictorial information is projected, possibly somehow synched with the sound, probably through the deft motions of the video mixer's fingers.

If there is a higher degree of collaboration going on, the streams coming into the presentation venue will be matched with at least one stream out, so that the contributors can hear what the others are doing, or more importantly what's happening in the main venue.

What we attempted here, helped along by our (technical) inability to get a stream out of the venue, was to work out what the functionality of the various parts involved in the presentation was, and to keep to the metaphors that then resulted.

As might be guessed from the title of the laboratory, Closing the Loop, we're interested in looping, in iterative processes. Thus, the topology of the streaming was to be a loop. The nodes were collaborators at various points on the net; each took in a stream from one other node, manipulated it in various ways (though we hoped not too much, so as not to render it unrecognisable) and then streamed it on to a server. We arranged for the nodes to take the streams from each other so as to form a loop. Since there was no node at the venue (we were unable to stream out), the nodes were all somehow "equal" in the loop as seen from the vantage point of the presentation venue.
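
To make the topology concrete, here is a minimal sketch in Python of how such a ring of nodes can be described. The node names, server address and stream mount points are hypothetical placeholders, not the addresses used on the night.

```python
# A minimal sketch of the loop topology described above -- not the actual
# CTL2000 set-up; node names, the server address and the .rm mount points
# are hypothetical placeholders.

NODES = ["perth", "linz", "los_angeles"]

def loop_topology(nodes):
    """Return, for each node, the stream it pulls and the stream it pushes,
    arranged so that the nodes form one closed loop."""
    topology = {}
    for i, node in enumerate(nodes):
        upstream = nodes[(i - 1) % len(nodes)]  # the node one step back in the loop
        topology[node] = {
            "pull": "rtsp://server.example/" + upstream + ".rm",  # stream taken in
            "push": "rtsp://server.example/" + node + ".rm",      # stream passed on
        }
    return topology

if __name__ == "__main__":
    for node, streams in loop_topology(NODES).items():
        print(node, "pulls", streams["pull"], "and pushes", streams["push"])
```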

The most striking thing about a loop of streaming audio is that there is an intrinsic delay built into the encoding and decoding process. Thus over the whole loop there is a delay -- in our case it was 58 seconds -- which will remain constant (due to details of the RealAudio encoding method -- I think this is true for all encoding technologies). Note that this has to be measured for all new set-ups -- observations for a single encode-decode delay range from 7 seconds to over 60. This can be viewed as a series of delay paths, or from the vantage point of one of the nodes, as one huge delay path. If we invert this view, we can see the buffered sound as an object that's looped, with a node moving along this object, receiving the audio encoded in it, manipulating and massaging it and placing it back in the object. Somehow this model allows us to see the process that is taking place more clearly.
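
As a rough worked example of this inverted view, the following sketch places the access points of three streams on a 58-second loop object. It assumes, for simplicity, equal delays on every hop, which real set-ups will not have.

```python
# A sketch of the "buffered sound object" view: the whole loop is a single
# object TOTAL_DELAY seconds long, and each stream is an access point
# moving along it. Equal per-hop delays are an assumption made here for
# clarity; in practice each encode-decode hop must be measured.

TOTAL_DELAY = 58.0   # measured round-trip delay for our set-up, in seconds
NUM_NODES = 3        # Perth, Linz, Los Angeles

def access_points(t, total_delay=TOTAL_DELAY, num_nodes=NUM_NODES):
    """Fraction of the way around the loop (0..1) of each stream at time t."""
    spacing = total_delay / num_nodes
    return [((t + i * spacing) % total_delay) / total_delay
            for i in range(num_nodes)]

# At any moment the three streams sit a third of the loop apart --
# the same spacing the audience heard as sounds a third of the room apart.
print(access_points(0.0))   # [0.0, 0.333..., 0.666...]
```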

The elements of this process are the nodes, and the looped sound object that lies in the encode-decode buffers of the streams. The classical approach is to look at the nodes, the active players, as the relevant objects. Thus one should take their streams and somehow present them together in the space; mix and match. Or, dislocate them from one another and present them separated. But as we can see from the discussion above, we might take the buffered sound object as the focus of our observation. This exists as a loop. If we were to lay this loop out in a room, we might somehow be able to observe it in its entirety. The loop has several access points -- the streams are somewhere on the loop at any given time. Thus we chose the following as the metaphor: the loop lies along a loop of speakers, which give us ways of listening in. The streams are decoded in the performance venue and are fed into a matrix mixer, a machine that under computer control can route sound from any input to any output. We programmed this to pan the streams from one speaker to the next around the loop, such that the time taken for one circulation matches the total delay time. Adjusting the exact relationship between the positions of the streams so that the sound loop object was correctly rendered was more difficult and was not completely satisfactory. That will be a challenge for next time!
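
A hedged sketch of what such pan automation can look like follows. The equal-power crossfade and the Python form are illustrative assumptions, not the control code that drove the actual matrix mixer.

```python
import math

# A sketch of the pan automation: each stream circulates around a ring of
# speakers, with one full circulation taking the total loop delay. The
# equal-power crossfade between adjacent speakers is an assumption for
# illustration; the control code driving the actual matrix mixer differed.

NUM_SPEAKERS = 4     # the square of four speakers in the venue
TOTAL_DELAY = 58.0   # one circulation = total loop delay, in seconds

def speaker_gains(t, stream_offset, num_speakers=NUM_SPEAKERS, period=TOTAL_DELAY):
    """Gain (0..1) of one stream at each speaker at time t.

    stream_offset staggers the streams around the ring, e.g. i * period / 3
    for the i-th of three streams."""
    # Position of the stream on the speaker ring, in speaker units.
    pos = ((t + stream_offset) % period) / period * num_speakers
    gains = [0.0] * num_speakers
    a = int(pos) % num_speakers          # the speaker the stream is leaving
    b = (a + 1) % num_speakers           # the speaker it is approaching
    frac = pos - int(pos)
    gains[a] = math.cos(frac * math.pi / 2)   # equal-power crossfade
    gains[b] = math.sin(frac * math.pi / 2)
    return gains

# Example: the three streams, each offset by a third of the circulation period.
for i in range(3):
    print(speaker_gains(10.0, i * TOTAL_DELAY / 3))
```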

Of course, such a spatialisation of the network of collaboration is but one partial solution among many. Spatialisation of such streamed collaborations allows us to differentiate and observe what's actually happening. Except that the listeners were not really aware of the details of what they were hearing. Visual creatures that we are, we like to see the guitarists spread their legs and rock into their solo -- not stand motionless as waves of sound crash around us. The frontperson explanatory power/visual focus issue arises here too. Where was the stage here? How do we translate some of these metaphors?

For all intents and purposes, co-ordination among the collaborators occurred in a chat. This might be translatable as the stage, or better as the un-miked stage, where the trombone player can swear at the drummer. But this is a necessary part of the staged experience. The miked stage, where the voices come across to the audience (whether intended to or not is another question), needs something else. We attempted to use another chatspace, called fogchat, which allows two-dimensional placement and uses colours to differentiate chatters. We were able to begin using this for the discussion; this will be a further development.

Damn! Too many future developments! But not to worry -- there are a lot of interested and interesting people out there who are into these things and getting on to these problems, so we can look forward to many developments in the next while. This is one place where the spaces of acoustics, the net and physicality cross over, and this makes for an interesting place.

Thanks to the contributors on the night:
Perth: Jeremy Hicks and Malcolm Riddoch (Enargeia), working from the Imago space.
Linz: Martin Greunz and Gerd Trautner (Time's Up and Yuri) in the Time's Up harbourside laboratories.
Los Angeles: Joachin Gossman (CalArts) at the CalArts space.

At the venue, Jesse Gilbert (CalArts) and Tim Boykett (Time's Up) battled with the streams and the matrix mixer.

References:

http://www.timesup.org/closing
http://net.congestion.org/
http://shoko.calarts.edu/~jesse/fogapplet/fog.html
http://www.radioqualia.va.com.au/ctl

-------- ------------------------------------
\ / Tim Boykett mailto:tim@timesup.org
\ / TIME'S UP
\/ Industriezeile 33 B
/\ A-4020 Linz
/xx\ ph/fax:+43/732-787804
/xxxx\ http://www.timesup.org
-------- ------------------------------------