
 The Mobile Augmented Soundscape: Defining an Emerged Genre 

 

 Michael Filimowicz 

 

Abstract

This article surveys past practices of designed systems that have addressed the creative production of soundscape, the 'positive' vector of soundscape activity relative to the 'negative' critique of noise, annoyance and environmental degradation. Thirty such systems, ranging from research prototypes to commercial platforms to mobile apps to artworks, have been identified. These systems propose a range of acoustic design solutions, defining a general possibility space. I propose the term Mobile Augmented Soundscape as an umbrella category for this collection of systems, in order to crystallize a tradition or genre of practice that can inform and guide new configurations through the development of wearable technologies that may wish to address aspects of acoustic ecology.

Keywords
soundscape, systems design, acoustic ecology, augmented audio, virtual soundscape

 

I. An Emerged Genre

It is likely that in just a couple of years' time, the term 'wearable' will be in circulation at a rate comparable to the current use of 'mobile' in everyday language, as the terrain of ubiquitous computing begins to release onto the market eyewear, wristbands, jewelry, clothing and other products with embedded sensor, wireless, app-based and similar technologies that will ultimately compete with mobile devices in the same way that mobile has competed with desktops and laptops in recent years. While it is common to refer to new practices as emergent, in this article I consider what I shall call the mobile augmented soundscape (hereafter MAS) as an emerged genre, or even tradition, of production, toward the aim of reviewing, summarizing, and perhaps consolidating a range of techniques, systems and concerns for the next wave of wearable soundscape technologies. It is possible to identify approximately thirty functioning systems, beginning in the late '90s and actualized as research prototypes, artworks, platforms or products, that together map out the possibilities of mobile applications concerned with our relationship to, and production of, the soundscape. In the next section I present these works in the form of an annotated bibliography, which is summarized in a table. The annotations are not always strictly bibliographic, since I consider some projects that have only websites, rather than archived papers, as references. Also, a few works are included that have been discussed in the articles presented herein. After the section of annotations I offer a discussion of the discursive terrain and design possibility space that can be identified from the collection of works considered here.
The large sample of works available for review and analysis, and the current moment of transition to new forms of mass-mediated and ubiquitous computational media, make this an opportune time to gather together in one place, for other researchers and designers, a resource on mobile systems and soundscape aesthetics, and their discourses and forms of production.

The systems chosen entail a process of selection and rejection that deserves comment. First, it should be clear that what is under consideration is the creation of actual functioning technological systems, rather than theoretical or scholarly reflection and critique. Secondly, in reviewing the range of systems connecting soundscape to mobile technology, three sets of practices were deemed to be outside the scope: 1) systems primarily concerned with 'making music' broadly understood (e.g. generative compositional systems that take music exclusively as their sonic material, however one may wish to define music, including electro-acoustic music), 2) systems primarily oriented to spoken word content, such as platforms for museum and audio tour guides, or for linking oral history to historical sites, and 3) systems designed for specialized users, such as the blind, or children. Thus the works selected connect to a broader range of acoustic ecology concerns, and as will be seen, do not by any means exclude music and voice as sound material. The larger review of literature and works identified numerous projects that would dilute the scope and focus of this research; moreover, music-only or voice-centric systems can well be understood to constitute mobile genres in their own right, and if included here would expand the territory to include almost any mobile application with sound, such as games or personal memo apps. Similarly, systems designed for the sensory impaired, or connected to K-12 learning paradigms, introduce too many disciplinary considerations to be useful here. Instead, I have kept the focus on systems that are interested in the soundscape for general users of mobile devices, and that take up various themes of interest to acoustic ecology.
That being said, I have included a few edge cases for reasons that I will make clear in the annotations that follow, since strict categorization is not always wise or even possible. Also, in the interest of thoroughness and usefulness to other researchers, I have included a couple of websites where the currently available information is not deeply detailed but where there is potential for future updates and contact with other producers in this area.

Finally, I have settled on the term mobile augmented soundscape (MAS) as the umbrella term for this category of system, as it brings together the two dominant discursive fields, acoustic ecology and augmented reality, in relation to mobile technology in the widest sense, and is potentially transferable to wearable design.

 

II. Annotated Bibliography

1. There to Hear

Thulin, Samuel (2011). "There to Hear: Reimagining Mobile Music and the Soundscape in Montreal". Urban Pop Cultures conference paper, Prague, Czech Republic.
Accessed online: http://www.inter-disciplinary.net/critical-issues/cyber/urban-popcultures/project-archives/1st/session-1-the-city-as-subject-in-music-and-cinema/

Thulin discusses three soundscape interventions according to mode of transportation. Specific routes in Montreal, as taken by walking, bus and metro (subway), are treated through related soundscape compositions. Listeners experience the soundscapes as downloaded mp3 files, and an mp3 player is presumed as the mobile listening device. Thulin counters Michael Bull's assertions about public listening to mp3s through headphones as an inherently solipsistic listening practice, invoking Lefebvre's concept of 'rhythmanalysis' and assuming 'headphone porosity.' He also takes issue with David Beer's assertion that headphones cancel out the soundscape. Thulin assumes that headphones do not hermetically seal off the listener from public sound space, and takes such leakage as an explicit counterpoint (rather than mere background) to soundscape composition. Listeners to his soundscapes report many moments of confusion between composed and ambient sound. The intent of the mp3 soundscapes is to provoke listeners' connection to place through headphone slippage. Thulin notes that he used only 1% of the source field recordings from these three routes, but that the unused sounds are by no means 'absent' or rejected, because they will be part of the soundscape 'slippage' or 'leakage' of the headphones. Listeners are to vacillate in their understanding as to what is actual sound 'out there' and what is the composed sound work. The field recordings that are the sources of the composition are taken from the three specific routes of commuter movement in Montreal, so that the work is as much route-specific as site-specific.

2. The Missing Voice (Case Study B)

Audio: http://www.artangel.org.uk//projects/1999/the_missing_voice_case_study_b/download_and_walk/audio
Artist Statement: http://www.cardiffmiller.com/artworks/walks/missing_voice.html

This influential sound narrative by Janet Cardiff was originally composed for Discman technology. The original audio files are available for download or online audition on Cardiff's website, so the work can still be experienced on current mobile technology. Cardiff's narrative traces a specific route through the Spitalfields area of Greater London. Listeners are given route instructions in the narrative itself, which embeds vocal sounds and mixes of field recordings specific to the route. Cardiff states on her website, "Sometimes I don't really know what the stories in my walks are about. Mostly they are a response to the location, almost as if the site were a Rorschach test that I am interpreting. For me, The Missing Voice was partly a response to living in a large city like London for a while, reading about its history in quiet libraries, seeing newspaper headlines as I walked by the news stands, overhearing gossip, and being a solitary person lost amongst the masses. Normally, I live in a small town in Canada, so the London experience enhanced the paranoia that I think is common to a lot of people, especially women, as they adjust to a strange city…. I saw the woman in the story not only as alienated from her self, but also searching for herself through this voice, play-acting, creating false dangers and love affairs, wanting her story dramatized. At the same time, her voiceover, the one that speaks in the third person, removes her from the story, and keeps her at a safe distance." The recordings are made using binaural techniques.

3. Sonic Geographies

http://proboscis.org.uk/sonicgeographies/
http://socialtapestries.net/

A suite of three related works by the artist group Proboscis takes up socio-cultural geographic themes in several media, including some that do not involve sound. Urban Tapestries uses locative wifi technology to allow users to create content. It explores 'public authoring,' and Proboscis has collaborated with various partners from academia, government, civil society and industry to create a range of 'public authoring' works under the heading of Social Tapestries, resulting in an 'anthropology of ourselves' through the communal sharing of sounds, images, texts and video. SoundStreams is a series of alternative soundscapes for commuters, intended to transform habitual journeys through new 'vicarious' experiences of sound. TimeStreams is a collection of downloadable ebooks and short-range radio broadcasts that map soundscapes of chosen areas of London, based on one-hour time increments.

4. Memoryscapes

http://www.aughty.org/pdf/doing_heritage_differently.pdf
Various examples can be auditioned online at http://www.memoryscape.org.uk

These works of oral history include situated interviews, the voices of the subjects foregrounded against the ambient sound of various locations. Toby Butler describes 'multimedia trails' that are historical walks to be undertaken while listening to mp3s of this oral history. The field recordings of the interviewees typically feature the characteristic soundscape (i.e. the interviewees are speaking at site, not in sound isolation booths or sound studios). The interviews are described as 'location based oral history.' Butler is influenced by Marxist historians and documents working class life around the London Docks, which is currently undergoing typical processes of gentrification. Butler is a member of the Geography Department at Royal Holloway, University of London, which situates this work within a discursive terrain of social and cultural geography.

5. Public Bench Cinema

http://www.betseybiggs.org/project/almostgrand

Betsey Biggs' Almost Grand is a work in which participants download an mp3 file and gather at a particular place 45 minutes before sunset, often as a group, then follow a specified route in New York City, along Williamsburg's Grand Street toward the East River. The soundscape mix 'cinematizes' the route to be taken, creating a 'cinematic lull.' The artist writes, "My hope is that Almost Grand heightens the senses and people's connection with the sights and sounds around them, through a soundtrack made primarily with recordings from the area itself."

6. Davos Soundscape

Schacher, Jan C.(2008). "Davos soundscape, a location based interactive composition." NIME08 conference paper, New Interfaces for Musical Expression.
Accessed online: http://www.davosoundscape.ch/NIME08_final.pdf

Created for the Davos Festival, the Davos lakeside promenade is used as a compositional route defining 8 territories for spatial progression through a composed 'open work' (the artists cite Umberto Eco) that blends electronic and field-recorded sounds. The artists also take inspiration from the Situationist notion of the dérive (Debord) and Deleuze and Guattari's notion of de-territorialization. They specify 'semi-open headphones,' which allow a high level of transparency to the actual soundscape, which counterpoints the virtual soundscape. Customized units were given out to tourists over a 3-month period. The technology is a custom unit built on a Linux Gumstix board, running Pure Data anywhere with customized C code, plus a GPS unit. GPS/GIS data flow is used to generate the electronic sounds. Listeners are given agency and compositional interaction, since the score is accomplished through wandering: the itinerary is the composition. An important aesthetic aspect is to encourage listeners to discover sounds through their own exploration on paths stemming from the lakeside promenade, and also to offer various displacements of field-recorded sounds from their original sources.
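
The territory logic of such GPS-driven compositions can be sketched minimally. The following Python sketch (the zone names, coordinates and radii are illustrative inventions, not the Davos composition's actual data) activates a sound layer whenever a GPS fix falls within a territory's radius:

```python
import math

# Hypothetical territory table: (latitude, longitude, radius in meters, sound layer).
TERRITORIES = [
    (46.802, 9.854, 120.0, "lakeside_birds"),
    (46.804, 9.850, 90.0, "electronic_drone"),
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def active_layers(lat, lon):
    """Return the sound layers whose territory contains the listener's position."""
    return [name for (tlat, tlon, radius, name) in TERRITORIES
            if haversine_m(lat, lon, tlat, tlon) <= radius]
```

In a running system a loop would poll the GPS receiver and crossfade layers as the listener wanders in and out of territories; the published paper does not specify the actual trigger geometry, so a circular radius is assumed here.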

7. Sonic Interface

http://v2.nl/archive/works/sonic-interface
http://www.turbulence.org/blog/archives/002672.html

Akitsugu Maebayashi's Sonic Interface is a laptop-based mobile system running Max/MSP patches. Of the works considered here, it is the only one where the live processing of the soundscape is intentionally designed to create a high degree of disorientation and confusion, to the extent that it is also the only work that specifies a second user, acting as a personal guide who helps the listener avoid injury in public spaces! Three types of programs are instantiated: 'Growing Delay' implements expanding delay lines; 'Mosaic' fragments sounds, applying cut-up, re-spliced remixing techniques; and in 'Overlap' the sounds pile up, never completely fading away. The aim is to intentionally decouple seeing and hearing, causing wonder at the surrounding sources of sound.
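
As a rough illustration of the 'Growing Delay' principle (an offline sample-buffer sketch, not Maebayashi's actual Max/MSP patch), the delay time can be made to lengthen as the piece runs, so echoes drift ever further behind the live input:

```python
def growing_delay(samples, initial_delay, growth, feedback=0.5):
    """Mix each input sample with an echo whose delay (in samples) grows over time."""
    out = []
    delay = float(initial_delay)
    for i, x in enumerate(samples):
        d = int(delay)
        # Read the echo from the already-computed output, if far enough in.
        echo = out[i - d] if 0 <= i - d < len(out) else 0.0
        out.append(x + feedback * echo)
        delay += growth  # the delay line expands as time passes
    return out
```

With `growth=0` this is an ordinary feedback delay; a positive `growth` produces the progressive temporal smearing the annotation describes.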

8. Sonic City

http://lalyagaye.com/sonic-city
http://player.vimeo.com/video/39001483?title=0&byline=0&portrait=0

Sonic City, also laptop-based, is an interactive music instrument using the city as an interface. Listeners create a real-time 'personal soundscape' of electronic music by moving through urban environments. While in the main the work is closer to 'ubiquitous music,' being primarily a parameterized interactive-generative electronic music system, its inclusion here is based on its live audio processing of ambient urban sounds, which are mapped to musical parameters (the aesthetic is close to glitch), so that the music in part 'riffs' off the local soundscape, resulting in a "musical duet with the city." This work is also the most ambitious of those considered here in its use of the widest array of worn sensor technology: a mic sensing ambient noise level, a light intensity sensor, a metal detector, a proximity sensor and an accelerometer map local environmental data, in connection with movement through space, to audio parameters in Pure Data. Additionally, a user study was conducted, primarily with non-expert users.
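
Sensor-to-parameter mappings of this kind usually reduce to scaling a sensor reading into an audio parameter's range. A minimal sketch follows; the dB-to-cutoff mapping is a hypothetical example, not Sonic City's published mapping:

```python
def map_range(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map a sensor reading into an audio parameter range, clamped at the ends."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))  # clamp so out-of-range readings stay in bounds
    return out_lo + t * (out_hi - out_lo)

# Hypothetical mapping: ambient noise level in dB drives a filter cutoff in Hz.
cutoff_hz = map_range(72.0, 40.0, 100.0, 200.0, 4000.0)
```

In practice each worn sensor would feed one such mapping (often with smoothing or nonlinear curves) into the Pure Data synthesis parameters.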

9. Tactical Sound Garden

http://rhizome.org/editorial/2006/feb/2/tactical-sound-garden

The Tactical Sound Garden (TSG) is a mobile media platform for cultivating virtual public 'sound gardens' within contemporary cities. It draws on the culture of urban community gardening to posit a participatory environment for new spatial practices and social interactions in which users either 'plant' sounds or 'prune' (edit) sounds planted by others. Using a location-aware mobile phone, participants create virtual sound gardens within a positional audio environment. These plantings are mapped onto the coordinates of a physical location by a 3D audio engine common to gaming environments, overlaying a publicly constructed soundscape onto a specific urban space. Wearing headphones connected to the phone, participants drift through virtual sound gardens as they move throughout the city. TSG is a 'toolkit,' an open source software platform, in which the artists intend to explore "gradients of privacy and publicity" in "shaping the sonic topography," creating an associative overlay of virtual sound in relation to actual places. Anyone with a wifi connection can install a virtual sound garden for public use, and any wifi device (PDA, laptop, mobile phone) can stream a real-time audio mix specific to that location. Two aspects of this work make it an outlier design for MAS: the locative aspect depends on wifi hotspots rather than GPS, and all of the sounds are culled from a pre-existing database of sound files (i.e. an already produced sound effects library of ambient sounds), so the system features neither the generative-interactive (parameterized) processing nor the user-generated field recordings typical of other designs.

10. HP’s Mediascape

http://www.hpl.hp.com/downloads/mediascape

"Developed by HP Labs, mscape is a software suite that enables people to design, play and share mediascapes -- location-based experiences, games and tours. The toolkit allows people to associate physical locations with digital media, such as video, music, images and text. Users equipped with a GPS-enabled mobile device running the mscape player can move through the physical world, triggering media in response to physical events such as location, proximity, time and movement. By blending digital media with gaming, storytelling and the outdoors, mscape offers people of all ages a fun new way to experience their surroundings." Mediascape is unique amongst these works in that it is the only design produced by a major commercial manufacturer of hardware and software systems. The website has downloadable pdfs of a User Guide and a guide to Experience Design, and details the many devices that are compatible with this application.

11. Net_Derive

http://www.ataut.net/site/Net-Derive

This work anticipates wearables by embedding two mobile devices and a GPS unit into a scarf, in the creation of 'multimedia place.' One phone takes photos every 30 seconds and continuously inputs audio. The collected media are geo-tagged and uploaded to a server in a local gallery space. The second phone is an output device, producing a musique concrète-style composition of the acquired sound along with processed visuals. Inside the gallery, visualizations and sonifications of the system are projected onto a 3D satellite map of the local area, and this 'amalgam' is also streamed back out to the mobile users as part of their dérive experience.

12. MARA (Mobile Augmented Reality Audio)

Härmä, A., Jakka, J., Tikander, M., and Karjalainen, M. (2004). "Augmented Reality Audio for Mobile and Wearable Appliances." Journal of the Audio Engineering Society 52(6), 618-639.

This paper details a general Mobile Augmented Reality Audio system design intended for use in a variety of possible application areas. The authors describe what they call a 'pseudoacoustic environment,' in which binaural mics embedded in the headphones produce a real-time simulacrum of the local acoustic environment. The aim of the system is to merge virtual and actual sounds. The discourse is explicitly situated along the Mixed Reality spectrum (AR, VR, AVR, MR, etc.), with the design intended for either mixed reality or augmented audio reality. The article investigates various psychoacoustic issues (e.g. HRTF, real-time spatial cues, impulse response) and explores degrees of indistinguishability between external sounds and virtual events, applying a Turing Test of undecidability, i.e. testing to see whether users can tell the difference between actual and virtual sources in the system's mix. The evaluation tracks users navigating in space for different purposes, explores possible use cases (navigation aid, museum guide, assistive tools for the visually impaired, games, etc.) and considers sensor and locative technologies that can be integrated with the system.

13. SWAF (Soundscape Web Application Framework)

Choe, S. and Lee, K. (2011). "SWAF: Towards a Web Application Framework for Compositions and Documentation of Soundscape." Proceedings of the International Conference on New Interfaces for Musical Expression (NIME11), Oslo, Norway.

The Soundscape Web Application Framework uses a suite of web and audio technologies: Ajax, Flex, PHP, open APIs, Google Maps, SuperCollider, Audioboo. The system combines soundscape creation, research and documentation goals. Murray Schafer's schema (signal, keynote, soundmark) organizes the compositional dimension of this proposed web-based framework, which has to date produced prototypes testing aspects of the overall SWAF conceptual model. The system has been applied to areas of the Seoul soundscape. The article notes other systems that are primarily web-based (not mobile): UK Soundmap, London Sound Survey, Open Sound New Orleans, Sons de Barcelona. This web-based system becomes mobile through the development of apps based on the framework.

14. Jessica Thompson

Thompson, Jessica (2013). "Mobile Sound and Locative Practice." Leonardo Music Journal 23, 14-15. Supplemental files at http://jessicathompson.ca/projects
http://jessicathompson.ca/archives/369

Thompson's artist statement is primarily aesthetic and theoretical in tone; details of the system designs can be found on the artist's website. Projects range from the use of mini amplifiers and lavalier microphones to amplify the sound of the user's body through space, to combinations of MIDI shields, Arduino and GPS modules producing locative sound processing, to more performative works such as Swinging Suitcase, which produces bird sounds through the user's swinging of a suitcase while walking outdoors. James Meyer's distinction between the functional and the literal site is used to theorize performative aspects of her work. The documentation of Thompson's work tends to be light on technical details, focusing more on conceptual concerns.

15. Toozla

http://www.augmentedplanet.com/2010/05/explore-audio-augmented-reality-with-toozla

Toozla is a commercial mobile app for creating location-aware audio tags of historic places. It is aimed primarily at tourists who wish to share and upload their own audio content in connection with sites of travel interest, creating place-specific audio channels. Users of the app can subscribe to streams, which may include promotional content. Content produced by users would likely be rich in the local soundscape, whether as background noise to vocal content or through users' recordings of local places. This user-generated aspect leaves the app open to a variety of uses for geo-located sound, inclusive of but not limited to soundscape production.

16. Inception- the App

http://inceptiontheapp.com/
All dreams reviewed at http://inceptiontheapp.com/dreams#d4

This app is a 'dream machine' that aims to use sound to transform the user's environment into a 'dreamworld.' It is based on Christopher Nolan's film Inception and includes music scored by Hans Zimmer, the film's composer. The designers consider the work to be a work of augmented audio reality, to be auditioned through an iPhone or iPod Touch. Zimmer's goal was to create a different kind of soundtrack from the usual practice of film scoring. Rather than the music being a fixed piece (i.e. downloaded and always the same), the work should be different on each occasion of listening. Toward this end the app parameterizes the music to respond to forms of data or input such as weather, time of day, mood, bodily movement, and other unspecified data that could be obtained through either the iPhone's sensors or the web. The app is organized as thirteen dreams, which can be auditioned. Its inclusion in this list is based on its use of ambient natural field recordings to alter the sense of one's local environment, such as playing the sound of thunderstorms on a sunny day, using the speed of the vehicle one is in to either sustain or 'collapse' the dream, or using granular synthesis to re-process local ambient sound in real time.
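
The granular re-processing mentioned can be sketched in miniature: capture a short buffer of ambient sound, cut it into grains, and reassemble them in a new order. This toy version only shuffles grain order; real apps would add windowing, overlap-add and pitch shifting:

```python
import random

def granulate(buffer, grain_size, seed=0):
    """Cut `buffer` into grains of `grain_size` samples and return them shuffled."""
    grains = [buffer[i:i + grain_size] for i in range(0, len(buffer), grain_size)]
    rng = random.Random(seed)  # seeded for reproducibility in this sketch
    rng.shuffle(grains)
    return [s for g in grains for s in g]
```

Fed with a rolling microphone buffer and short grains (a few tens of milliseconds), this kind of reordering turns recognizable ambience into the shimmering, dream-like texture the app describes.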

17. Mobile Aug Reality Audio System w/ Binaural Mics

Albrecht, R., Lokki, T., and Savioja, L. (2011). "A Mobile Augmented Reality Audio System with Binaural Microphones." Proceedings of Interacting with Sound Workshop: Exploring Context-Aware, Local and Social Audio Applications (IwS ‘11), 7-11.

The system described here is the only one that explicitly discusses the possibility of bone conduction technology for audition, and it uses the distinction between 'mic-through' and 'hear-through' to discuss the possibility of an augmented audio reality system that does not impede natural listening, relative to a headphone-based system. The authors decide on a headphone-based system with built-in binaural microphones, partly for reasons of audio quality, while noting that headphones have the side effect of making one's own voice sound unnatural. The article details a USB-powered system that combines an equalizer and mixer to balance environmental sounds with virtual sounds. The authors take an 'anti-haptic' stance toward AR design (touch draws our attention away from the environment). Much of the article is devoted to psychoacoustic details (ear canal resonances, leakage, notch filters, etc.) and to analysis of user reactions to the system, which is left open to a wide and underspecified range of application areas.

18. Hear and There

Rozier, J., Karahalios, K. and Donath, J. (n.d.). "Hear & There: An Augmented Reality System of Linked Audio." Accessed online at http://social.cs.uiuc.edu/people/kkarahal/HearThereICAD.pdf
Full thesis (2000) at http://smg.media.mit.edu/papers/Rozier/Hear%26ThereThesis.pdf

The earliest system under consideration here, a late-'90s MIT Media Lab project that is also the least mobile (most bulky), uses a luggage cart to integrate a GPS receiver and antenna, digital compass, field recording gear, headphones and a Palm Pilot. The system, an 'authoring toolkit,' has a web-based component allowing users to create 'audio imprints,' i.e. geotagged web-based sound compositions based on their uploaded field recordings, which can be mixed online and auditioned in the environment through web-streaming. Users upload their field recordings at home to create 'audio braids' that can then be experienced as a virtual sound environment by moving through a space. Multiple sound imprints can be linked together to create a path for navigating. The system uses a digital compass to get the heading (facing direction) of a user, a feature similar to Sound Garden (#27, discussed below), which uses a magnetometer to obtain head direction and movement for purposes of spatializing sound based on head position.

19. Walk With Me

Rijswijk, R and Strijbos, J. (2013). "Sounds in Your Pocket: Composing Live Soundscapes with an App." Leonardo Music Journal 23, 27-29.

Walk With Me is an iPhone app of aleatoric music with a 'new modality' aesthetic (i.e. away from complexity and 'difficult listening,' toward greater musical accessibility). The music is designed to be layered onto the acoustic environment, and in part utilizes parameterized effect processing that sometimes uses the phone's mic input for real-time acquisition and alteration of local sound. The designers compare the music composition to gaming, since the score is performed by walking through a virtual geo-tagged soundscape in which GIS-based triggers launch sound events, allowing a degree of user interaction in the music production. The designers invite other composers to create music specific to other cities. Listeners have a 'personalized cinematic experience' of the space they are moving through. This notion that altered soundscapes produce film-like events is shared by other works noted above, such as the Inception app and Almost Grand.

20. Mobile Soundscape Mapping

Droumeva, Milena (2010). "Mobile Soundscape Mapping." Canadian Acoustics 38(3), 106-107.

Droumeva's blog natuarual.com uses blog entries, SoundCloud, Faber Acoustical's dB app and the Recorder app to build a research-oriented, qualitative-analytical annotated resource for 'archetypal ontological urban sound environments.' The article surveys other works, sites and technologies, such as Soundwalks.org, Woices, Audioboo, SoundCloud, Google Maps and audio geotagging, with an interest in promoting a 'ground-up aural culture.' What Droumeva finds lacking in other platforms is a component that adds a research dimension for more in-depth analysis of geo-located soundscape.

21. soundwalks.org

Referred to in Droumeva's article above, this site is not currently live, and is included here as an item of possible historical and research interest. According to Droumeva, "Soundwalks is a tool which collects user-uploaded sounds, organizes them according to an acoustic, semantic and social ontology and thus is capable of resynthesizing a desired soundwalk."

22. Audio Nomad

Helyer, N., Woo, D., Veronesi, F. (2009). "The Sonic Nomadic: Exploring Mobile Surround-Sound Interactions." IEEE Multimedia 16(2), 12-15, doi:10.1109/MMUL.2009.38.

The authors discuss a suite of works, some installations, others involving mobile technology, invoking sound maps, geo-tagging and spatialized audio. The discursive character of the article contains many themes related to social and cultural geography, such as scale, literary inspirations (Laurence Sterne and Frances Yates), oral histories, archival audio and 'geospatial displacement.' The works use a wide variety of sound materials: music, field recordings, oral histories and geo-tagged sounds. Only one of the works discussed explicitly utilizes mobile technology, and it is the only work among those considered here that makes use of surround sound headphones. The technological configuration is only loosely alluded to, so it is not possible to reconstruct in detail the mobile system's actual components, signal flow and integration.

23. Super Realistic Environmental Sound Synthesizer

Innami, S. and Kasai, H. (2011). "Super-Realistic Environmental Sound Synthesizer for Location-based Sound Search System." IEEE Transactions on Consumer Electronics 57(4), 1891-1898.

The authors discuss a new method of sound synthesis, related to physical modeling, in which algorithmic models of a variety of soundscapes are used to reconstruct any desired local soundscape. The system can work either as a stand-alone app or as a web-based server-side application, and in theory can offer a variety of applications a synthesized soundscape based on the specific characteristics of any input locale. The system uses several forms of information to compute its synthesis of a soundscape: actual spatial (GIS) information about places, metadata about sounds, and a database of actual sound samples. Potential use cases are virtual reality or any audio application intended for public spaces. The authors discuss a new 'clustering algorithm' to achieve more efficient computational processing of soundscapes, and discuss methods by which spatial cues derived from knowledge about the world inform real-time processing, e.g. in the production of reflections, reverberations and localization of sound sources as they encounter actual physical obstacles in the world. As of the time of writing, only certain areas of Tokyo have been modeled for this synthesis method.

24. Generative Soundscape System

Schirosa, M., Janer J., Kersten S., & Roma G. (2010). "A system for soundscape generation, composition and streaming." Proceedings of XVII CIM - Colloquium of Musical Informatics.

This research discusses a system that would simplify the creation of rich soundscapes for a variety of applications. The authors aim to create a system that quickly and efficiently generates appropriate soundscapes for VR or AR content that would otherwise require painstaking manual construction, e.g. field recording and mixing by an audio engineer. It would be web-based and usable in applications such as tourist feeds, Second Life, social networks, games, and architectural renderings. The idea is to meet high demands of sound design quality with ease of use by making computational, server-based use of user-generated recordings. The system's components include XML, a GUI, Sound Collider, and KML, an XML-based format for Google Maps data. Graph-based synthesis, sound databases, streaming web interfaces and audio output for multiple listeners are outlined.

25. Viking Ghost Hunt

Paterson, N., Kearney, G., Naliuka, K., Carrigy, T., Haahr, M. and Conway, F. (2013). "Viking Ghost Hunt: creating engaging sound design for location-aware applications." International Journal of Arts and Technology 6(1), 61-82.

The authors describe and evaluate a fully developed prototype of a historical game based on Irish-Viking history, presenting an in-depth study of how forms of signal processing affect user engagement, particularly comparing reverb with sound spatialization; the former proves (through statistical analysis of qualitative reports) more important for producing higher levels of engagement. The application, developed for Android, is a location-aware mobile game that takes place in "a quiet area" of Dublin. The game utilizes role playing, with each user playing the role of a paranormal investigator. The game app's interface contains many sound-motivated elements, such as searching noise spectrums for ghostly electromagnetic vocal spectra, or hearing ghosts that become visualized on the screen. The designers frame the work as a form of Mixed Reality, and there is some desire to occasionally confuse real and virtual sounds, as part of the effects of engagement and immersion under study. The sounds produced are a mix of pre-rendered audio files along with some real-time generative wavelet components.

26. Urban Remix

Freeman, J., DiSalvo, C., Nitsche, M. and Garrett, S. (2011). "Soundscape Composition and Field Recording as a Platform for Collaborative Creativity." Organised Sound 16(3), 272-281.
http://urbanremix.gatech.edu

Urban Remix, developed for iOS and Android, and funded in part by a Google Faculty Research grant, emphasizes community and collaborative engagement and participation in documenting and creating web-based streams of local soundscapes. A mobile phone is used to upload a photo and geo-tagged field recording to a website in order to create soundscape profiles of specific communities. Online tools allow further composing and exporting of soundscapes. Workshops, information sessions and outreach programs are held to introduce communities of users to the application. It is noted that other soundscape applications typically fail to achieve a desired level of sound density in the virtual environment (i.e. enough users uploading suitable amounts of content over wide areas), and so the work stresses the importance of reaching out to specific communities in order to generate sufficient content for a neighborhood. The authors also emphasize the importance of a simple interface that encourages anyone to explore soundscape production and composition.

27. Sound Garden

Vazquez-Alvarez, Y., Oakley, I. and Brewster, S. A. (2012). "Auditory display design for exploration in mobile audio-augmented reality." Personal and Ubiquitous Computing 16(8), 987-999. Originally presented at the Workshop on Multimodal Location Based Techniques for Extreme Navigation at Pervasive 2010, Helsinki, Finland.

A virtual audio sound garden was created in relation to an actual garden in Madeira. Earcons, auditory icons, some speech audio, and other elements are integrated into a virtual sound garden. Users have a mobile phone, headphones, a magnetometer on the headphones that obtains head position and direction, and an external GPS unit for greater accuracy than is typical of mobile devices. There is an in-depth user study with copious statistical analysis describing the effects of 3D spatialized audio (derived from head facing and position). This spatialization of sound is shown to successfully cause users to slow down in their walk through the garden, taking time to explore the mix of actual and virtual space with more immersion and wonder. The user study finds that the head-mounted magnetometer worked well for provoking interest and immersion, moving beyond triggering sounds through entry into GIS proximity zones by localizing sounds through interactive head movements. The authors also discuss the creation of non-circular proximity zones (i.e. zones not defined by a simple radius around a geospatial point) as part of the experience design, so as to better take into account physical features of the landscape, such as hedges, trees or walls.
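The distinction between circular and non-circular proximity zones can be made concrete with a short sketch. The code below is an illustrative assumption about how such triggers are commonly implemented, not the authors' code: a radius test for the classic circular zone, and a standard ray-casting point-in-polygon test for a zone shaped to follow a hedge or wall. Coordinates are treated as a flat local grid in metres rather than raw latitude/longitude.

```python
# Hypothetical sketch of proximity-zone triggering: a circular zone versus
# an arbitrary polygonal zone that can hug physical features like walls.
from math import hypot

def in_circular_zone(x, y, cx, cy, radius):
    """Classic trigger: is the listener within `radius` of the zone centre?"""
    return hypot(x - cx, y - cy) <= radius

def in_polygon_zone(x, y, vertices):
    """Ray-casting test: is (x, y) inside the polygon given by `vertices`?"""
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Count crossings of a horizontal ray cast to the right of (x, y).
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# An L-shaped zone hugging a wall -- impossible to express as one radius.
zone = [(0, 0), (10, 0), (10, 4), (4, 4), (4, 10), (0, 10)]
print(in_polygon_zone(2, 8, zone))   # listener in the narrow arm -> True
print(in_polygon_zone(8, 8, zone))   # listener outside the L     -> False
```

The design payoff described in the paper follows directly: a polygonal zone can stop exactly at a hedge, so a virtual sound is not triggered by a listener standing on the wrong side of a physical barrier.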

28. Audiomobile

http://www.mobilities.ca/portfolio/audio-mobile-2
http://audio-mobile.org/#
http://vimeo.com/30612684

Audiomobile is part of the Sonic Zoom research project at Concordia University, conducted by Owen Chapman. Online, graphical representations of sound files are depicted on a map of the world, featuring geo-tagged field recordings along with a photo, which anyone can upload in either a private or public mode. The project's aim is to encourage greater public interest in, and use of, field recordings as an expressive medium. Uploaded sound files can appear either as points or lines on the website's map, indicating either a static fixed source or a moving 'dynamic' traveling path or route.

29. Impress

Thorogood, M. and Pasquier, P. (2013). "Impress: A Machine Learning Approach to Soundscape Affect Classification for a Music Performance Environment." Proceedings of NIME13.
Accessed online at http://www.academia.edu/3189290/Impress_A_Machine_Learning_Approach_to_Soundscape_Affect_Classification_for_a_Music_Performance_Environment

Impress is a mobile app that both ascertains and predicts affect in the audience reception of live soundscape performance situations. It uses a 2-axis affect grid, derived from previous psychometrics research, that algorithmically associates user feedback and audio signal feature extraction to predict audience affect for improvisors in a real-time performance. Evaluation finds a high level of agreement between actual and predicted affect, though the authors note that audience members' cultural backgrounds and personal associations with sound are not trackable by the audio signal feature extraction processes used. The system is primarily concerned with giving quick visual feedback of likely audience affect to technologically busy and 'in the moment' improvisors to aid with decision-making.
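The paper's actual machine-learning model is not reproduced here; the following is only a minimal sketch of the general idea the annotation describes: mapping a vector of extracted audio features onto 2-axis (valence/arousal) affect-grid coordinates by looking up the nearest audience-labelled example. The feature names, values and nearest-neighbour method are illustrative assumptions standing in for Impress's classifier.

```python
# Minimal illustrative sketch of an affect-grid predictor: map a vector of
# extracted audio features to (valence, arousal) coordinates by
# nearest-neighbour lookup over user-labelled examples. This is a stand-in,
# not the machine-learning model actually used by Impress.
from math import dist

# Labelled data: (feature vector, (valence, arousal)) pairs gathered from
# audience feedback. Features stand in for e.g. loudness and spectral
# centroid, normalised to 0..1.
LABELLED = [
    ((0.9, 0.8), (-0.6,  0.7)),   # loud, bright -> unpleasant, aroused
    ((0.2, 0.3), ( 0.5, -0.4)),   # quiet, dull  -> pleasant, calm
    ((0.7, 0.2), (-0.2,  0.3)),
]

def predict_affect(features):
    """Return the affect-grid coordinates of the closest labelled example."""
    _, affect = min(LABELLED, key=lambda pair: dist(pair[0], features))
    return affect

print(predict_affect((0.85, 0.75)))  # -> (-0.6, 0.7)
```

Even this toy version makes the paper's caveat visible: nothing in the feature vector encodes a listener's cultural background, so two audiences with different associations would receive the same prediction.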

30. Kitefish Labs

http://www.kitefishlabs.com
http://www.terirueb.net/place_names/index.html

"Soundscape composition via GPS tracking for iOS using libpd." SoundScapeTK, an open source toolkit built on the library libpd, is a series of classes for iOS and Objective-C. Composers and other users of the toolkit create a special file with the suffix 'gpson' that maps locations to sound files. The current version uses MP3 files, but other formats are intended for development. Its current feature set is listed as:

- Allows for playback of sounds based on a user's location.
- Uses libpd as its audio engine. While it can be configured to play WAV files, it is currently set up to play MP3s, due to the limiting factors of storage space and app bundle size limits.
- Uses a unique document format (basically, a score) to map locations to sounds and stipulate playback logic.
- Allows for the playback of sounds while the phone is locked: you can put it in your pocket and still experience the soundscape.

SoundScapeTK was used in Teri Rueb and Larry Phan's work No Places with Names: A Critical Acoustic Archaeology, which premiered at ISEA 2012 in Santa Fe, New Mexico: "GPS-based sound walk and sculpture installation explores the concept of wilderness and its shifting meanings across cultural contexts." Major themes of the work are the concept of wilderness, as well as 'landscape and language,' which aligns this work as much with traditional acoustic ecology concerns as with cultural geography.
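The 'gpson' score format itself is not documented in the sources consulted here, so the following is a purely hypothetical sketch of what such a location-to-sound score and its playback logic might look like: a list of named regions, each with a coordinate, trigger radius and sound file, plus a function that returns the files active at the listener's position. All field names and values are illustrative assumptions, not the toolkit's actual format.

```python
# Hypothetical sketch of a 'gpson'-style score: a document mapping
# locations to sound files plus simple trigger logic. The structure and
# field names are illustrative, not SoundScapeTK's actual format.
import json
from math import radians, sin, cos, asin, sqrt

SCORE = json.loads("""
[
  {"name": "arroyo", "lat": 35.687, "lon": -105.938, "radius_m": 40, "file": "arroyo.mp3"},
  {"name": "mesa",   "lat": 35.689, "lon": -105.936, "radius_m": 60, "file": "mesa.mp3"}
]
""")

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def active_regions(score, lat, lon):
    """Return the sound files whose trigger region contains the listener."""
    return [r["file"] for r in score
            if haversine_m(lat, lon, r["lat"], r["lon"]) <= r["radius_m"]]

print(active_regions(SCORE, 35.687, -105.938))  # -> ['arroyo.mp3']
```

In an actual app the returned files would be handed to the audio engine (libpd, in SoundScapeTK's case) rather than printed; the point of the sketch is simply that a score of this kind separates compositional decisions from playback code.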

 

III. Notes on Table

Table 1: http://www.hz-journal.org/n20/filimowicztable.pdf

Most of the column headings of Table 1 are self-explanatory; here I will clarify those that might give rise to confusion in this schematic summary. In some cases it was not possible to ascertain the desired information from the sources, since the texts were of varying degrees of specificity with regard to usage, technology, technique, creative process and so on. I made a basic distinction between how the sound is auditioned (e.g. discman vs. mobile phone vs. general headphone usage) and the material specificity of the sounds themselves (e.g. real-time synthesis, field recordings, interactive software etc.). Some works were created for specific sites, such as neighborhoods in London or commuter routes in Montreal, whereas other works default to a baseline site specificity through geo-tagging of place-specific sounds. In some cases NA designates works that involve neither geotagged sounds nor field recordings that relate to the site of listening. I also wanted to account for the general discursive character of the systems under review by noting any prominent theorists discussed and the main themes addressed by each work. Under genre identification the aim is to broadly situate each work among similar and related works, in ways made explicit in the discussion of the works themselves; in a few cases I was not able to make a determination and so indicated this as NA, since it was not always a concern of the authors to situate their work in relation to genre or to a presumed typology of an art- or design-historical kind.

A few works seemed clearly to be relative outliers to the main line of practice in the mobile augmented soundscape, though for different reasons that would not be clear from the table. Sonic Interface is unique in that it is the only work that specifically recommends a second person to help the listener navigate a soundscape of intentionally produced confusions, so as not to harm themselves in public! Tactical Sound Garden is the only work to refer the user to a database of pre-recorded sound files, e.g. from a sound effects library of general ambient sounds. Urban Remix is the only work to consider the fact that platforms of user-generated content are not likely to achieve the density of content needed for widespread use and social uptake, and so it has built into its paradigm forms of community outreach to create a sufficient number of online sound walks focused on particular neighborhoods where such community gatherings around soundscape production have taken place. Impress is the only mobile technology designed for real-time improvised performance of soundscape composition with the aim of predicting audience affective reception for technologically busy performers. And the iOS SoundScapeTK of Kitefish Labs is the only open source programming utility for advanced coders. Due to the sparse and schematic character of the available information on this toolkit (short of downloading it from GitHub and becoming an expert user of it myself), I have grouped this open source tool together with a specific work that uses it, No Places with Names, which provides further insight into the system's affordances.

Just over half (16 of 30) of the works involve platforms of user-generated or contributed content, while roughly two thirds (21 of 30) involve what I call pre-composed soundscape, meaning sounds that have already been formatted for the listener as an intact compositional entity of some kind. Of these, the majority have been pre-composed by general users, while six works involve the artist/composer as the source of the pre-composed soundscape.

There are varying kinds of real-time processing to be found in these projects, ranging from a level of processing related to GIS (e.g. taking a route as processing a sequence and mix of sounds), to familiar engineering techniques of equalization and reverb, to more DSP-intensive forms of spatialization based on head positioning or abstracting effects and synthesis. The '-' symbol in the Mobile Phone column indicates that the work was not originally produced for the mobile phone (as with the Cardiff work, for example), but is currently available for online download and can be auditioned on mobile devices in its current media afterlife. Half (15) of the works make use of the mobile technology's microphone input, whether as a source for simple field or voice recordings or as an input for real-time processing. The number of works that have an integral web application (e.g. for uploading of user content, or playback of sound walks) is also half (15). 20% (6 works) of the projects are content-neutral, being concerned with the development and evaluation of general technical systems that imply a wide range of potential applications. Another 20% of the works integrate some form of spatialized audio beyond the use of everyday headphones, such as exploring sound localization through tracking the direction one is looking, utilizing surround sound headphones, or performing real-time spatial processing based on GIS-derived environmental data, such as sounds that should reflect off nearby buildings as indicated by Google Maps.

Finally, about a third (11/30) of the projects include some form of explicit design evaluation, ranging from rigorous statistical procedures to qualitative survey and interview data. This is particularly interesting since several authors have claimed that little evaluation research has been performed in this area. For example, Vazquez-Alvarez et al. write, "Very little previous work has carried out systematic and repeatable user experience evaluations in mobile audio-augmented reality" (p. 11). As shown above, there has been a fair amount of what can generally be described as user testing across a range of systems, and much depends on what one means by "systematic and repeatable." It is of course questionable how valuable or necessary such experimental rigour would be, given that there are so many differing kinds of systems and that the technology landscape shifts as much as the summary above indicates. As should be clear from the variety of these systems, extensive (if not expensive) experimental testing is likely to be very specific to a particular and rare instance of any prototype configuration, which diminishes the case for a high level of scientific method in the first place, given the variance of system components. The most rigorous testing is probably best reserved for systems aimed at the widest set of possible users, as is the case with commercial R&D. Thus a review of the differing kinds of evaluation that have been done, inclusive of subjective and qualitative methods, is just as likely to contribute to general design knowledge for MAS as empirical and statistical research designs. There is not room here for such a review, but I have indicated the sources that are available for a subsequent summary of user evaluations.
Similarly to Vazquez-Alvarez et al., Behrendt claims, "overall there has been little research around locative sound" (p. 284). This statement highlights the interesting inter-discursive nature of MAS, since other discourse labels such as 'locative media' or 'ubicomp' may unexpectedly restrict research in online archives. As can be seen above, a fair amount of MAS can indeed also be classified as locative media, though it may take a fair amount of search-term shifting in one's archival efforts to find the relevant systems. Today in most instances the default of digital technology is already to be locative, and one has to dig deep into device settings and preferences in order to disable the locative features! And even then, wifi hotspots and cell phone towers will still create a locative data trail of one kind or another.


IV. Discussion

The works discussed above can be parsed out according to five primary discursive fields: 1) soundscape and acoustic ecology, 2) augmented reality (AR), which is a component of the computer graphics discipline headings of VR (virtual reality), MR (mixed reality), and AVR (augmented virtual reality); 3) for want of a better term, a 'French theory' discourse centered on de Certeau, Debord and Deleuze (psychogeography, dérive, de-territorialization, and 'the practice of everyday life'); 4) social and cultural geography, though in an indirect manner in which these established academic fields are typically not directly named, but where the concern with cartography, scale, social practice and cultural meanings implicitly aligns these works with these discursive domains (this indirectness may be due to the typical institutional situatedness of geography as an applied and empirical discipline, often outside of art, humanities and social science faculty compositions); and 5) empirical systems design and evaluation with an orientation around computer science and/or psychoacoustics.

This clear discursive organization is an especially worthwhile finding in that it can inform interdisciplinary perspectives and research in the mobile soundscape, and perhaps suggests that a reader (a collection of essays culled from these discourse fields) may be a next logical step in this project of delineating the design possibility space of the mobile augmented soundscape, by furnishing original sources for key concepts and themes. By chance the first essay considered points to a discrepancy between dominant humanistic discourses in sound cultural studies (Bull and Beer) and the actual practice of creation and design: many system makers explicitly assume headphone porosity, the leakage and bleed-through of the external soundscape through listening devices, rather than the forms of solipsism and social withdrawal argued for by sound cultural theorists. Additionally, solipsism is not only countered by using the actual soundscape as counterpoint or source material for real-time processing; many of the application areas involve degrees of participatory creation, through such activities as community-generated documents of neighborhood sounds, location-aware gaming, group soundwalks and so on, which in fact add new layers of social interaction through mobile listening devices. This suggests that researchers with humanist and social science backgrounds may be well served by reading more widely in the archive, and that anti-positivist inclinations may tend to impede methodologies that aim to be grounded in concrete material and social practices. In this instance, choosing a focus such as actual systems designed to interact with the soundscape produces a literature that spans the typical qualitative and quantitative, positivist and humanist, art and design, applied and theoretical discourse divides.
Additionally, a focus such as the production of actual systems can shift humanist writing from its usual position of critique (to frame critique in dialectical terms: antithesis to a thesis, or criticism of something established and already posited, such as a social practice or institution) and make it more suitable to synthesis, moving "beyond critique toward contribution" (Hayles, p. 41), from "reading to making" (Ramsay), or toward "critical making" (Ratto), to use some of the language that has emerged from the digital humanities. To be sure, cultural theorists may be inclined to analyze what they consider to be dominant social practices, but in the realm of mobile technology social practice is continuously changing and cross-pollinating. Staring at screens seems more dominant than listening to headphones these days, and given that mobile devices today contain apps as well as files, we cannot really be sure what people are listening to on their mobile devices. With respect to soundscape, the movement from antithesis to synthesis (from critique of an established order or practice toward new systems or practices) also aligns well with the classic concern of soundscape discourses with the positive production of new soundscapes, rather than simply the negative critique of city noise as a form of annoyance or ecosystemic disruption. Each of the works featured here offers a particular design solution to the actual production of new soundscapes.

Interestingly, none of the websites or articles discussed here take up a theoretical orientation around what one can call a 'place vs. space' conceptual dynamic. GIS is a technology of space (i.e. an empty, abstract 'Newtonian' or 'Cartesian' uniform grid of time-stamped three-dimensionality), whereas geotagged field recordings and virtual route-specific, proximity-zoned soundscapes are examples of what can be understood as activities of place-making. While there is much general humanistic discourse around space and place, the makers of actual systems assume no a priori tension, point of departure or rhetorical import in this distinction, and instead utilize the technology of Cartesian spatial representation (whether in orbit around the planet, or as flows of data streams) in a rather straight-ahead, theoretically unburdened manner, toward personal, social and communal forms of virtual place-making through virtual soundscape production.

It is also interesting to note that a few of these works, as a kind of default, assume that the production of virtual soundscape 'cinematizes' real space, transforming lived experience into a filmic dream. The act of interfering with the natural coupling of sound and image suggests to some designers to take this up as an explicit aesthetic, ranging from this soundtrack sensibility to the extreme of aiming for such a heightened degree of confusion as to require a guide at one's side for safety's sake!

Also worth noting is the plethora of systems that integrate user-uploaded content, and the fact that only Urban Remix takes up the design issue that any such system (let alone the fact that there are many of them, which only makes the issue more pronounced) is unlikely to achieve a sufficient density of sound events to be of general widespread interest and use. Many of these works take the notion of community or participatory online content creation simply as an abstract ideal that is seldom matched by any evaluation or discussion of actually achieved goals. Future developments of MAS should take into account the surfeit of systems and the inversely related scarcity of content.

Surprisingly, only one system makes use of a prominent social media platform: SWAF's integration of Twitter. Given this practice's major concern with user-generated content, it is remarkable that Facebook, for example, has not been connected to virtual soundscape production. We don't go to YouTube to see what millions of people have uploaded there; rather, we typically seek out something specific using its search features, or follow a recommended link from another site. The interfaces for user-uploaded field recordings mention neither search engine features nor methods for following links from those we know and whose tastes or interests we trust, in the manner that one typically and actually uses social media.

Schafer's original notion of 'acoustic design' was development at the level of principles of production, not intervention at the one-to-one scale of controlling all the sounds that are a by-product of any physical activity occurring on the earth's surface:


The acoustic designer may incline society to listen again to models of beautifully modulated and balanced soundscapes such as we have in great musical compositions. From these, clues may be obtained as to how the soundscape may be altered, sped up, slowed down, thinned or thickened, weighted in favor of or against specific effects. The ultimate endeavor is to learn how sounds may be rearranged so that all possible types may be heard to advantage – an art called orchestration. The outright prohibition of sound being impossible, and all exercises in noise abatement being consequently futile, these negative activities must now be turned to positive advantage following the indications of the new art and science of acoustic design.

Acoustic design does not, therefore, consist of a set of paradigms or formulae to be imposed on lawless or recalcitrant soundscapes, but is rather a set of principles to be employed in adjudicating and improving them. (Schafer, 238)


As we are mobile in the soundscape, we can augment our acoustic environment simply through our movement through it, so as not to be merely subject to what would otherwise be imposed on us by forces, practices, technologies and systems beyond our control. Whether we author these augmentations ourselves or download the systems of others, Schafer's ideal is increasingly not only within reach, but in our pockets or on our bodies. The coming phases of wearable development can build on these mobile precedents, which have already accomplished much by way of delineating the capabilities and limitations, possibilities and promises of acoustic design.


 

 

REFERENCES

Behrendt, Frauke (2012). The sound of locative media. Convergence: The International Journal of Research into New Media Technologies, 18(3), 283-295.

Hayles, Katherine (2012). How We Think. University of Chicago Press.

Ramsay, S. (2011). "On building." Blog entry. Available at: http://lenz.unl.edu/papers/2011/01/11/on-building.html

Ratto, M. (2011). "Critical making: Conceptual and material studies in technology and social life." The Information Society: An International Journal 27(4), 252-260. Originally presented at the Hybrid Design Practices workshop, Ubicomp 2009, Orlando, Florida.

Schafer, R. Murray (1993). The Soundscape: Our Sonic Environment and the Tuning of the World. Rochester, Vermont: Destiny Books.



 


 

Michael Filimowicz is Associate Dean in Lifelong Learning and a Senior Lecturer in the School of Interactive Arts and Technology at Simon Fraser University. He is founder of the conference and film festival Cinesonika and co-editor of the academic journal The Soundtrack. His research interweaves the phenomenology of mediation and information semiotics toward the development of new forms of audiovisual and multi-modal display. He is also an internationally exhibiting new media artist and has published widely in journals such as Organised Sound (Cambridge University Press), Arts and Humanities in Higher Education, Leonardo, Comparative Literature, and Semiotica.

 

 

 

 


FYLKINGEN'S NET JOURNAL
