This paper explores the possibility of
thinking of the human body as musical instrument. It
builds on the philosophy of phenomenology to discuss
body schemata that might be considered "instrumental"
and discusses the diversity of bodies proposed by body
theory to consider the incorporation of digital
technology. Concepts of embodied interaction from the
scientific field of human–computer interaction are
discussed with an eye toward musical application. The
history of gestural musical instruments is presented,
from the Theremin to instruments from the STEIM studio.
The text then focuses on the use of physiological
signals to create music, from historical works of Lucier
and Rosenboom to recent performances by the authors. The
body as musical instrument is discussed in a dynamic of
coadaptation between performer and instrument in
different configurations of body and technology.
Musical instrument performance solicits
the human body into interaction with an acoustic,
sound-producing object: the instrument. This engages the
performer in forms of corporeal interplay not just with
the instrument but also with the music being played, the
resulting physical sound, and the space in which it is
manifest. More than just a manipulation of mechanical
aspects of an instrument—pressing keys, closing holes,
or exciting strings — this interaction takes on a
visceral dimension. Brass instruments, for example,
offer acoustic resistance back to the player's lips,
helping them with intonation. This interlocking of
acoustic and body physiology takes on a phenomenological
dimension and can be thought of as a cybernetic
human-machine extended system. The idea of an extended
system goes beyond any architecture of technology and
becomes a set of McLuhan-esque "extensions," where the medium and its characteristics, in this case sound, affect us (McLuhan 1964).
Technologies of whole-body interaction
today are widespread beyond music. Game controllers use
cameras and sophisticated computer-vision technology to
detect user movement for gameplay. Mobile phones are
equipped with inertial sensors that detect rotation and
orientation in three-dimensional space, allowing tilting
motions to scroll through lists and other forms of
interaction without touching the screen. These devices,
and the sensors in them, have been used by artists and
musicians to create new musical instruments (Jensenius
and Lyons 2016). But despite the embodied interaction
that these digital technologies allow, do they create
the same tight corporeal coupling that we observe with
the trumpet player?
Physiological sensors internalize the
otherwise exterior nature of movement detection by
directly sensing signals from the human body. Rather
than external sensors (such as gyroscopes) reporting on
the results of bodily gesticulation, biosensors are
electrodes placed directly on the body that report on
the corporeal activity at the source of body movement.
Originally used in analytical contexts in the biomedical
field to study muscle development, atrophy, or gait,
they are increasingly used in interactive contexts to
aid in stroke rehabilitation or even allow prosthetic
limb control. The digitization and miniaturization of
these technologies have allowed them to move out of the medical laboratory and into everyday contexts, including onto the stage in musical performances. Could these technologies of physiological
sensing afford a connection between performer and
digital music system that recalls the visceral acoustic
coupling between performer and acoustic instrument? If
so, can we imagine configurations of such technologies
that would allow the body itself to be thought of as a
musical instrument?
This paper presents the history of
embodied interaction in music, leading up to
physiological sensing in contemporary experimental
musical practice. It draws on theories of the body and
philosophies of phenomenology (Merleau-Ponty 1962) in
proposing body schemata that might be considered
"instrumental." This prompts a definition of
"instrument," musical and otherwise. It also demands an
examination of our relationship to technology in order
to understand what part of the human-machine interface
becomes instrumental. Concepts of technological
embodiment in cultural studies of the body (Hayles 1999;
Haraway 1985) and embodied interaction in the scientific
field of human–computer interaction (Dourish 2004) are
thus considered to provide a cross-disciplinary view on
the issue. The paper ends by proposing two further
notions drawing on musical performance: coadaptation
(Mackay 2000), where the system learns the user while
the performer learns the system; and configuration
(Donnarumma 2012), where the capacity of the instrument
and the performer are interlinked and thus mutually
affect each other.
PROPRIOCEPTION
Proprioception is the mechanism that
allows the body to determine the position of neighboring
parts of the body and the effort exerted to perform a
physical gesture. Schmidt and Lee (1988) describe how
this is made possible by the integration of information
from a broad range of sensory receptors located in the
muscles, joints, and the inner ear. Proprioception is
situated between two other modes of self-perception
described by Merleau-Ponty: exteroception and
interoception. Exteroception organizes tactile
sensitivity to external objects, whereas interoception
organizes the sensitivity to the movement of the body's
internal organs.
According to Merleau-Ponty, "there is
not a perception followed by a movement, for both form a
system which varies as a whole" (1962, 111). Perception
and movement function together, constituting a delicate
balance between intention and performance, between the
movement as intended and as it actually occurs.
Proprioception can be thought of as a kind of
closed-loop motor control mechanism, where the body uses
the senses to compare the desired movement to the
performed motion, assessing a margin of difference. This
information aids in calibrating continuous movement, and
establishes forms of feedback where action and
perception continuously complement one another.
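For readers who prefer a concrete analogy, the toy sketch below illustrates this closed-loop idea in code: an intended position is compared with a sensed position, and the movement is corrected by a fraction of the error on each iteration. It is an analogy only, not a physiological model, and all values are arbitrary.

```python
# Toy analogy only (not a physiological model): a closed loop compares an
# intended position with the sensed position and corrects the movement by a
# fraction of the error, much as proprioceptive feedback is described above.
def closed_loop_step(intended, sensed, gain=0.3):
    """Return a corrective adjustment proportional to the sensed error."""
    return gain * (intended - sensed)

position, target = 0.0, 1.0
for _ in range(10):
    position += closed_loop_step(target, position)  # movement converges on the intent
print(round(position, 3))  # approaches 1.0 as the loop iterates
```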
For Merleau-Ponty, proprioception can
be both conscious and preconscious. An example of
a conscious proprioceptive mechanism is touching the tip of the nose with the eyes closed: one does not learn the position of the nose through sight; rather, it is the sense of proprioception
that provides this information. On the other hand,
preconscious proprioception is demonstrated by a
"righting" reflex, an involuntary reaction that the
human body produces to correct body orientation when
falling or tripping. For instance, when one falls asleep
while sitting on a train, the head repeatedly tends to
fall on one side and the body moves the neck muscles
autonomously to correct the head position. These two
modes of proprioception show that "the body and
consciousness are not mutually limiting, they can only
be parallel" (1962, 124).
Playing musical instruments at first
glance would seem to be a clear example of
exteroception. However, the masterful playing of an
instrument requires more than the tactile sensitivity
for manipulating the mechanics of an instrument as
object. The parts of the body that are not in direct
contact with the instrument, and their position and
movement, can be as crucial (if not more so) to tone
quality and expressive musical phrasing as the body
parts that actuate the mechanics of the instrument.
Control of the diaphragm is fundamental in wind instrument performance, has a direct effect on the sound quality achieved by the performer, and can also enable extended techniques such as circular breathing.
This can be thought of as a case where interoception
becomes an important part of musical instrument
performance. Elbow position of the bowing arm on a
stringed instrument is fundamental not just for
producing good tone but also for avoiding physical
injuries such as carpal tunnel syndrome. This requires a
sense of proprioception for the instrumentalist to be
aware of and adjust the location of limb joints in free
space. With practice, these techniques for musical
instrument performance become increasingly fluid and
intuitive for the performer. Can we think of these as
conscious processes that, through repetition, enter the
preconscious of the musician? If technologies of
physiological interaction might allow the body, in the
absence of a physical object, to become itself the
musical instrument, exteroception may disappear
completely, leaving interoception and proprioception as
direct conduits to electronic sound production.
BODY SCHEMATA
To further understand the superposition
of conscious and preconscious factors determining human
movement in musical instrument performance, it is
worthwhile to look at the notion of body schemata. These
are schemes used by the body to govern posture,
movement, and the use of physical implements. Body
schemata can be thought of as motor control programs.
Merleau-Ponty gives an example by observing the case of
a blind man's white cane. For Merleau-Ponty, the stick
is not an external object to the person who carries it,
but rather, for its owner the stick is a physical
extension of touch. The stick becomes an additional
source of information on the position of the limbs, so
with continuous training it becomes integrated into the
body schema. It is converted into a sensitive part of
the body, or extends the senses, and becomes part of the
blind person's proprioceptive sense and motor programs.
Interestingly, Merleau-Ponty (1962,
145–146) also gives the example of musical instrument
performers, describing the case of the organist. When
rehearsing for a performance with a new organ, the
organist, according to Merleau-Ponty, does not commit to
memory the objective position of pedals, pulls, and
stops. Rather, she incorporates the way in which given
articulations of pedals, pulls, and stops let her
achieve given musical or emotional values. Her gestures
draw "affective vectors" mediating the expressiveness of
the organ through her body. The organist does not
perform in an objective space, but rather in an
affective one. Body schemata constitute "knowledge in
the hands," in the words of Merleau-Ponty. This form of
corporeal epistemology is the basis of "enactive
knowledge" proposed by both the psychologist Jerome
Bruner (1968) and the scientist Francisco Varela
(Varela, Thompson, and Rosch 1992).
In musical performance, body schemata
drive the way the performer physically interacts with
the instrument in accord with the musical or affective
expression that the instrument affords. The instrument
goes beyond the blind person's stick in the sense that
it is more than an extension of perception. It also goes
beyond the intuitive sense Merleau-Ponty's organist has
in moving between different organs of different
dimensions. A musical instrument invokes body schemata
in ways that extend the human body's expressive
potential in the projection of sound. The instrument
becomes an extension in the sense of McLuhan, where a
medium, or technology, shapes the relationship between
us and the world in which we live. Can we, with
electronic musical instruments, go beyond the affective
dimension Merleau-Ponty describes in the organist? If
physiological sensing technologies extend the sonic
expressive potential of the performer's body directly
without an intervening stick or organ, where does the
enactive knowledge reside? In the sound itself?
THE TECHNOLOGICAL BODY
Body theory provides one lens: a cultural analysis of the body. By going beyond
Cartesian mind-body dualism to regard the body not just
in its physiological manifestation, theories of the body
accommodate a plurality of bodies—bodies that can be
natural, social, material, or immaterial (Blackman
2008). Part of this diversity of potential bodies is the body as an entity enmeshed with technology. Haraway (1985) and
Hayles (1999) provide interwoven definitions of what it
means to be human in the increasingly intimate relations
between humans and "machinic" or computational
technologies, with Hayles proposing the notion of the
posthuman body. Haraway analyzes the notion of the
cyborg— a cybernetic organism part human and part
machine—in contemporary science fiction and modern
medicine (1991). In her view, the cyborg blurs the
humanist models of a unitary gender, species, or human
nature by placing humans, living beings, and machines on
the same ontological ground. Haraway conceives of living
(human, animals, plants) and nonliving beings (machinic
and computational technologies) as unbounded entities or
cross-species characterized by a blurring of boundaries.
From there, it follows that the body is not bounded by a
membrane such as skin (the dermis) but that human and
nonhuman bodies meld continuously—they are "taken apart
and put together" in the realization of hybrid living
entities. Hayles's reading of the posthuman is based on
a twofold premise of the meaning of embodiment. On one
hand, embodiment signifies the material instantiation of
an organism in a given context; on the other, it
signifies the information pattern that an organism
yields (1999, 2). In this view, both human being and
computer code can be considered to be embodied. They are
both situated—their existence is meaningful in a
specific context, be it a city or a computer, and they
yield and share information patterns. For Hayles, human
beings and machines share information patterns through
their interaction, and in so doing, they constitute each
other's modalities of embodiment. The posthuman
discourse posits that technologies can be seen not
simply as extending the body, but as hybridizing with
it. This can be achieved at a number of different
levels: through mediating interfaces, or directly through
biomedical technologies. Technologies of human–machine
interaction digitally mediate relationships between the
body and the surrounding environment. The advancement of
biomedical technologies has facilitated the development
of physiological interfaces, making this link more
direct with the lower-level functions of the human body.
This has captured the imagination of musicians, from
making electronic musical instruments that capture the
spatial perturbation of human movement to detecting
lower-level bodily functions as a musical source—perhaps
coming closest to the sense of body as instrument.
EMBODIED INTERACTION
If the broad deployment of new human
interface devices (including the computer mouse)
represented the development of computer–human interaction in the 1980s, the arrival of interfaces like gloves and motion capture systems in the 1990s brought interest in whole-body and embodied interaction.
The exponential increase in processing speed, and the
possibility to render digital media in (or faster than)
real time, changed not only the forms of interaction
possible with computing machines but also the contexts
in which they were used.
Embodied interaction, for Dourish
(2004), is one in which computers go beyond
metaphorical, representational connections to the physical world and begin to function in ways that draw
on how we as humans experience the everyday world.
Embodiment seen from this perspective is not just
corporeal but also reflects states of participation in
the wider world. This parallels, perhaps, the expansive
conception of the body we noted in the previous section.
In outlining the challenges of embodied human–computer
interaction, Dourish draws on philosophies of
phenomenology, of Heidegger and Merleau-Ponty, again
questioning Cartesian mind/body dualism. He uses a
phenomenological framework to look at examples of
ubiquitous, tangible, and social computing, offering a
theoretically informed analysis of the otherwise
technologically deterministic development of the
technologies that now constitute the digital nature of
contemporary life.
Dourish evokes Heidegger's term "Being in the world" as a nondualistic way to think
about our interaction with digital technology. He starts
with the simple notion of familiarity and the tendency
of tangible and social computing to exploit familiar
real-world situations and metaphors to set the scene
for his definition of embodied interaction. If we
inhabit our bodies, which in turn inhabit the world
around us, then computing systems, for Dourish, should
not just present metaphors for interaction but also
become media of interaction. He continues by drawing on Heidegger's notions of zuhanden and vorhanden ("ready to hand" and "present at hand") to think about the transparency of computer interfaces, considering whether they are the object of attention or a medium facilitating another action. If a computer
interface, such as a mouse, becomes the object of our
attention in an activity, it is vorhanden (present
at hand). If, on the other hand, the device itself
becomes transparent, and we act through the device as an
extension (in the McLuhan-esque sense) of our body in
realizing an action, it is zuhanden (ready to
hand).
The application of Heidegger's concepts
to human–machine interaction is potentially fruitful
when extended to music and embodied interaction with
musical instruments, be they technological or not.
Learning a musical instrument (or any new piece on an
instrument) typically requires an investment of time,
and concentration on its mechanics, posture, and
fingerings. This can be thought of as vorhanden
— focus on the object. After mastery of an instrument or
with increasing confidence in a particular piece,
accomplished performers describe "speaking" through the
instrument, where they become just the vehicle through
which the music speaks. These situations, we claim, are
the transfer of instrument as interface from vorhanden
to zuhanden. Is this the affective
sense of Merleau-Ponty's organist? Could this be a
useful way to consider the potential of physiological
sensing technologies to turn the human body itself into
an instrument?
THE BODY IN ELECTRONIC MUSIC
Despite the potential of electronic media to extend the human capability for producing music, electronic and computer music, at first
glance, have traditionally not implicated the human body
in visceral ways. Large recording studios and modular
electronic music synthesizers full of racks
interconnected by patch cables are not portable in ways
that we assume musical instruments need to be. Computer
music programming languages, despite recent movements
such as "live coding," were not originally conceived for
physical performance. We tend to think of these
technologies of music production, be they analog or
digital, as disembodied technologies. However, Hayles's
notion of sharing information patterns indicates that
things may not be so simple. Indeed, the broad
assumption that digital tools for music production are
disembodied is immediately proven false in uncovering
the history of gestural and embodied electronic music.
Clara Rockmore performing on
the Theremin in the 1930s
One of the earliest, and most iconic,
instruments in the history of electronic music is
gestural. The Theremin, invented by the Russian
scientist Lev Termen (Leon Theremin) in 1920, used two
antennas and electrical circuitry converting electric
field perturbations into musical tones to create an
electronic musical instrument that was played by waving
the two hands. Similar instruments, such as the Ondes
Martenot and the Trautonium, were also invented in the
early twentieth century (Hopkin 1997). This approach to
detecting performer gesture for live electronic
performance continued in the 1980s. Michel Waisvisz conceived of the instrument The Hands, along with other similar instruments, at the Studio for Electro-Instrumental Music (STEIM) in Amsterdam. In 1991
Laetitia Sonami combined early glove-based technologies
from first-generation virtual reality and early video
gaming with domestic utility and fashion imagery to
build a series of instruments called the Lady's Glove. Both The Hands and the Lady's Glove predated
consumer electronics equipped with motion capture and
used custom sensor electronics to transform performer
movements into Musical Instrument Digital Interface
(MIDI) data, allowing the performer on stage to
articulate and sculpt electronic and digital sound
synthesis (Krefeld and Waisvisz 1990).
Michel Waisvisz performing on The
Hands
Laetitia Sonami performing on
the Lady's Glove
This early work has spawned communities
of practice in the area of new instrument building, or
digital musical instruments (DMIs). The research area
that encapsulates these practices is the field of new
interfaces for musical expression (NIME), where
interactive technologies are used to build electronic
music devices that could be performed live, through
gestural interaction, in an instrumental manner.
However, for the most part, the instruments produced
are, like traditional instruments, external objects for
the performer to hold and manipulate—instruments through
which music is performed.
Alvin Lucier performing Music for Solo Performer
In 1965, the composer Alvin Lucier met
the scientist Edmond Dewan, who was conducting research
on electroencephalogram (EEG) signals in relation to
aircraft pilots' epileptic fits. Using hospital
electrodes on a headband and analog circuitry to amplify
alpha waves resulting from relaxed, meditative states,
Lucier created a seminal work of biomusic, Music
for Solo Performer. In the piece, the performer
sits center stage wearing sensors around the head. The
output of the brainwave amplification circuits drives
loudspeakers connected to percussion instruments,
exciting them and causing them to sound, following the
alpha rhythm (8–13 Hz) of the brainwave production of
the performer. Around the same time, in the early 1970s, David Rosenboom also worked with brainwaves, in his case connecting the signal from the brain into the signal path of an analog modular synthesizer (Rosenboom 1976).
Did these works and experiments turn
the human body into a musical instrument? Or did they
begin to explore other experiential notions for the
performance of music that did not depend on volitional
acts of an instrumentalist?
DIGITAL BIOSIGNAL INTERFACES FOR
MUSIC
The miniaturization of transistor
electronics in integrated circuits along with the
increasing practicality of analog-digital conversion and
availability of digital signal processors such as the
Motorola 56000 brought about a step change in
physiological interface development in the late 1980s
and early 1990s. While prototyping was not yet as fast or convenient as with present-day platforms like the Arduino, nor as flexible as with programmable microcontrollers, advances in electronics did allow engineers and small companies to create custom digital biosignal interfaces for the arts.
The adoption of MIDI as a
communications bus for synthesizers provided a standard
for intercommunication between musical devices, and a
data format for the representation of performer
actions—discrete actions in the form of note on/off
messages, and gestures in the form of continuous
controller messages. Early digital biosignal interfaces
for the arts included the BioMuse, the BodySynth, and
the IBVA. The BioMuse was created by Ben Knapp and Hugh
Lusted, researchers at Stanford University's Center for
Computer Research in Music and Acoustics (CCRMA), and
was first shown publicly in 1990. It was a digital signal processing (DSP)-based biosignal-to-MIDI interface that read biosignals from the brain (electroencephalogram, or EEG), eyes (electrooculogram, or EOG), and muscles (electromyogram, or EMG) to output MIDI based on user-definable mappings (Lusted and Knapp 1988). The
BodySynth, created in 1991 by Chris Van Raalte and Ed
Severinghaus, was an EMG music interface used by
performance artists such as Pamela Z and Laurie
Anderson. The Interactive Brainwave Visual Analyzer
(IBVA, 1991) was created by Masahiro Kahata and was an
early low-cost, portable digital EEG device for art.
More recently, in order to provide an alternative to the MIDI gestural interfaces already described, Donnarumma created the open-source instrument Xth Sense (2011).
This detects mechanomyogram (MMG) signals through a
compression cylinder and microphone, allowing biosignals
to interface with computer systems through audio
interfaces.
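To make the shared control paradigm of the early MIDI-based interfaces concrete, the sketch below shows one plausible way a raw EMG stream could be rectified, envelope-followed, and rescaled to the 0–127 range of a MIDI continuous controller. The function names, smoothing constant, and scaling are illustrative assumptions, not the firmware of the BioMuse or any of the other devices described.

```python
# A minimal sketch (not the BioMuse firmware or any specific device) of the
# biosignal-to-MIDI paradigm described above: a raw EMG stream is rectified,
# envelope-followed, and rescaled to a MIDI continuous controller (0-127).
import numpy as np

def emg_envelope(raw_frame, prev_env, smoothing=0.9):
    """Full-wave rectify one frame of raw EMG and smooth it with a one-pole filter."""
    level = float(np.mean(np.abs(raw_frame)))
    return smoothing * prev_env + (1.0 - smoothing) * level

def envelope_to_cc(env, env_min=0.0, env_max=1.0):
    """Map a smoothed envelope onto the 0-127 range of a MIDI controller."""
    norm = (env - env_min) / (env_max - env_min)
    return int(np.clip(norm, 0.0, 1.0) * 127)

# Example: simulated frames of increasing muscle tension raise the controller value.
env = 0.0
for frame in np.abs(np.random.randn(20, 64)) * np.linspace(0.0, 1.0, 20)[:, None]:
    env = emg_envelope(frame, env)
print(envelope_to_cc(env))  # controller value rises with sustained tension
```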
With the exception of the last example,
these devices can be contextualized in the era from
which they came—early digital technologies, MIDI-based
control paradigms in electronic music, first-generation
virtual reality, and human–machine interface research.
Interest in new interfaces for computing accompanied the
early success of the Apple Macintosh and the mouse in
1984. While the computer mouse had been invented in 1964
and had been used in specialist systems in the 1970s, it
was its mass appeal and ease of use in a personal
computer that captured the public imagination for new,
engaging forms of interaction. The late eighties also saw the first wave of virtual reality technology, which included peripherals like head-mounted displays and
glove-based interfaces. There was an interest in all
things virtual, with interfaces allowing the user to
navigate these parallel, digital "realities." In this
spirit, biosensor musical interfaces became the basis
for possible "virtual instruments." In this case, the
question was not whether the body itself became
instrument, but whether, thanks to sophisticated human
interface technology, it allowed the musician to create
virtual instruments that were immaterial and purely
digital.
The human–machine interface paradigm in
this time categorized computer peripherals as input
devices or output devices. The former were keyboards and mice; the latter, screens, speakers, and
printers. Following this paradigm, these musical
interfaces were input devices—interfaces meant to
capture human action to translate as an input to command
a computer system. The conception of MIDI as a protocol
for music also paralleled this control paradigm, in a
master/slave configuration. Seen from this perspective,
new interfaces for music were thought of as controllers
with which to dictate digital sound production by way of
human action. We explore in what follows how this
controller paradigm has evolved and changed in the time
since.
Kagami ("Mirror" in
Japanese) was the first concert piece for BioMuse,
composed and performed in 1991 by Atau Tanaka (see
Tanaka 1993). It used two channels of electromyogram
(EMG — a stochastic series of electrical pulses
resulting from neuron spiking), one on each forearm to
track muscle tension of the performer. The BioMuse
performed envelope following on the raw EMG signal and
translated muscle tension intensity to continuous MIDI
controller values. This transformed the performer's body
into a MIDI instrument to control digital synthesizers.
Instead of the dials, faders, and ribbons that typically
generated MIDI continuous control data, the BioMuse
captured concentrated, free-space arm gestures. The MIDI
data reflecting muscle tension of the performer was
mapped to different parameters of frequency modulation
(FM) and waveshaping vector (WS) sound synthesis on
Yamaha and Korg synthesizers. A score, in the form of
interactive software, ran on an onstage computer. The
software determined the mapping assignment as well as
ranges—specifying which synthesis parameter would be
controlled by which muscle and how the MIDI control
values, ranging from 0 to 127, would be mapped to
salient musical values. Oscillator values might be
specified as frequency in hertz or in MIDI values for
diatonic notes. Modulation and waveshaping indexes would
operate on a different set of ranges. In addition to
defining controller mappings, the score software set the
structure of the piece—the sequence of mappings that
would be performed by the musician. Rather than imposing
a fixed timeline, each section of the piece was invoked
by combinations of muscle gestures. The score would set
muscle tension thresholds as triggers. Certain
combinations of triggering across the two arms (one
followed by another in time, or the two simultaneously)
would trip the triggers, advancing the composition to
the next section, setting parameters and sequences of
melodies which would then be sculpted by the mapped
controller data. Building on the idea of sound as mirror
for the body, and drawing on the Japanese reference in
the title, the sounds of the piece used synthetic
approximations of evocative sounds, starting with low
throat-singing voices, punctuated by taiko drums,
leading to rapid melodies of bell tones, finishing with
siren-like breaths and odaiko percussion.
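As a rough illustration of the score logic described above, the sketch below shows how muscle-tension thresholds could act as triggers and how a combination of triggers across the two arms could advance the piece to its next section. The class, threshold value, and timing window are hypothetical; this is not the original Kagami score software.

```python
# An illustrative sketch (hypothetical, not the original Kagami score software)
# of threshold triggers across two arms advancing the piece to its next section.
THRESHOLD = 100          # MIDI CC value (0-127) above which an arm "fires"
TRIGGER_WINDOW = 5       # maximum number of frames between the two arms' firings

class SectionSequencer:
    def __init__(self, n_sections):
        self.section = 0
        self.n_sections = n_sections
        self.last_fire = {"left": None, "right": None}

    def update(self, frame, left_cc, right_cc):
        """Register threshold crossings; advance when both arms have fired,
        either one after the other or simultaneously, within the window."""
        if left_cc > THRESHOLD:
            self.last_fire["left"] = frame
        if right_cc > THRESHOLD:
            self.last_fire["right"] = frame
        left, right = self.last_fire["left"], self.last_fire["right"]
        if left is not None and right is not None and abs(left - right) <= TRIGGER_WINDOW:
            self.section = min(self.section + 1, self.n_sections - 1)
            self.last_fire = {"left": None, "right": None}  # re-arm the trigger
        return self.section

seq = SectionSequencer(n_sections=6)
seq.update(0, left_cc=110, right_cc=20)   # left arm fires
seq.update(3, left_cc=30, right_cc=120)   # right arm fires soon after -> section 1
```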
RECENT PERFORMANCE PRACTICE
In this section, we describe three
works by the authors that parallel this evolution. They
respond not just to developments in technology, but
demonstrate conceptual shifts in the relation of the
performer's body and technology in musical performance
practice.
Atau Tanaka performing Kagami
on the BioMuse in 1993 at V2_,
Rotterdam.
Photo: Jan Sprij
Ominous (2013) is a sound
sculpture generated and manipulated live by a performer.
In this piece, created by Marco Donnarumma, music is
produced in real time using the mechanomyogram (MMG, acoustic vibrations emitted by muscular tissue). A computer program digitizes the raw signals in the form of audio input and makes them available for live sampling.
The piece is based on the metaphor of an invisible
object in the player's hands that is made of malleable
sonic matter. Similar to a mime, the player models the
object in empty space by means of whole-body gestures. A
column of red light illuminates the performer's hands.
The muscle sounds produced by the contractions of the
performer's muscles are amplified, digitally processed,
and played back through a circular array of eight
subwoofers and eight loudspeakers. The MMG signal of the
left bicep flows through a four-stage digital signal
processing (DSP) system, whose parameters are driven by
synced contractions of the right forearm muscle, with
each DSP stage sending its resulting signal to one of
the loudspeakers. This creates a multilayered sound
where disparate sonic forms can be precisely shaped by
coordinating and fine-tuning whole-body gestures that
address one or more DSP stages at once. The interplay
between instrument and performer relies not only on the
MMG sonification but also on strategies of interaction
that include extracting expressive features from the
muscle sounds, mapping dynamically those features to DSP
parameters, and composing the piece sections in real
time using neural networks. The MMG signals from the sensors on the forearms are analyzed for high-level features, such as the abruptness, subtlety, or rhythm of the player's movement. According to these features, the muscle sounds
are digitally processed, played back, and spatialized. A
neural network compares the stream of muscle sound
features with patterns it has learned offline, detects
the player's current muscular state, and subsequently
loads a new set of mappings and activates or deactivates
specific DSP chains, effectively changing the gesture
mapping definitions throughout the performance based on
the performer's dynamics. Together, the natural muscle sounds and their digital and virtual extensions blend into an unstable sonic object. As the listeners
imagine the object's shape by following the performer's
physical gestures molding the red light, a kind of
perceptual coupling enables listeners to hear and feel a
sculpture that cannot be seen.
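The following sketch illustrates, in simplified form, the interaction structure described for Ominous: coarse expressive features are extracted from an MMG frame and matched against learned prototypes to select a mapping set. The feature definitions, state names, parameter names, and the nearest-prototype classifier are illustrative stand-ins for the offline-trained neural networks used in the piece.

```python
# A simplified sketch of the interaction structure described for Ominous.
# The feature definitions, state names, and nearest-prototype classifier are
# illustrative stand-ins for the offline-trained neural networks in the piece.
import numpy as np

def mmg_features(frame):
    """Extract coarse expressive features from one frame of MMG audio."""
    energy = float(np.mean(frame ** 2))                  # overall effort
    abruptness = float(np.max(np.abs(np.diff(frame))))   # sharpness of onsets
    return np.array([energy, abruptness])

# Prototype feature vectors for two hypothetical muscular "states".
STATE_PROTOTYPES = {"sustained": np.array([0.40, 0.05]),
                    "percussive": np.array([0.10, 0.60])}

# Each detected state loads a different set of gesture-to-DSP mappings.
MAPPING_SETS = {"sustained": {"left_bicep": "granular_density"},
                "percussive": {"left_bicep": "filter_cutoff"}}

def classify_state(features):
    """Nearest-prototype stand-in for the trained classifier."""
    return min(STATE_PROTOTYPES,
               key=lambda s: float(np.linalg.norm(features - STATE_PROTOTYPES[s])))

frame = np.random.randn(512) * 0.1                 # stand-in for one MMG frame
state = classify_state(mmg_features(frame))
active_mappings = MAPPING_SETS[state]              # mappings change with the state
```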
Ominous by Marco Donnarumma
Incarnated Sound Sculpture
(Xth Sense Biosensing Wearable Technology)
Myogram (2015) is a recent
EMG work by Tanaka, and uses a commercially available
EMG interface. Two such devices (one on each forearm)
each report 8 channels of EMG, providing 16 total
channels across the two arms, giving a relatively
detailed differentiation of muscle groups around the
forearms that are invoked in manual gestures such as
wrist rotation, hand flicking, and finger movement. In Myogram,
the raw EMG data is heard through a process of direct
sonification, making musical material out of the
corporeal signal where electricity generated by the body
provides the sounds heard in the work. The signals from
pairs of EMG channels are routed to an individual
speaker in an octaphonic sound system. The sensors
reporting muscle tension on the ring of muscles on the
left arm are heard on four speakers in the corners of
the wall, stage left, from the front of the house to the back of the concert hall, and from below the stage up to the ceiling.
Likewise, the eight EMG channels on the right arm are
routed to four speakers in the corners of the wall,
stage right. By making rotating gestures of the wrists,
the muscles in the perimeter of the forearm are
solicited in circular sequence. This is heard in the
concert hall as spatial sound trajectories of neuron
spikes projected in the height and depth of the space,
with lateral space divided in the symmetry of the body.
Through the composition of the work, these raw
physiological signals are subjected to different signal
processing treatments. Low pass filters (LPFs) first
tune the contour of the sound to de-emphasize the
high-frequency transients of the neuron spikes to focus
on the stochastic low-frequency fundamental. Ring
modulators allow sum and difference frequencies of the
EMG data relative to reference frequencies to be heard.
Resonators set up resonant filters tuned to specific
frequencies, which are excited by the EMG data.
Frequency shifters transpose the raw muscle signal data
by musical intervals. The performer responds to these
different sonic contexts for the sound generated by his
own body by gesticulating in space to create
physiological signal output that "plays" the space, the
filters, and resonators.
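A rough sketch of two elements described above for Myogram is given below: summing pairs of the 16 EMG channels into the eight speaker feeds, and ring modulation producing sum and difference frequencies against a reference oscillator. The channel-pairing scheme, sample rate, and reference frequency are illustrative assumptions rather than the actual performance patch.

```python
# A rough sketch of two elements described for Myogram: summing pairs of the 16
# EMG channels into eight speaker feeds, and ring modulation against a reference
# oscillator. Channel pairing, sample rate, and frequency are assumptions.
import numpy as np

N_CHANNELS, N_SPEAKERS, SR = 16, 8, 44100

def route_to_speakers(emg_block):
    """Sum each adjacent pair of EMG channels into one speaker feed.
    emg_block has shape (16, n_samples); the result has shape (8, n_samples)."""
    return emg_block.reshape(N_SPEAKERS, 2, -1).sum(axis=1)

def ring_modulate(signal, reference_hz, sr=SR):
    """Multiply the muscle signal by a reference sine, producing the sum and
    difference frequencies described in the text."""
    t = np.arange(signal.shape[-1]) / sr
    return signal * np.sin(2.0 * np.pi * reference_hz * t)

block = np.random.randn(N_CHANNELS, 1024) * 0.01   # stand-in for raw EMG samples
speaker_feeds = route_to_speakers(block)           # shape (8, 1024)
speaker_feeds = ring_modulate(speaker_feeds, reference_hz=220.0)
```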
Myogram
by Atau Tanaka 2015
CONTROL, COADAPTATION, AND
CONFIGURATION
This range of work, from the historical
works of the analog era in the 1960s to today's
digital era, demonstrates our changing relationships to
technology, notions of the interface, and ultimately
evolving visions of the body. The early work of Lucier
and Rosenboom sought to track the performer's state and
reflect that in the music. By tracking involuntary
action, they focused on forms of biofeedback, reflecting
the spirit of the sixties era in which these works were
conceived. Other works from the time that exemplify this
spirit include John Cage's Variations VII,
performed at Experiments in Art and Technology
(E.A.T.)'s seminal 9 Evenings series in 1966. In this
piece, Cage created a large-scale work of an environment
of multiple sound sources—live telephone lines of the
city, radio, and space signal receivers, for performers
to "tune in." This list of sound sources includes what
Cage described as "Body" to join the sonification of
communication waves and urban activity. David Tudor's Rainforest (1968), while not explicitly using body
signals, perhaps demonstrates the environmental links
imagined at the time between technology and the
surrounding world. In Rainforest, sound
transformations, of the sort inspired by electronic
music, were realized without the use of electronics and
instead by transmitting sound sources through resonant
physical materials. Both look at forms of indeterminacy
as a way of "being in the world."
In contrast, the work of the 1990s
reflects the human-interface era. At that time,
significant advancements were made in computer science
in the elaboration of novel input devices. This
zeitgeist influenced the development of DMIs of that
era, many of which were conceived of as new musical
controllers. Bio-interfaces, including the BioMuse and
Bodysynth, were thus conceived of as control inputs to a
computer music system, designed to allow a performer to
control musical output through corporeal activity. There
is a shift in paradigm, from biofeedback in the 1960s to
biocontrol in the 1990s. Kagami, as described
earlier, represents this era. In this regard, the
proposition of body-as-instrument in this work plays
into a technicity of electronic music, afforded by the
MIDI communications protocol, of networking and
interoperation of synthesizers. The MIDI specification
was conceived on a master/slave model, where the default
master controller imagined was a piano keyboard. The
MIDI controllers built on other instrumental metaphors,
such as woodwind or guitar controllers, were dubbed
"alternate controllers." The BioMuse, as physiological
MIDI device, proposed the body as MIDI controller,
ostensibly with the richness of biosignals opening up
new possibilities for synthesizer performance. While the
concept of body as instrument in this era sought to make
these relationships more visceral and organic than the
coldness of button presses and slider moves, it remained
limited to a unidirectional control metaphor, one of
first-order cybernetics (Wiener 1948).
Myogram moves away from
control to a form of coadaptation between body and
sensitive system. The system in its transparency allows
the body in its pure state to be heard. By filling the
concert space with sound in abstracted morphogenic
dimensions — the left and right walls reflecting the two
halves of the body, respectively — the gestures of the
performer enter into interaction with the space of the
concert hall. This creates a coadaptation of instrument
and performance venue in spatial terms, amplifying the
topology of the body, but also creating a direct
interaction between the performer's body and the
acoustic space shared with the audience. The system
becomes less transparent as the piece progresses, taking
on resonant characteristics. To play these resonances
musically, the performer must find the best gestures to
enter into resonance — the "wrong" gesture might create
excessive feedback or not excite the system at all,
depending on the propagation of sound waves in the
specific dimensions of the hall. By playing the space,
and the interactive signal processing system, the body
as instrument is extended to integrate the acoustic
space inhabited by both performer and audience.
In Ominous, the continuous
changes of the player's physiological state and the way
in which the instrument reacts to those changes are both
crucial to the performance. It is a relationship of
configuration, where specific properties of the
performer's body and those of the instrument are
interlaced, reciprocally affecting one another. The
gesture vocabulary, sound processing, time structure,
and composition are progressively shaped, live, through
the performer's effort in mediating physiological
processes and the instrument's reactions to, and
influence on, that mediation. The unpredictability of
this relationship makes the performance both playful and
challenging. The music being played is not digitally
generated by the instrument nor fully controlled by the
performer. Instead, the music results from the
amplification and live sampling of the player's bodily
sounds. As a performance strategy, it blurs the notion
of control by the player over the instrument,
establishing a different relationship between them, one in
which performer and instrument form a single
technological body, articulated in sound and music.
Here, an expanded definition of the body parallels the
extended notion of the instrument.
CONCLUSIONS
This paper presented the idea of the
body as instrument as a notion relying on a multifaceted
and cross-disciplinary set of resources, ranging from
cultural studies of the body to human–computer
interaction, from phenomenology to musical performance.
The idea of the human body as musical instrument is shown to be malleable, changing across cultural contexts and technological advancements. The
understanding of the body as instrument varies according
to both the degree of technological intervention and the
aesthetic concerns of individual artistic practices. The
mechanisms of proprioception, the functioning of body schemata, and the broader understanding of the phenomenological basis of human embodiment show the human body as inherently open to becoming, integrating with, or combining with an instrument. Physiological technologies
for musical performance hold a potential for the body to
viscerally interact with machines. This creates
interactions between the performer and system, performer
and space, and performer and audience, as forms of
musical expression that are embodied, adaptive, and
emergent. This enables novel ways of exploring sound and
space, linking sonic stimuli and spatial resonances with
the physiological processes of a musician in
performance.
From feedback to control, coadaptation
to configuration, the potential for body as musical
instrument has evolved in a half century of practice.
These developments are technological and conceptual, and
intertwined. The evolution reflects different levels of
integration and fusion of body and technology, but the
trajectory is not a linear one. The work of the sixties
reflects an ideal of integration through feedback where
the technology is "ready to hand," or a transparent
vehicle between body and environment. The interfaces of the nineties, though conceived with an intention to master technology, belie a greater distance between body and technology.
The control interfaces are "present at hand," objects
focusing attention to carry out a musical task. The body
as instrument is technical, cybernetic, and embedded in a musical culture of virtuosity. The recent work extends the boundaries of the body to include the concert space, and imagines editable configurations of body and semiautonomous signal processing. This reflects forms of
second-order cybernetics (Glanville 2002), where an
organismic view replaces system architecture, and where
noise and positive feedback temper entropy. By proposing
an expanded body that encompasses technology and space,
these last examples respond to the possibility of the
posthuman body, proposing hybridized bodies as
instruments beyond the extent of the unitary corporeal
body.
REFERENCES
Blackman, Lisa. The Body (Key Concepts). Oxford: Berg, 2008.
Bruner, Jerome. Processes of Cognitive Growth: Infancy. Worcester, MA: Clark University, 1968.
Dourish, Paul. Where the Action Is: The Foundations of Embodied Interaction. Cambridge, MA: MIT Press, 2004.
Glanville, Ranulph. "Second Order Cybernetics." In Encyclopedia of Life Support Systems, edited by Georges Feller and Charles Gerday, 59–86. Oxford: EoLSS Publishers, 2002.
Haraway, Donna. "Manifesto for Cyborgs: Science, Technology, and Socialist Feminism in the 1980s." Socialist Review 80: 65–108, 1985.
Haraway, Donna. "A Cyborg Manifesto: Science, Technology, and Socialist Feminism in the Late Twentieth Century." In Simians, Cyborgs and Women: The Reinvention of Nature, 149–181. New York: Routledge, 1991.
Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press, 1999.
Hopkin, Bart. Gravikords, Whirlies and Pyrophones. London: Batsford, 1997.
Jensenius, A. R., and M. Lyons, eds. A NIME Reader. Berlin: Springer, 2016.
Krefeld, Volker, and Michel Waisvisz. "The Hand in the Web: An Interview with Michel Waisvisz." Computer Music Journal 14: 28–33, 1990.
Lusted, Hugh S., and R. Benjamin Knapp. "Biomuse: Musical Performance Generated by Human Bioelectric Signals." Journal of the Acoustical Society of America 84: S179, 1988.
Mackay, Wendy E. "Responding to Cognitive Overload: Coadaptation between Users and Technology." Intellectica 30: 177–193, 2000.
McLuhan, Marshall. Understanding Media: The Extensions of Man. New York: McGraw-Hill, 1964.
Merleau-Ponty, Maurice. Phenomenology of Perception. London: Routledge, 1962.
Rosenboom, David. Biofeedback and the Arts, Results of Early Experiments. Vancouver, BC: Aesthetic Research Centre of Canada, 1976.
Schmidt, Richard A., and Timothy D. Lee. Motor Control and Learning. 5th ed. Champaign, IL: Human Kinetics, 1988.
Tanaka, Atau. "Musical Technical Issues in Using Interactive Instrument Technology with Application to the BioMuse." In Proceedings of the International Computer Music Conference: September 10–15, 1993, edited by Sadamu Ohteru. San Francisco: International Computer Music Association, 1993.
Varela, Francisco J., Evan Thompson, and Eleanor Rosch. The Embodied Mind: Cognitive Science and Human Experience. Cambridge, MA: MIT Press, 1992.
Atau Tanaka uses muscle sensing
with the EMG signal in musical performance
where the human body becomes musical
instrument. He studies our encounters with
sound, be they in music or in the everyday, as
a form of phenomenological experience. Atau's first inspirations came upon meeting John Cage during Cage's Norton Lectures at Harvard; he would later go on to re-create Cage's Variations VII with Matt Wand and :zoviet*france:. He formed
Sensorband with Zbigniew Karkowski and Edwin
van der Heide and in the Japanoise scene he
has played alongside artists like Merzbow,
Otomo, KK Null and others. He has been
artistic co-director of STEIM, teacher at
IRCAM, and researcher at Sony CSL. He is
professor at Goldsmiths in London.
Marco Donnarumma is an Italian
performance artist, new media artist and
scholar based in Berlin. His work addresses
the relationship between body, politics and
technology. He is widely known for his
performances fusing sound, computation and
biotechnology. Ritual, shock and entrainment
are key elements to his aesthetics. Donnarumma
is often associated with cyborg and
posthuman artists and is acknowledged for his
contribution to human-machine interfacing
through the unconventional use of muscle sound
and biofeedback. He is a Research Fellow at
Berlin University of the Arts in collaboration
with the Neurorobotics Research Lab at Beuth
University of Applied Sciences Berlin.