Research at NIME has incorporated embodied perspectives from design and HCI communities to explore how instruments and performers shape each other in interaction. Material perspectives also reveal other more-than-human factors’ influence on musical interaction. We propose an additional, currently unaddressed perspective in instrument design: the influence of the body not only as the locus of experience, but as a physical, entangled aspect of more-than-human musicking. Proposing a practice of “Body Lutherie”, we explore how digital instrument designers can honour and work with living, dynamic bodies. Our design of a breath-based vocal wearable instrument incorporated uncontrollable aspects of a vocalist’s body and its physical change over different timescales. We distinguish the body in the design process and acknowledge its agency in vocal instrument design. Reflection on our co-design process between vocal pedagogy and eTextile fashion perspectives demonstrates how Body Lutherie can generate empathy and understanding of the body as a collaborator in future instrument design and artistic practice.
Shifting Ambiguity, Collapsing Indeterminacy: Designing with Data as Baradian Apparatus
Courtney N. Reed, Adan L. Benito, Franco Caspe, and 1 more author
This paper examines how digital systems designers distil the messiness and ambiguity of the world into concrete data that can be processed by computing systems. Using Karen Barad’s agential realism as a guide, we explore how data is fundamentally entangled with the tools and theories of its measurement. We examine data-enabled artefacts acting as Baradian apparatuses: they do not exist independently of the phenomenon they seek to measure, but rather collect and co-produce observations from within their entangled state; the phenomenon and the apparatus co-constitute one another. Connecting Barad’s quantum view of indeterminacy to the prevailing HCI discourse on the opportunities and challenges of ambiguity, we suggest that the very act of trying to stabilise a conceptual interpretation of data within an artefact has the paradoxical effect of amplifying and shifting ambiguity in interaction. We illustrate these ideas through three case studies from our own practices of designing digital musical instruments (DMIs). DMIs necessarily encode symbolic and music-theoretical knowledge as part of their internal operation, even though conceptual knowledge is not their intended outcome. In each case, we explore the nature of the apparatus, what phenomena it co-produces, and where the ambiguity lies, suggesting approaches for design using these abstract theoretical frameworks.
Explainable AI and Music
Nick Bryan-Kinns, Berker Banar, Corey Ford, and 3 more authors
In Artificial Intelligence for Art Creation and Understanding, Jul 2024
The field of eXplainable Artificial Intelligence (XAI) has become a hot topic, examining how machine learning models such as neural nets and deep learning techniques can be made more understandable to humans. However, there is very little research on XAI for the arts. This chapter explores what XAI might mean for AI and art creation by exploring the potential of XAI for music generation. One hundred AI and music papers are reviewed to illustrate how AI models are being explained, or more often not explained, and to suggest some ways in which we might design XAI systems to better help humans understand what an AI model is doing when it generates music. The chapter then demonstrates how a latent space model for music generation can be made more explainable by extending the MeasureVAE architecture to include explainable attributes in combination with offering real-time music generation. The chapter concludes with four key challenges for XAI for music and the arts more generally: i) the nature of explanation; ii) the effect of AI models, features, and training sets on explanation; iii) user-centred design of XAI; iv) interaction design of explainable interfaces.
Sonic Entanglements with Electromyography: Between Bodies, Signals, and Representations
Courtney N. Reed, Landon Morrison, Andrew P. McPherson, and 2 more authors
In Proceedings of the 2024 ACM Designing Interactive Systems Conference, Jul 2024
This paper investigates sound and music interactions arising from the use of electromyography (EMG) to instrumentalise signals from muscle exertion of the human body. We situate EMG within a family of embodied interaction modalities, where it occupies a middle ground: considered a “signal from the inside” compared with external observations of the body (e.g., motion capture), but also seen as more volitional than neurological states recorded by brain electroencephalogram (EEG). To understand the messiness of gestural interaction afforded by EMG, we revisit the phenomenological turn in HCI, reading Paul Dourish’s work on the transparency of “ready-to-hand” technologies against the grain of recent posthumanist theories, which offer a performative interpretation of musical entanglements between bodies, signals, and representations. We take music performance as a use case, reporting on the opportunities and constraints posed by EMG in workshop-based studies of vocal, instrumental, and electronic practices. We observe that, across our diverse range of musical practices, participants consistently challenged notions of EMG as a transparent tool that directly registered the state of the body, reporting instead that it took on “present-at-hand” qualities, defamiliarising the performer’s own sense of themselves and reconfiguring their embodied practice.
Auditory imagery ability influences accuracy when singing with altered auditory feedback
Courtney N. Reed, Marcus Pearce, and Andrew McPherson
In this preliminary study, we explored the relationship between auditory imagery ability and the maintenance of tonal and temporal accuracy when singing and audiating with altered auditory feedback (AAF). Actively performing participants sang and audiated (sang mentally but not aloud) a self-selected piece in AAF conditions, including upward pitch-shifts and delayed auditory feedback (DAF), and with speech distraction. Participants with higher self-reported scores on the Bucknell Auditory Imagery Scale (BAIS) produced a tonal reference that was less disrupted by pitch shifts and speech distraction than those with lower scores. However, there was no observed effect of BAIS score on temporal deviation when singing with DAF. Auditory imagery ability was not related to the experience of having studied music theory formally, but was significantly related to the experience of performing. The significant effect of auditory imagery ability on tonal reference deviation remained even after partialling out the effect of performing experience. The results indicate that auditory imagery ability plays a key role in maintaining an internal tonal center during singing but has at most a weak effect on temporal consistency. In this article, we outline future directions in understanding the multifaceted role of auditory imagery ability in singers’ accuracy and expression.
Base and Stitch: Evaluating eTextile Interfaces from a Material-Centric View
L. Vineetha Rallabandi, Alice C. Haynes, Courtney N. Reed, and 1 more author
In Proceedings of the Eighteenth International Conference on Tangible, Embedded, and Embodied Interaction, Feb 2024
Fabrics are seen as the foundation for e-textile interfaces but contribute their own tactile properties to interaction. We examine the role of fabrics in gestural interaction from a novel, textile-focused view. We replicated an eTextile sensor and interface for rolling and pinching gestures on four different fabric swatches and invited six participants, including both designers and lay-users, to interact with them. Using a semi-structured interview, we examined their interaction with the materials and how they perceived movement and feedback from the textile sensor and a visual GUI. We analyzed participants’ responses using a joint, reflexive thematic analysis and propose two key considerations for research in e-textile design: 1) both sensor and fabric contribute their own, inseparable materiality, and 2) wearable sensing must be evaluated with respect to culturally situated bodies and orientation. Expanding on material-oriented design research, we proffer that the evaluation of eTextiles must also be material-led: it cannot be decontextualized and must be grounded within a soma-aware and situated context.
Liminal Space: A Performance with RaveNET
Rachel Freire, Valentin Martinez-Missir, Courtney N. Reed, and 1 more author
In Proceedings of the Eighteenth International Conference on Tangible, Embedded, and Embodied Interaction, Feb 2024
We present our musical performance exploration of liminal spaces, which focuses on the interconnected physicality of bodies in music, using biosignals and gestural, movement-based interaction to shape live performances in novel ways. Physical movement is important in structuring performance, providing cues across musical ensembles, and non-verbally informing other musicians of intention. This is especially true for improvised work. Our performance involves the use of our musicking bodies to modulate audio signals. Three bespoke wearable nodes modulate the performance through control voltages (CV) and interface with specific technical aspects of our instruments and techniques: 1) an “anti-corset” that measures the expansion and resistance of Reed’s abdomen while singing, 2) an augmented glove that assists Strohmeier’s bass/guitar signal routing across his pedal board and modular setup, and 3) a cap-like device that captures Martinez-Missir’s subtle facial expressions as he manipulates his modular synthesizer and drum machine setup. Through these performances we explore the notion of control in improvised musical performance and the interconnectedness and communication within our ensemble as we learn to collaborate and interpret each other’s bodies in this novel interaction.
RaveNET: Connecting People and Exploring Liminal Space through Wearable Networks in Music Performance
Rachel Freire, Valentin Martinez-Missir, Courtney N. Reed, and 1 more author
In Proceedings of the Eighteenth International Conference on Tangible, Embedded, and Embodied Interaction, Feb 2024
RaveNET connects people to music, enabling musicians to modulate sound using signals produced by their own bodies or the bodies of others. We present three wearable prototype nodes in an inaugural RaveNET performance: Bones, an anti-corset, uses capacitive sensing to detect stretch as the singer breathes. Tendons, a half-glove, measures galvanic skin response, pulse, and movement of the bass player’s hands. Veins, a cap with electrodes for surface electromyography, captures the facial expressions of the drum machine operator. These signals are filtered, normalized, and amplified to control voltage levels to modulate sound. Together, musicians and nodes form RaveNET and engage with shared liminal experiences. In designing these wearables and evaluating them in performance, we reflect on our creative processes, spaces between our different bodies, our presence and control within the network, and how this made us adapt our movements in order to be noticed and heard.
2023
A Guide to Evaluating the Experience of Media and Arts Technology
Nick Bryan-Kinns, and Courtney N. Reed
In Creating Digitally. Intelligent Systems Reference Library, Dec 2023
Evaluation is essential to understanding the value that digital creativity brings to people’s experience, for example in terms of their enjoyment, creativity, and engagement. There is a substantial body of research on how to design and evaluate interactive arts and digital creativity applications. There is also extensive Human-Computer Interaction (HCI) literature on how to evaluate user interfaces and user experiences. However, it can be difficult for artists, practitioners, and researchers to navigate such a broad and disparate collection of materials when considering how to evaluate technology they create that is at the intersection of art and interaction. This chapter provides a guide to designing robust user studies of creative applications at the intersection of art, technology and interaction, which we refer to as Media and Arts Technology (MAT). We break MAT studies down into two main kinds: proof-of-concept and comparative studies. As MAT studies are exploratory in nature, their evaluation requires the collection and analysis of both qualitative data such as free text questionnaire responses, interviews, and observations, and also quantitative data such as questionnaires, number of interactions, and length of time spent interacting. This chapter draws on over 20 years of experience of designing and evaluating novel interactive systems to provide a concrete template on how to structure a study to evaluate MATs that is both rigorous and repeatable, and how to report study results that are publishable and accessible to a wide readership in art and science communities alike.
Do You Hear What I Hear?
Simin Yang, Mathieu Barthet, Courtney N. Reed, and 1 more author
This installment of Computer’s series highlighting the work published in IEEE Computer Society journals comes from IEEE Transactions on Affective Computing. Musical performance is often described as expressing emotion. However, the human perception of emotion in music is not well understood. The studies by Yang et al. examine listeners’ emotional perception over time in response to a performance of a single musical piece, experienced in live concert conditions and, in the lab, through video recordings. The authors aimed to find out the following: What level of agreement exists between listeners of the same performance? How are perceived emotions related to the semantic features of the music (expressible in linguistic terms) and to machine-extractable music features? What aspects of the music itself and of the listener, like music expertise, influence perceived emotions?
Triangle Simplex Plots for Representing and Classifying Heart Rate Variability
Mateusz Soliński, Courtney N. Reed, and Elaine Chew
In 2023 Computing in Cardiology Conference (CinC), Oct 2023
Simplex plots afford barycentric mapping and visualisation of the ratio of three variables, summed to a constant, as positions in an equilateral triangle (2-simplex); for instance, time distribution in three-interval musical rhythms. We propose a novel use of simplex plots to visualise the balance of autonomic variables and classification of autonomic states during baseline and music performance. RR interval series extracted from electrocardiographic (ECG) traces were collected from a musical trio (pianist, violinist, cellist) in a baseline (5 min) and music performance (~10 min) condition. Schubert’s Trio Op. 100, Andante con moto, was performed in nine rehearsal sessions over five days. Each RR interval series’ very low (VLF), low (LF), and high (HF) frequency component power values, calculated in 30 sec windows (hop size 15 sec), were normalised to 1 and visualised in triangle simplex plots. Spectral clustering was used to cluster data points for baseline and music conditions, and classification accuracy between clustered and true condition labels was correlated with session order. Strong negative correlation was observed for the violinist (r = –0.80, p ≤ .01, accuracy range: [0.64, 0.94]) and pianist (r = –0.62, p = .073, [0.64, 0.80]), suggesting adaptation of their cardiac response (reduction between baseline and performance) over the performances; a weakly negative, non-significant correlation was observed for the cellist (r = –0.23, p = .545, [0.50, 0.61]), indicating similarity between baseline and performance over time. Using simplex plots, we were able to effectively represent VLF, LF, and HF ratios and track changes in autonomic response over a series of music rehearsals, observing autonomic states and changes over time.
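To make the barycentric mapping concrete: each window's normalised (VLF, LF, HF) powers, summing to 1, are the barycentric weights of a point inside the triangle. Below is a minimal Python sketch of this mapping; the band powers and condition labels are illustrative placeholders, not the paper's data or pipeline.

```python
# A minimal sketch of the barycentric mapping behind a triangle simplex plot.
# Band powers here are randomly generated placeholders, not the study's data.
import numpy as np
import matplotlib.pyplot as plt

# Vertices of the 2-simplex (equilateral triangle) for VLF, LF, HF.
VERTS = np.array([[0.0, 0.0],            # VLF corner
                  [1.0, 0.0],            # LF corner
                  [0.5, np.sqrt(3) / 2]])  # HF corner

def to_simplex_xy(powers):
    """Map rows of (VLF, LF, HF) power to triangle coordinates.

    Each row is normalised to sum to 1 (barycentric weights), then
    expressed as a convex combination of the triangle's vertices.
    """
    powers = np.asarray(powers, dtype=float)
    weights = powers / powers.sum(axis=1, keepdims=True)
    return weights @ VERTS

# Illustrative band-power triples for two conditions (arbitrary numbers).
baseline = np.random.dirichlet([2, 5, 5], size=50)
performance = np.random.dirichlet([4, 6, 2], size=50)

for data, label in [(baseline, "baseline"), (performance, "performance")]:
    xy = to_simplex_xy(data)
    plt.scatter(xy[:, 0], xy[:, 1], s=10, label=label)
plt.plot(*np.vstack([VERTS, VERTS[0]]).T, "k-")  # triangle outline
plt.legend()
plt.axis("equal")
plt.show()
```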
Time Delay Stability Analysis of Pairwise Interactions Amongst Ensemble-Listener RR Intervals and Expressive Music Features
Mateusz Soliński, Courtney N. Reed, and Elaine Chew
In 2023 Computing in Cardiology Conference (CinC), Oct 2023
Time Delay Stability (TDS) can reveal physiological function and states in networked organs. Here, we introduce a novel application of TDS to a musical setting to study interactions between the RR intervals of ensemble musicians and a listener, and music properties. Three musicians performed a movement from Schubert’s Trio Op. 100 nine times in the company of one listener. Their RR intervals were collected during baseline (5 min, silence) and performances (~10 min each). Loudness and tempo were extracted from recorded music audio. Regions of stable optimal time delay were identified during baseline and music, in shuffled data, and in data pairs from incongruent recordings. Bootstrapping was employed to obtain mean TDS probabilities (calculated over all performances). A significant difference in mean TDS probability between music and baseline is observed for all musician pairs (p<.001) and for cello-listener (p=.025), mean TDS probability being greater during music. A significant decrease in mean TDS probability was observed for piano-violin (p<.001), violin-tempo (p=.045), and cello-tempo (p<.001) for incongruent pairs. The highest inter-musician TDS probabilities were observed in musically tense sections: the final climax before the music dies down at the ending, and mid-piece in a suspenseful swell. This framework offers a promising way to track dynamic RR interval interactions between people engaged in a shared activity and, in this musical activity, between the people and music properties.
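The core of TDS can be sketched as follows: estimate the cross-correlation-optimal lag between two signals in sliding windows, then score how often that lag stays (near-)constant over consecutive windows. The Python sketch below illustrates the idea with placeholder window, lag, and stability parameters; it is not the paper's implementation.

```python
# A minimal sketch of the time delay stability (TDS) idea. All window/lag/
# stability parameters are illustrative, not the study's settings.
import numpy as np

def optimal_lag(x, y, max_lag):
    """Lag (in samples) maximising |cross-correlation| between x and y."""
    lags = np.arange(-max_lag, max_lag + 1)
    x = (x - x.mean()) / (x.std() + 1e-12)
    y = (y - y.mean()) / (y.std() + 1e-12)
    corr = [np.mean(x[max(0, -l):len(x) - max(0, l)] *
                    y[max(0, l):len(y) - max(0, -l)]) for l in lags]
    return lags[int(np.argmax(np.abs(corr)))]

def tds_probability(x, y, win=60, hop=30, max_lag=10, run=4, tol=1):
    """Fraction of windows in a 'stable' run: `run` consecutive windows
    whose optimal lags all lie within `tol` samples of each other."""
    lags = [optimal_lag(x[i:i + win], y[i:i + win], max_lag)
            for i in range(0, len(x) - win + 1, hop)]
    stable = np.zeros(len(lags), dtype=bool)
    for i in range(len(lags) - run + 1):
        chunk = lags[i:i + run]
        if max(chunk) - min(chunk) <= tol:
            stable[i:i + run] = True
    return stable.mean() if len(lags) else 0.0

# Illustrative use: a shared slow drive keeps the lag stable; noise does not.
t = np.arange(600)
drive = np.sin(2 * np.pi * t / 50)
x = drive + 0.3 * np.random.randn(len(t))
y = np.roll(drive, 3) + 0.3 * np.random.randn(len(t))
print(tds_probability(x, y), tds_probability(x, np.random.randn(len(t))))
```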
Querying Experience with Musical Interaction
Courtney N. Reed, Eevee Zayas-Garin, and Andrew McPherson
In Proceedings of the International Conference on New Interfaces for Musical Expression, May 2023
With this workshop, we aim to bring together researchers with the common interest of querying, articulating and understanding experience in the context of New Interfaces for Musical Expression, and to jointly identify challenges, methodologies and opportunities in this space. Furthermore, we hope it serves as a platform for strengthening the community of researchers working with qualitative and phenomenological methods around the design of DMIs and HCI applied to musical interaction.
Negotiating Experience and Communicating Information Through Abstract Metaphor
Courtney N. Reed, Paul Strohmeier, and Andrew P. McPherson
In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Apr 2023
An implicit assumption in metaphor use is that it requires grounding in a familiar concept, prominently seen in the popular Desktop Metaphor. In human-to-human communication, however, abstract metaphors, without such grounding, are often used with great success. To understand when and why metaphors work, we present a case study of metaphor use in voice teaching. Voice educators must teach about subjective, sensory experiences and rely on abstract metaphor to express information about unseen and intangible processes inside the body. We present a thematic analysis of metaphor use by 12 voice teachers. We found that metaphor works not because of strong grounding in the familiar, but because of its ambiguity and flexibility, allowing shared understanding between individual lived experiences. We summarise our findings in a model of metaphor-based communication. This model can be used as an analysis tool within the existing taxonomies of metaphor in user interaction for better understanding why metaphor works in HCI. It can also be used as a design resource for thinking about metaphor use and abstracting metaphor strategies from both novel and existing designs.
Tactile Symbols with Continuous and Motion-Coupled Vibration: An Exploration of Using Embodied Experiences for Hermeneutic Design
Nihar Sabnis, Dennis Wittchen, Gabriela Vega, and 2 more authors
In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Apr 2023
With most digital devices, vibrotactile feedback consists of rhythmic patterns of continuous vibration. In contrast, when interacting with physical objects, we experience many of their material properties through vibration which is not continuous, but dynamically coupled to our actions. We assume the first style of vibration leads to hermeneutic mediation, while the second style leads to embodied mediation. What if both types of mediation could be used to design tactile symbols? To investigate this, five haptic experts designed tactile symbols using continuous and motion-coupled vibration. Experts were interviewed to understand their symbols and design approach. A thematic analysis revealed themes showing that lived experience and affective qualities shaped design choices, that experts optimized for passive or active symbols, and that they considered context as part of the design. Our study suggests that adding embodied experiences as a design resource changes how participants think of tactile symbol design: designing for context broadens the scope of the symbol, and changing the type of vibration influences perceived valence and arousal, expanding the symbols’ affective repertoire.
Haptic Servos: Self-Contained Vibrotactile Rendering System for Creating or Augmenting Material Experiences
Courtney N. Reed*, Nihar Sabnis*, Dennis Wittchen*, and 3 more authors
In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, Apr 2023
When vibrations are synchronized with our actions, we experience them as material properties. This has been used to create virtual experiences like friction, counter-force, compliance, or torsion. Implementing such experiences is non-trivial, requiring high temporal resolution in sensing, high-fidelity tactile output, and low latency. To make this style of haptic feedback more accessible to non-domain experts, we present Haptic Servos: self-contained haptic rendering devices which encapsulate all timing-critical elements. We characterize Haptic Servos’ real-time performance, showing that system latency is <5 ms. We explore the subjective experiences they can evoke, highlighting that qualitatively distinct experiences can be created based on input mapping, even if the stimulation parameters and algorithm remain unchanged. A workshop demonstrated that users new to Haptic Servos require approximately ten minutes to set up a basic haptic rendering system. Haptic Servos are open source; we invite others to copy and modify our design.
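As a flavour of what "synchronized with our actions" means computationally, the sketch below renders a short vibration grain each time a sensed position advances by a fixed spatial step, so vibration timing follows the user's movement rather than the clock. It is an offline Python illustration with assumed parameters, not the Haptic Servos firmware.

```python
# A minimal offline sketch of motion-coupled vibrotactile rendering: emit a
# decaying sinusoidal "grain" per fixed step of travel. Parameters are
# illustrative assumptions, not Haptic Servos' actual settings.
import numpy as np

SR = 8000        # output sample rate (Hz)
GRAIN_HZ = 250   # grain carrier frequency
GRAIN_MS = 8     # grain duration
STEP = 0.002     # metres of travel per grain ("texture pitch")

def grain():
    t = np.arange(int(SR * GRAIN_MS / 1000)) / SR
    return np.sin(2 * np.pi * GRAIN_HZ * t) * np.exp(-t / 0.002)

def render(position):
    """position: sensed positions (metres), one sample per output sample."""
    out = np.zeros(len(position))
    g = grain()
    last = position[0]
    for i, p in enumerate(position):
        if abs(p - last) >= STEP:      # crossed one spatial step
            last = p
            end = min(len(out), i + len(g))
            out[i:end] += g[:end - i]  # overlap-add the grain
    return out

# Illustrative use: a finger accelerating across a surface produces grains
# that bunch together in time, which reads as a texture under the finger.
t = np.arange(SR) / SR
vibration = render(0.05 * t ** 2)
```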
As the Luthiers Do: Designing with a Living, Growing, Changing Body-Material
Through soma-centric research, we see the different interaction roles of our bodies: they are the locus of our experience, a conduit for our expression and engagement, a sensor of feedback in the world, and a collaborator in our interaction with it. More “traditional” examinations of the body might look at control over it; for instance, in my research around vocal embodiment, I see many teachers and practitioners alike talking about how we can maintain control over the body. However, bodies are living, inconsistent, and typically weird. In reality, we do not have as much control over them as we would like or think we do. In this position paper, I will touch on my research around vocal physiology and sonified and vibrotactile feedback as I frame our role in a new light—designers as Body Luthiers, who must address the body as a material with inconsistencies, flaws, and variability, and work with it as a partner, embracing its uniqueness and changeability.
Designing Interactive Shoes for Tactile Augmented Reality
Dennis Wittchen, Valentin Martinez-Missir, Sina Mavali, and 3 more authors
In Proceedings of the Augmented Humans International Conference 2023, Mar 2023
Augmented footwear has become an increasingly common research area. However, as this is a comparatively new direction in HCI, researchers and designers are not able to build upon common platforms. We discuss the design space of shoes for augmented tactile reality, focussing on physiological and biomechanical factors as well as technical considerations. We present an open source example implementation from this space, intended as an experimental platform for vibrotactile rendering and tactile AR, and provide details on experiences that could be evoked with such a system. Anecdotally, the new prototype provided experiences of material properties like compliance, as well as altered perceptions of wearers’ movements and agency. We intend our work to lower the barrier of entry for new researchers and to support the field of tactile rendering in footwear in general by making it easier to compare results between studies.
The Body as Sound: Unpacking Vocal Embodiment through Auditory Biofeedback
Courtney N. Reed, and Andrew P. McPherson
In Proceedings of the Seventeenth International Conference on Tangible, Embedded, and Embodied Interaction, Feb 2023
Multi-sensory experiences underpin embodiment, whether with the body itself or technological extensions of it. Vocalists experience intensely personal embodiment, as vocalisation has few outwardly visible effects and kinaesthetic sensations occur largely within the body, rather than through external touch. We explored this embodiment using a probe which sonified laryngeal muscular movements and provided novel auditory feedback to two vocalists over a month-long period. Somatic and micro-phenomenological approaches revealed that the vocalists understand their physiology through its sound, rather than awareness of the muscular actions themselves. The feedback shaped the vocalists’ perceptions of their practice and revealed a desire for reassurance about exploration of one’s body when the body-as-sound understanding was disrupted. Vocalists experienced uncertainty and doubt without affirmation of perceived correctness. This research also suggests that technology is viewed as infallible and highlights expectations that exist about its ability to dictate success, even when we desire or intend to explore.
Being Meaningful: Weaving Soma-Reflective Technological Mediations into the Fabric of Daily Life
Alice Haynes, Courtney N. Reed, Charlotte Nordmoen, and 1 more author
In Proceedings of the Seventeenth International Conference on Tangible, Embedded, and Embodied Interaction, Feb 2023
A one-size-fits-all design mentality, rooted in objective efficiency, is ubiquitous in our mass-production society. This can negate people’s experiences, bodies, and narratives. Ongoing HCI research proposes design for meaningful relations, but for many researchers, the practical implementation of these philosophies remains somewhat intangible. In this Studio, we playfully tackle this space by engaging with the nuances of soft, flexible, and organic materials, collectively designing probes to embrace plurality, embody meaning, and encourage soma-reflection. Focusing on materiality and practices from e-textiles, soft robotics, and biomaterials research, we address technology’s role as a mediator of our experiences and determiner of our realities. The processes and probes developed in this Studio will serve as an experiential manifesto, providing practitioners with tools to deepen their own practices for designing soma-reflective tangible and embodied interaction. The Studio will form the first steps for ongoing collaboration, focusing on bespoke design and curation of meaningful, personal relationships.
Imagining & Sensing: Understanding and Extending the Vocalist-Voice Relationship Through Biosignal Feedback
Courtney N. Reed
PhD Computer Science, Queen Mary University of London, Feb 2023
The voice is body and instrument. Third-person interpretation of the voice by listeners, vocal teachers, and digital agents is centred largely around audio feedback. For a vocalist, physical feedback from within the body provides an additional interaction. The vocalist’s understanding of their multi-sensory experiences is through tacit knowledge of the body. This knowledge is difficult to articulate, yet awareness and control of the body are innate. Amid the ever-increasing emergence of technology which quantifies or interprets physiological processes, we must also remain conscious of embodiment and human perception of these processes. Focusing on the vocalist-voice relationship, this thesis expands knowledge of human interaction and how technology influences our perception of our bodies. To unite these different perspectives in the vocal context, I draw on mixed methods from cognitive science, psychology, music information retrieval, and interactive system design. Objective methods such as vocal audio analysis provide a third-person observation. Subjective practices such as micro-phenomenology capture the experiential, first-person perspectives of the vocalists themselves. This quantitative-qualitative blend provides detail not only on novel interaction, but also on how technology shapes our existing understanding of the body.
2022
Exploring Experiences with New Musical Instruments through Micro-phenomenology
Courtney N. Reed, Charlotte Nordmoen, Andrea Martelloni, and 6 more authors
In Proceedings of the International Conference on New Interfaces for Musical Expression, Jun 2022
This paper introduces micro-phenomenology, a research discipline for exploring and uncovering the structures of lived experience, as a beneficial methodology for studying and evaluating interactions with digital musical instruments. Compared to other subjective methods, micro-phenomenology evokes and returns one to the moment of experience, allowing access to dimensions and observations which may not be recalled in reflection alone. We present a case study of five micro-phenomenological interviews conducted with musicians about their experiences with existing digital musical instruments. The interviews reveal deep, clear descriptions of different modalities of synchronic moments in interaction, especially in tactile connections and bodily sensations. We highlight the elements of interaction captured in these interviews which would not have been revealed otherwise and the importance of these elements in researching perception, understanding, interaction, and performance with digital musical instruments.
Communicating Across Bodies in the Voice Lesson
Courtney N. Reed
In ACM CHI Workshop on Tangible Interaction for Well-Being, Apr 2022
In this position paper, I would like to introduce my research on vocalists and their relationships with their bodies, and how the use of haptic feedback can improve these connections and the way we communicate sensory experience. I use the voice lesson and vocal performance as an environment to understand more broadly how people perceive very refined movements which they feel internally. My research seeks to understand how we communicate these sensory experiences in human-to-human interaction and how we can augment or communicate sensory experience through technology. I examine perception of these experiences through different feedback modalities, namely auditory and haptic feedback. Providing new ways to communicate our sensory experiences can lead to improvements in understanding between two individuals (for instance teacher and student). In virtual singing lessons, where the majority of voice study is being done in early 2022, this is especially important, as many of the common ways of interacting with the voice have disappeared with the transition to online interaction.
Sensory Sketching for Singers
Courtney N. Reed
In ACM CHI Workshop on Sketching Across the Senses, Apr 2022
This position paper outlines my study of vocalists and their relationships with the voice as both instrument and part of the body. I study this embodiment through a phenomenological perspective, employing somaesthetics and micro-phenomenology to explore the tacit relationships that singers have with their bodies. While verbal metaphor is traditionally used to articulate experience in teaching voice, I also use body mapping and material speculation to help articulate tactile and auditory experiences while singing.
Singing Knit: Soft Knit Biosensing for Augmenting Vocal Performances
Courtney N. Reed, Sophie Skach, Paul Strohmeier, and 1 more author
In Proceedings of the Augmented Humans International Conference 2022, Mar 2022
This paper discusses the design of the Singing Knit, a wearable knit collar for measuring a singer’s vocal interactions through surface electromyography. We improve the ease and comfort of multi-electrode bio-sensing systems by adapting knit e-textile methods. The goal of the design was to preserve the capabilities of rigid electrode sensing while addressing its shortcomings, focusing on comfort and reliability during extended wear, practicality and convenience for performance settings, and aesthetic value. We use conductive, silver-plated nylon jersey fabric electrodes in a full rib knit accessory for sensing laryngeal muscular activation. We discuss the iterative design and material decision-making process as a method for building integrated soft-sensing wearable systems for similar settings. Additionally, we discuss how design choices made throughout the construction process reflect the collar’s use in a musical performance context.
Vibro-Touch
Courtney N. Reed, and Nihar Sabnis
In ACM TEI Studio “How Tangible is TEI?” Exploring Swatches as a New Academic Publication Format, Feb 2022
In our research, we examine tactile representations which are used for user interaction and system notifications. This swatch works as an interface to store and play back vibrotactile stimuli, allowing for easy, cost-effective (~€20) reproduction and exploration of tactile feedback. Typically, feedback designed and used in research is described through written formats. This swatch provides a companion to physically experience the vibrations. The tactile sensation is stored on a microcontroller and played back through a speaker which works as an actuator. The swatch could be given to others to test and experience different sensations in a simplified, modular format. For instance, rather than redesigning feedback for each new study, the tactile feedback could be shared and reproduced for new research. The code on the microcontroller can be changed or updated to have multiple “swatches” in one. In other applications, the swatch could also play audio feedback.
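A minimal sketch of the playback idea, written in MicroPython-style Python for a Raspberry Pi Pico-class board: a stored amplitude envelope is streamed to the speaker-as-actuator via PWM. The board, pin, and carrier settings are assumptions for illustration, not the swatch's actual firmware.

```python
# A minimal MicroPython sketch of the swatch idea: store an amplitude
# envelope on the microcontroller and play it through a speaker/actuator
# via PWM. Pin and carrier settings are assumed, not the swatch's code.
from machine import Pin, PWM
import math
import time

pwm = PWM(Pin(15))   # actuator on GPIO 15 (assumed wiring)
pwm.freq(200)        # vibrotactile carrier frequency

# Stored stimulus: a 0..1 amplitude envelope, one value per millisecond.
ENVELOPE = [math.sin(math.pi * i / 200) for i in range(200)]  # 200 ms bump

def play(envelope):
    for a in envelope:
        pwm.duty_u16(int(a * 65535))  # scale amplitude to 16-bit duty
        time.sleep_ms(1)
    pwm.duty_u16(0)                   # silence when done

play(ENVELOPE)
```

Swapping in a different list of envelope values is all it takes to carry a new stimulus on the same hardware, which is the modularity the swatch format is after.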
Examining Embodied Sensation and Perception in Singing
Courtney N. Reed
In Sixteenth International Conference on Tangible, Embedded, and Embodied Interaction, Feb 2022
This paper introduces my PhD research on the relationship which vocalists have with their voice. The voice, both instrument and body, provides a unique perspective from which to examine embodied practice. Interaction with the voice is largely without a physical interface, and it is difficult to describe the sensation of singing; however, voice pedagogy has been successful at using metaphor to communicate sensory experience between student and teacher. I examine the voice through several different perspectives, including experiential, physiological, and communicative interactions, and explore how we convey sensations in voice pedagogy and how perception of the body is shaped through the experience of living in it. Further, by externalising internal movement using sonified surface electromyography, I aim to give presence to aspects of vocal movement which have become subconscious or automatic. The findings of this PhD will provide understanding of how we perceive the experience of living within the body and how we perform using the body as an instrument.
2021
Exploring XAI for the Arts: Explaining Latent Space in Generative Music
Nick Bryan-Kinns, Berker Banar, Corey Ford, and 4 more authors
In 1st Workshop on eXplainable AI Approaches for Debugging and Diagnosis (XAI4Debugging@NeurIPS2021), Dec 2021
Explainable AI has the potential to support more interactive and fluid co-creative AI systems which can creatively collaborate with people. To do this, creative AI models need to be amenable to debugging by offering eXplainable AI (XAI) features which are inspectable, understandable, and modifiable. However, currently there is very little XAI for the arts. In this work, we demonstrate how a latent variable model for music generation can be made more explainable; specifically we extend MeasureVAE which generates measures of music. We increase the explainability of the model by: i) using latent space regularisation to force some specific dimensions of the latent space to map to meaningful musical attributes, ii) providing a user interface feedback loop to allow people to adjust dimensions of the latent space and observe the results of these changes in real-time, iii) providing a visualisation of the musical attributes in the latent space to help people understand and predict the effect of changes to latent space dimensions. We suggest that in doing so we bridge the gap between the latent space and the generated musical outcomes in a meaningful way which makes the model and its outputs more explainable and more debuggable.
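For readers curious what (i) can look like in practice, below is a minimal PyTorch sketch of attribute-style latent space regularisation: pairs of examples whose ordering along one latent dimension disagrees with their ordering in a musical attribute are penalised, encouraging that dimension to track the attribute. It illustrates the general technique, not MeasureVAE's actual code; the attribute name and hyperparameters are assumptions.

```python
# A minimal PyTorch sketch of latent space regularisation: make one latent
# dimension vary monotonically with a musical attribute (e.g., note density),
# so moving along that dimension predictably changes the attribute.
import torch
import torch.nn.functional as F

def attribute_reg_loss(z, attr, dim, delta=10.0):
    """z: (batch, latent_dim) latent codes; attr: (batch,) attribute values.

    Penalises pairs whose ordering along z[:, dim] disagrees with their
    ordering in the attribute (a common sign/ordering formulation).
    """
    zd = z[:, dim].unsqueeze(1) - z[:, dim].unsqueeze(0)  # pairwise diffs
    ad = attr.unsqueeze(1) - attr.unsqueeze(0)
    return F.l1_loss(torch.tanh(delta * zd), torch.sign(ad))

# Illustrative use inside a VAE training step (recon/KL terms elided):
z = torch.randn(16, 32, requires_grad=True)  # stand-in for encoder output
note_density = torch.rand(16)                # stand-in attribute values
loss = attribute_reg_loss(z, note_density, dim=0)
loss.backward()
```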
The Role of Auditory Imagery and Altered Auditory Feedback in Singers’ Timing Accuracy
Courtney N. Reed, Andrew P. McPherson, and Marcus T. Pearce
In Proceedings of the Joint 16th International Conference on Music Perception and Cognition (ICMPC) and 11th Triennial Conference of the European Society for the Cognitive Science of Music (ESCOM), Jul 2021
Auditory imagery allows musicians to recall mental representations of sound and has been linked to better sensorimotor coordination, effective gestural communication with other performers, and the ability to perform with timing accuracy even when auditory feedback is disrupted. The predominance of auditory imagery in the multimodal relationships driving internal temporal models, however, remains unclear. This study explores how singers adapt to altered auditory feedback (AAF) using auditory imagery. We examine whether auditory imagery ability, measured using the Bucknell Auditory Imagery Scale, affects singers’ ability to maintain temporal accuracy when singing and audiating with AAF, and explore the significance of auditory imagery on timing and its role in multimodal imagery. Additionally, we focus on how imagery benefits musicians specifically, comparing timing error in a group of skilled performers.
Examining Emotion Perception Agreement in Live Music Performance
Simin Yang, Courtney N. Reed, Elaine Chew, and 1 more author
IEEE Transactions on Affective Computing, Jun 2021
Current music emotion recognition (MER) systems rely on emotion data averaged across listeners and over time to infer the emotion expressed by a musical piece, often neglecting time- and listener-dependent factors. These limitations can restrict the efficacy of MER systems and cause misjudgements. We present two exploratory studies on music emotion perception. First, in a live music concert setting, fifteen audience members annotated perceived emotion in the valence-arousal space over time using a mobile application. Analyses of inter-rater reliability yielded widely varying levels of agreement in the perceived emotions. A follow-up lab-based study to uncover the reasons for such variability was conducted, where twenty-one participants annotated their perceived emotions whilst viewing and listening to a video recording of the original performance and offered open-ended explanations. Thematic analysis revealed salient features and interpretations that help describe the cognitive processes underlying music emotion perception. Some of the results confirm known findings of music perception and MER studies. Novel findings highlight the importance of less frequently discussed musical attributes, such as musical structure, performer expression, and stage setting, as perceived across audio and visual modalities. Musicians are found to attribute emotion change to musical harmony, structure, and performance technique more than non-musicians. We suggest that accounting for such listener-informed music features can benefit MER in helping to address variability in emotion perception by providing reasons for listener similarities and idiosyncrasies.
Surface Electromyography for Sensing Performance Intention and Musical Imagery in Vocalists
Courtney N. Reed, and Andrew P. McPherson
In Proceedings of the Fifteenth International Conference on Tangible, Embedded, and Embodied Interaction, Feb 2021
Through experience, the techniques used by professional vocalists become highly ingrained, and much of the fine muscular control needed for healthy singing is executed using well-refined mental imagery. In this paper, we provide a method for observing intention and embodied practice using surface electromyography (sEMG) to detect muscular activation, in particular in the laryngeal muscles. Through sensing the electrical neural impulses causing muscular contraction, sEMG provides a unique measurement of user intention, whereas other sensors reflect the results of movement. In this way, we are able to measure movement in preparation, vocalised singing, and the use of imagery during mental rehearsal where no sound is produced. We present a circuit developed for use with the low-voltage activations of the laryngeal muscles; by sonifying these activations, we further provide feedback for vocalists to investigate and experiment with their own intuitive movements and intentions for creative vocal practice.
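As a rough illustration of the sonification step, the Python sketch below rectifies and low-passes an sEMG trace into an activation envelope and uses it to amplitude-modulate an audible tone. Sample rates, cutoff, and mapping are assumptions for illustration; this is not the paper's circuit or mapping.

```python
# A minimal sketch of sonifying sEMG: rectify and smooth the signal into an
# activation envelope, then modulate an oscillator's amplitude with it.
# All rates and frequencies are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

EMG_SR = 1000    # sEMG sample rate (Hz)
AUDIO_SR = 8000  # audio output rate (Hz)

def envelope(emg, cutoff=5.0):
    """Full-wave rectify, then low-pass to get an activation envelope."""
    b, a = butter(2, cutoff / (EMG_SR / 2))
    return filtfilt(b, a, np.abs(emg))

def sonify(emg, tone_hz=220.0):
    env = envelope(emg)
    env /= env.max() + 1e-12
    # Resample the envelope to audio rate, then modulate a sine tone.
    t_audio = np.arange(int(len(emg) * AUDIO_SR / EMG_SR)) / AUDIO_SR
    env_audio = np.interp(t_audio, np.arange(len(emg)) / EMG_SR, env)
    return env_audio * np.sin(2 * np.pi * tone_hz * t_audio)

# Illustrative use with a synthetic "burst" of muscle activity:
emg = np.random.randn(3000) * np.concatenate(
    [np.zeros(1000), np.ones(1000), np.zeros(1000)])
audio = sonify(emg)  # write out or play with any audio library
```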
2020
Surface Electromyography for Direct Vocal Control
Courtney N. Reed, and Andrew McPherson
In Proceedings of the International Conference on New Interfaces for Musical Expression, Jul 2020
This paper introduces a new method for direct control using the voice via measurement of vocal muscular activation with surface electromyography (sEMG). Digital musical interfaces based on the voice have typically used indirect control, in which features extracted from audio signals control the parameters of sound generation, for example in audio-to-MIDI controllers. By contrast, focusing on the musculature of the singing voice allows direct muscular control, or alternatively, combined direct and indirect control in an augmented vocal instrument. In this way we aim to preserve both the intimate relationship a vocalist has with their instrument and key timbral and stylistic characteristics of the voice, while expanding its sonic capabilities. This paper discusses other digital instruments which effectively utilise a combination of indirect and direct control, as well as a history of controllers involving the voice. Subsequently, a new method of direct control from physiological aspects of singing through sEMG and its capabilities are discussed. Future developments of the system are outlined, along with applications in performance studies, interactive live vocal performance, and educational and practice tools.
2019
Effects of Musical Stimulus Habituation and Musical Training on Felt Tension
Courtney N. Reed, and Elaine Chew
In Late Breaking/Demo at the Twentieth International Society for Music Information Retrieval Conference (ISMIR), Nov 2019
As a listener habituates to a stimulus, its impact is expected to decrease over time; this research investigates the impact of repetitions and time on felt tension. Farbood describes tension increase as “a feeling of rising intensity or impending climax” and decrease as “a feeling of relaxation or resolution.” Musical tension has been linked to structural properties of music such as chord movements, tonality, and section boundaries; these connections have in turn informed the design of quantitative models for musical tension. In a pilot study, 9 participants annotated their felt tension through a recorded live performance of Chopin’s Ballade No. 2 in F Maj, Op. 38, and a collage of the ballade from the Arrhythmia Suite by Chew et al. The piece includes a calm triplet motif and a tense foil with frequent dissonance and variable features. The difference in felt tension over time for musicians and non-musicians is found to be significant at the 5% level: in a comparison-of-means t-test, H₀: μ₁ = μ₂ is rejected with 7 degrees of freedom (t = −2.580, p = 0.0365). In musicians, the range of annotated values is greater (68.46, vs. 61.75 for non-musicians), while mean tension over time is lower (22.33, vs. 35.24 for non-musicians), suggesting that musicians are more aware of the full emotional range of their felt tension and supporting the heightened response of musicians to different musical stimuli.
2018
Interactions between felt emotion and musical change points in live music performance
Courtney N. Reed
MSc Computer Science, Queen Mary University of London, Aug 2018
This thesis in music cognition focuses on investigations of the felt emotions that arise through a listener’s response to musical change. The research conducted aims to connect felt emotional response to tension and pleasure in musical signatures, with a focus on qualitative categorization of participant responses. Participants indicated their felt emotional responses to three recorded pieces of music previously performed for them in a live setting. The focus of the research is on felt emotion, which is relatively unexplored compared to perceived emotion. Participants annotated their overall emotion (in valence-arousal dimensions) and tension through the music. They also annotated where they felt transition points and sudden changes to the overall emotional quality of the piece occurred. Interactions between felt emotions and musical features, including loudness, tempo, and harmonic and melodic tension, were defined over the length of the performance in both a pedagogical examination and a quantitative analysis with the use of music information retrieval software. The conclusions of this study will provide a basis for future music cognition research, especially in medical research of electrophysiological effects of mental stress on the heart and brain.