ICMC BOSTON 2025
Installations
50th Anniversary International Computer Music Conference
June 8-14, 2025
The installation track of ICMC 2025 received 56 submissions from artists and researchers across 12 countries, showcasing the expanding boundaries of sound art and interactive media. In alignment with this year’s theme of “Curiosity, Play, Innovation,” we accepted 19 installations that transform Emerson College’s Media Art Gallery and Bright Family Screening Room into laboratories of sonic exploration. The selected works span a rich spectrum of approaches—from immersive audiovisual environments and interactive sound sculptures to spatial audio experiences and video installations—each pushing the envelope of how we experience and interact with sound in physical space. Particularly noteworthy is the diversity of artistic practices represented, with creators employing everything from AI-driven systems and sensor-based interactions to acoustic phenomena and architectural resonances. Many works exemplify the democratization of technology that David Wessel championed, utilizing accessible tools and gallery-provided equipment to ensure that resource constraints don’t limit innovative artistic expression. We extend our gratitude to the 35 reviewers who carefully evaluated each submission through the double-blind process, considering not only artistic merit but also technical feasibility and the unique spatial requirements of installation art. These installations, accessible throughout the conference, invite participants to move beyond traditional concert hall experiences and engage with sound as a sculptural, architectural, and deeply interactive medium.
Please join us Wednesday evening for a Mini-Festival from 5:30 – 10:30pm at Emerson College — an evening celebrating the 2025 ICMC Gallery Installations, Screenings, & Soundwalks! The primary reception will take place in the Bright Family Screening Room Lobby at Paramount Center (559 Washington Street) from 5:30pm – 9:30pm with free alcoholic and non-alcoholic drinks and celebratory snacks. There will also be a special Gallery Opening at the Media Art Gallery (25 Avery Street – a 2 minute walk from Paramount Center) from 6-10pm. Have a drink, enjoy the screenings, grab a snack, and check out the gallery exhibition and soundwalks!
Installations: Screenings
All of the Installation works being shown in the Bright Family Screening Room will be screened twice on Wednesday 6/11 — first from 6-8 PM and a second time from 8:30-10:30 PM. Screening 1 will be identical to Screening 2.
Bright Family Screening Room
Emerson College, 559 Washington St.
ID 31
Resilience in Color: Sonic Portrayals of Women's Resistance
Resilience in Color: Sonic Portrayals of Women’s Resistance is an audiovisual installation project that translates the visual iconography of women’s global sociopolitical movements into an immersive audiovisual experience. It leverages the concept of RGB color distance to recontextualize powerful images of women exhibiting extraordinary resilience and courage against police brutality, dictatorships, censorship, and human-rights violations, aiming to amplify these acts of defiance.
The process begins with these images of defiance. A target color, often emblematic of the protest itself or inspired by news titles and prominent visual themes, is pre-selected for each image. Custom software then scans each selected image pixel by pixel, from left to right and top to bottom. The Euclidean distance between each pixel's RGB value and the target color is calculated; if this distance falls within a defined threshold, the pixel is considered a match. To provide ongoing visual feedback, a marker glides over the image, indicating the scanner's live position. Simultaneously, a duplicate visual representation is generated beneath the original image by selectively drawing a colored point, in a dedicated space, for each pixel whose calculated distance to the target color falls within the threshold.
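The scanning logic amounts to a per-pixel Euclidean distance test. A minimal Python sketch of that test follows; it is an illustration only, and the threshold value and function names are assumptions rather than the artist's actual software:

    # Illustrative sketch: scan an image left-to-right, top-to-bottom and
    # flag pixels whose RGB value lies within a Euclidean distance
    # threshold of a pre-selected target color.
    import math
    from PIL import Image

    def scan_matches(path, target, threshold=60.0):
        """Yield (x, y, distance) for pixels matching `target` (r, g, b)."""
        img = Image.open(path).convert("RGB")
        width, height = img.size
        pixels = img.load()
        for y in range(height):          # top to bottom
            for x in range(width):       # left to right
                r, g, b = pixels[x, y]
                d = math.sqrt((r - target[0]) ** 2 +
                              (g - target[1]) ** 2 +
                              (b - target[2]) ** 2)
                if d <= threshold:       # within threshold: a match
                    yield x, y, d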
The sonic core of the project is built upon formant synthesis, chosen to capture the spirit of vocal determination inherent in acts of resistance and to evoke the human voice, creating a soundscape that forms a chorus of resilience. The data derived from the scan, specifically the Euclidean distance of a pixel’s color to the target color, directly influences parameters such as the amplitude and formant change of specific voices within a multi-part vocal composition. To maintain the prominence of these voices and symbolize unyielding persistence, pixels that deviate significantly from the target color trigger a sequenced kick drum sound. This interplay between the presence of matching colors (represented by vocal sounds) and their perceived absence (represented by the rhythmic pulse) forms a narrative chorus. Furthermore, for certain images depicting isolated figures of resistance, such as “Women in Red” or “The Girl In The Blue Bra,” a delay effect is implemented, allowing the data from these distinct images to influence each other’s delay parameters, sonically weaving a web of solidarity.
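Read as a mapping problem, the interplay between matching and non-matching pixels might be sketched as follows; the parameter names and scalings are hypothetical, chosen only to illustrate the distance-to-parameter idea:

    # Hypothetical mapping: one pixel's color distance becomes control
    # data for a formant voice, or a kick trigger when the color is far
    # from the target.
    def sonify_pixel(distance, threshold=60.0):
        if distance <= threshold:
            # Matching pixels drive a vocal formant voice: nearer colors
            # are louder, and the formant offset scales with distance.
            return {"voice_amp": 1.0 - distance / threshold,
                    "formant_shift": distance / threshold}
        # Pixels far from the target trigger the sequenced kick instead.
        return {"kick": True}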
Resilience in Color is adaptable in its presentation, designed to be experienced over varying durations that invite audiences to come and go freely, engaging as they wish. The adjustable nature of the scanning speed allows for creating temporal dynamics between the images, potentially condensing or expanding the experience from desired minutes to days.
It premiered as a 5-channel, 8-hour-long installation at ICAD (International Conference on Auditory Display) 2024 in Studio 2 at EMPAC (Experimental Media and Performing Arts Center), Rensselaer Polytechnic Institute, Troy, NY [6]. It was also installed as a stereo, 24-hour-long installation at SEAMUS (Society for Electro-Acoustic Music in the United States) 2025 in Acting Studio I, Yue-Kong Pao Hall, Purdue University, West Lafayette, IN. At ICMC (International Computer Music Conference) 2025, a 15-minute fixed media installation will be presented in the Bright Family Screening Room at Emerson College, Boston, MA. In this shorter version, the scanning speed is adjusted, ranging from 10 to 120 milliseconds per pixel, allowing for temporal dynamics that condense or expand the experience. This variability creates interesting sonic textures, from subtle formant changes at slower speeds to faster transitions at higher speeds, offering a condensed yet immersive experience of the work's essence.
This work features a curated selection of five images, each capturing a moment of women actively resisting political oppression around the world. These images serve as the source material for the sonification. The images used in the project, from left to right:
- Women in Red (Turkey)
Photo Credit: Osman Orsal/Reuters [2]
The image, captured in Istanbul, Turkey during a moment of violence at the 2013 Gezi Park protests, shows a woman in a red dress protesting as a masked policeman discharges tear gas directly at her. The Gezi Park protests lasted from 28 May to 20 August 2013.
- The Girl In The Blue Bra (Egypt)
Photo Credit: Reuters [5]
The image, captured in Cairo's Tahrir Square in Egypt during a moment of violence in 2011, shows a woman being dragged and beaten by the Egyptian military. On the cusp of a soldier's kick, her starkly exposed torso is clad only in a bright-blue bra.
- Green Scarf Movement (Argentina)
Photo Credit: Brandon Bell/Getty Images [4]
The image, captured in Argentina during reproductive rights protests, shows a group of activists holding triangle-shaped green scarves above their heads as they stand in solidarity.
- Pink PussyHats (USA)
Photo Credit: Carolyn Cole/Los Angeles Times/TNS [3]
The image, captured in the USA during the 2018 Women's March, shows activists wearing pink beanies, also known as Pink PussyHats. The protest focused on voter registration and electing women and progressive candidates.
- Yellow Scarves (India)
Photo Credit: Danish Siddiqui/Reuters [1]
The image, captured in India on International Women’s Day in 2021, shows women activists clad in bright yellow scarves, which are symbolic of mustard fields, joining mass sit-ins and hunger strikes on the outskirts of New Delhi. Their protest rallied against agricultural reforms that they feared would compromise their livelihoods.
Zeynep Özcan

Dr. Zeynep Özcan is an experimental and electronic music composer, sound artist and performer. She holds a Ph.D. in Music from Istanbul Technical University, an M.A. in History of Architecture, and a B.A. in Philosophy from Middle East Technical University. She explores biologically inspired musical creativity, interactive and immersive environments, and generative systems. Her works have been performed and presented throughout the world in concerts, exhibitions, and conferences, such as AES, ICAD, ICMC, ISEA, NIME, NYCEMF, WAC and ZKM. She is an Assistant Professor and a faculty director of GiMaT (Girls in Music and Technology) Summer Institute at the Department of Performing Arts Technology at the University of Michigan, School of Music, Theatre & Dance. She specializes in the implementation of software systems for music-making, audio programming, sonification, sensors and microcontrollers, novel interfaces for musical expression, and large-scale interactive installations. Her research interests are exploring bio-inspired creativity, artistic sonification, playfulness and failure in performance, and activist art. She is passionate about community building and creating multicultural and collaborative learning experiences for students through technology-driven creativity.
For many years, I have been deeply inspired by the extraordinary courage of women who stand at the forefront of global sociopolitical change. Attending the Gezi Park protests in 2013 profoundly affected me, particularly the visual symbolism of resistance embodied by the iconic 'Lady in Red'. My work, Resilience in Color: Sonic Portrayals of Women's Resistance, emerged from a personal desire to translate the unwavering spirit of these women, who face political repression and human rights violations, into a tangible and resonant experience. Through this project, I aim to transform their visual power of resistance into an immersive soundscape that amplifies their voices and narratives. The intention of my work is not to replicate the trauma captured in these images but to celebrate the strength and indomitable spirit of these women. The sonification process deliberately avoids jarring or abrupt sounds, focusing instead on creating textures that convey a sense of unified and continuous resistance. The profoundly motivating feedback I have received from individuals, particularly those from the represented countries who were moved by this portrayal, encourages me to expand the project. I am inspired to incorporate more images from other countries, where color serves as a symbol of protest beyond the five initially used. Resilience in Color seeks to capture and reflect this collective emotional energy, offering a sonic testament to the power of women's resistance. It contributes to a broader dialogue on activist art through the use of color data mapping.
ID 98
ludus vocalis
ludus vocalis (2024) is a 25-minute multimedia work for eight-channel audio and 4K video that explores and reimagines paralinguistic vocal sounds—laughing, crying, screaming, gasping, and moaning—as musical objects. These sounds carry unique emotional and largely cross-cultural semiotic qualities, prompting me to explore how their acoustic and affective properties could be harnessed in a musical context. As such, ludus vocalis is divided into sixteen short vignettes or movements, each focusing on a specific type of nonverbal vocalization. The source material was drawn from a moderately sized collection of vocal samples, manually curated and gathered from various online platforms, including Epidemic Sound (https://www.epidemicsound.com/), Splice (https://splice.com/), and, to a lesser extent, FreeSound (https://freesound.org/). These samples—586 audio files in WAV format, with an approximate total duration of 40 minutes—were organized into distinct paralinguistic categories, from which smaller, more focused audio corpora were created. Each corpus employed different segmentation strategies and audio analyses to facilitate deeper exploration of the source material. The video component was developed using AI-generated video, primarily created via text prompts with RunwayML (https://runwayml.com/), as raw material. Broadly speaking, all generated videos were intended to represent mouths enacting the kinds of vocalizations each movement is based on, and the "failure" of the generative model to adhere to certain prompts was often embraced.
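As a rough illustration of how such corpora can be built, the sketch below segments a file at detected onsets; this is an assumed workflow using an off-the-shelf analysis library, not the composer's actual tooling:

    # Assumed corpus-building sketch: cut each WAV at detected onsets so
    # that every paralinguistic category yields a bank of short gestures.
    import librosa

    def segment_file(path):
        y, sr = librosa.load(path, sr=None, mono=True)
        onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")
        bounds = list(onsets) + [len(y)]
        # Pair consecutive onset positions into (start, end) slices.
        return [y[s:e] for s, e in zip(bounds[:-1], bounds[1:]) if e > s]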
Felipe Tovar-Henao

Felipe Tovar-Henao is a US-based multimedia artist, developer, and researcher whose work explores computer algorithms as expressive tools for human and post-human creativity, cognition, and pedagogy. This has led him to work on a wide variety of projects involving digital instrument design, software development, immersive art installations, generative audiovisual algorithms, machine learning, music information retrieval, human-computer interaction, and more. His music is often motivated by and rooted in transformative experiences with technology, philosophy, and cinema, and it frequently focuses on exploring human perception, memory, and recognition. As a composer, he has been featured at a variety of international festivals and conferences, including TIME:SPANS, the International Computer Music Conference, the Mizzou International Composers Festival, the Ravinia Festival, the New York City Electroacoustic Music Festival, WOCMAT (Taiwan), CAMPGround, the Electroacoustic Barn Dance, CLICK Fest, the SCI National Conference, the SEAMUS National Conference, the Seoul International Computer Music Festival, CEMICircles, IRCAM’s CIEE Summer Contemporary Music Creation + Critique Program and ManiFeste Academy, Electronic Music Midwest, and the Midwest Composer Symposium. He has also been the recipient of artistic awards and distinctions, including the SCI/ASCAP Student Commission Award and the ASCAP Foundation Morton Gould Young Composer Award. His music has been performed by international artists and ensembles such as Alarm Will Sound, the Grossman Ensemble, Quatuor Diotima, the Contemporary Art Music Project, the New Downbeat Collective, NEXUS Chamber Music, Sound Icon, the IU New Music Ensemble, AURA Contemporary Ensemble, Hear no Evil, Sociedad de Música de Cámara de Bogotá, Ensamble Periscopio, Andrés Orozco-Estrada, and the Orquesta Sinfónica EAFIT, among many others. He has held research and teaching positions at various institutions, including the 2023/25 Charles H. Turner Postdoctoral Fellowship in Music Composition at the University of Cincinnati College-Conservatory of Music, the 2021/22 CCCC Postdoctoral Researcher at the University of Chicago, Lecturer in Music Theory and Composition at Universidad EAFIT, as well as Associate Instructor and Coordinator of the IU JSoM Composition Department. He was recently appointed as Assistant Professor of AI and Composition at the University of Florida.
ID 201
The Right Not (To Be)
The Right Not (To Be) features the voices of members of Breaking the Silence, an Israeli anti-occupation activist organization whose work is focused on collecting and spreading testimonies of veterans of the Israel Defense Forces whose mandatory military service took place in the occupied territories of the West Bank and Gaza. In this work, I explore the complex relationship between belonging and the historical narratives (or myths) that form a collective identity, and highlight the difficulty of cultivating a political morality opposed to the status quo imposed by one's state. The piece engages with questions about identity, the need to define good and bad, and the tendency to stop listening to others when their opinions or life experiences do not align perfectly with common and popular rhetoric and conduct.
Sonically, my engagement with speech and oral sounds takes on multiple forms, ranging from the setting of full spoken sentences, through electronic manipulation that emphasizes, distorts, and amplifies the relation between vocalizing and wording, and the creation of rhythmic and timbral textures out of the artifacts of speech; sounds produced by the mouth that don’t amount to the production of legible language, but are nonetheless parts of speech’s meaning-making process.
My conversation with Breaking the Silence began almost three years ago now, and I finally got to record some of their members in June of 2023. The conversations did not touch on their military service directly, but on the lived experience in Israel-Palestine as it is perceived through Leftist Israeli eyes.
Needless to say, the reality of the region has shifted between June 2023 and now, and as a result, the piece resonates in different tones as well. Although the piece does not address the ongoing war in Gaza, it would be reckless to assume its perception can be dissociated from the current context of the region. The war has captured the interest of the international community, and yet there have been limited attempts since October 7th, 2023 to comment on or engage with it, or with the lived realities of Jewish Israelis, Israeli Arabs, and Palestinians in the region, through an artistic medium. In this saturated moment, The Right Not (To Be) offers invaluable insights into the possibility of peace. The members of Breaking the Silence with whom I talked, and whose voices are heard in the piece, share a commitment to a peaceful future and a bettered, equal society. I hear a rare sense of hope in their voices, which I was committed to doing justice to in my compositional process.
Lee Gilboa
Lee Gilboa is a US-based Israeli composer, researcher, and audio engineer. Her creative work uses speech, audio spatialization, and vocal processing, and engages with themes such as sonic identity, representation, collectivity, and self-expression. These themes occupy her scholarly work as well. Her current research draws from sound studies, political theory, and Black studies. It examines the role that listening assumes in the socio-political sphere through a rigorous investigation of testifying voices. Since 2020, Lee has been a curator for the spatial audio concert series CT::SWaM, currently known as New Ear:: Spatial, where she collaborates with Daniel Neumann on the creation of listening spaces dedicated to multichannel audio works. Her work has been released by labels such as Contour Editions and Surface World, and featured internationally at festivals and venues such as Experimental Intermedia, Roulette Intermedium, The Immersion Room at NYU, The Cube at Virginia Tech, Ars Electronica Forum Wallis Festival, and NYCEMF, among others. She has participated in several master classes and artist residencies internationally, including the Atlantic Center for the Arts, The Honk Tweet, IRCAM Manifeste Academy, and Elektronmusikstudion. Her writing has been published in Resonance: The Journal of Sound and Culture, and her research has been awarded fellowships from the Jerusalem Institute of Contemporary Music and the Cogut Institute for the Humanities. Currently, she is completing a Ph.D. in Music and Multimedia Composition at Brown University and serves as an Assistant Professor of Electronic Production and Design at Berklee College of Music.
ID 483
Coming Together: Cityscapes
Given a database of over 1000 audio and video recordings of 93 cities from 43 countries worldwide – over 28 hours of video – agents communicate with one another and negotiate their way through the archive to arrive at a single clip. As with other “Coming Together” works, it is the negotiation that is of artistic interest: the movement from apparent chaos to unity.
Five loudspeakers present the initially random audio recordings, granulated using a bespoke granular system and heavily filtered through selected MFCC bands. Visually, a single screen displays five slices of the selected videos in greyscale, each with a separate hue highlighted, analogous to the equalised audio recordings. The videos are drawn from sources parallel to those of the audio recordings.
The work has three distinct phases:
Part One: Convergence
The agents attempt to arrive at the same clips. Each agent initially chooses a random clip from the database, which is shuffled with each run. Each audio file has been analysed and segmented based on timbral novelty; a segment is initially chosen, then time-stretched and heavily filtered. Because the audio is derived from the walks, the city sounds are stretched and amplified, resulting in a noisy environment. Agents select prominent MFCC bands in the recordings and "claim" these spectral regions – the number of bands an agent can claim depends upon how far the clip negotiation has progressed. The frequency bands are initially quite wide (and thus noisy); as the agents converge, the bands narrow and the audio texture becomes more chordal. The partnered video agent similarly applies a selective colour filter based on hue to its video. Each agent's audio/video clip location is displayed, its vertical placement indicating its relative position in the database; during convergence, these graphics begin to align horizontally. (A simplified sketch of this negotiation appears after Part Three.)
Part Two: Alignment
The agents attempt to align their starting points. To do so, they shorten their gestures, and as they do, more silence and black screen result. The continuous beat is displayed, with each agent's current beat selection shown in red. Edge detection is added to the agent video as it fades, slowly filling the blackness.
Part Three: Extension
The agents have converged on their audio/video clips and their onsets; a frame from this portion of the video briefly covers the screen, only to fade away while being processed. The agents attempt to expand their segments again. Often, they get out of alignment during the process, as their lengths will differ, and play through the negotiated "downbeat": this is not a bug, but an embraced feature. The cumulative audio of each agent's multiple MFCC bands is slowly replaced by a spectral freeze of this audio (mimicking the single video frame of the visuals) while the agent's video deteriorates.
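The negotiation dynamic of Part One can be caricatured in a few lines of Python. The sketch below is a toy model only: the real agents also claim MFCC bands and narrow their filters, and the median-based concession rule is an assumption made for illustration:

    # Toy model of convergence: each agent repeatedly concedes part of
    # the distance between its clip choice and the group median until
    # all agents agree on a single clip.
    import random
    import statistics

    def negotiate(num_agents=5, num_clips=1000, seed=None):
        rng = random.Random(seed)
        clips = [rng.randrange(num_clips) for _ in range(num_agents)]
        rounds = 0
        while len(set(clips)) > 1:
            target = int(statistics.median(clips))
            moved = []
            for c in clips:
                if c != target:
                    # Concede 25-75% of the distance, at least one step.
                    delta = round((target - c) * rng.uniform(0.25, 0.75))
                    c += delta or (1 if target > c else -1)
                moved.append(c)
            clips = moved
            rounds += 1
        return clips[0], rounds

Calling negotiate(seed=1) returns the agreed clip index together with the number of rounds the agents needed, i.e. how long the movement from apparent chaos to unity took.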
Arne Eigenfeldt, Simon Lysander Overstall
Arne Eigenfeldt is a composer of live electroacoustic music, and a researcher into intelligent generative music systems. His music has been performed around the world, and his collaborations range from Persian Tar masters to free improvisers to contemporary dance companies to musical robots. He has presented his research at major conferences and festivals, and published over 50 peer-reviewed papers on his research and collaborations. He is a professor of music and technology and an Associate Dean Academic at Simon Fraser University.
Simon Lysander Overstall is a computational media artist, and musician/composer from Vancouver, Canada. He develops works with generative, interactive, or performative elements. He is particularly interested in computational creativity in music, physics-based sound synthesis and performance in virtual environments, and biologically and ecologically inspired art and music systems. He has produced custom performance systems and interactive art installations that have been shown in Canada, the US, Europe, and China. He has also composed sound designs and music for dance, theatre, and installations. He has an MA in Sound in New Media at Aalto University in Helsinki, a BFA in Music Composition from the School for Contemporary Arts at Simon Fraser University, and an Associate in Music (Jazz) Diploma from Vancouver Island University.
ID 642
pOPPING bUBBLES oN jUPITER
A Max4Live visualizer patch, fed audio from a VCV Rack software modular session, was used to generate visuals from the audio peaks.
Dave O Mahony
Dave O Mahony is a PhD graduate of the University of Limerick, Ireland.
His compositions have been performed at the Sines & Squares Festival (Manchester, UK) in both 2014 and 2016, the Hilltown New Music Festival (Ireland), the Daghda Gravity & Grace Festival (Limerick, Ireland), the 2018 and 2019 Society for Electro-Acoustic Music in the United States conferences (Eugene, OR and Boston, MA), the 2018 New York City Electroacoustic Music Festival, the joint International Computer Music Conference (I.C.M.C.)/New York City Electroacoustic Music Festival 2019 event (New York, NY), the 2018 and 2019 Electroacoustic Barn Dance (Jacksonville, FL), the 2020, 2021 and 2022 Earth Day Art Model online festivals, the 2021 New Music Gathering online conference, the Radiophrenia online event (2022), and the 2020/21 I.C.M.A. conference.
He is a member of the Irish Sound Science and Technology Association (ISSTA), S.E.A.M.U.S., and the I.C.M.A., and has an interest in manipulating modular synthesizers with brainwaves. He holds a Doctorate in Composition in Music Technology, a BA (Hons) in English and New Media, and an MA (Hons) in Music Technology from the University of Limerick, Ireland.
ID 782
A Dialogue, In Medias Res
A Dialogue, In Medias Res presents a speech monologue that is algorithmically rearranged via a quicksort process—visually represented by short fragments (fractions of a second each) of video, first arranged randomly.
The conceptual foundation for this algorithmic approach draws direct inspiration from Brian Christian and Tom Griffiths' book "Algorithms to Live By: The Computer Science of Human Decisions" [1], which explores how computational algorithms can illuminate human decision-making and provide insights into navigating complexity in daily life. The book's examination of how sorting algorithms bring order to chaos particularly resonated as a powerful metaphor for human meaning-making processes and the passage of time—themes central to this installation.
The technical realization of this work involved several intricate processes. The initial fragmentation was achieved by analyzing the complete, unbroken video in Max/MSP using transient detection to automatically identify syllable onsets within the speech, creating natural linguistic break points. These algorithmically determined fragments were then subjected to a custom quicksort algorithm programmed in Python, which systematically reorders the randomized segments toward their original sequence.
The quicksort algorithm [2, 3]—a pivotal conceptual and technical element in this work—functions through a "divide and conquer" approach to sorting. Beginning with a set of disordered fragments, the algorithm selects a pivot element and partitions the remaining elements into two sub-arrays: those less than the pivot and those greater than it. This process recursively continues within each sub-array until the entire sequence is ordered. The algorithm's efficiency lies in its average-case complexity of O(n log n), making it significantly faster than simpler sorting methods for large datasets. In the context of this installation, the quicksort visually manifests as a gradual emergence of coherence from chaos—a computational metaphor for the human search for meaning and order.
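For reference, the sorting stage reduces to the classic recursion. The sketch below is a minimal Python quicksort over (original_index, clip) pairs, illustrative rather than a reproduction of the installation's own implementation:

    # Minimal quicksort: order shuffled fragments by original position.
    def quicksort(frags, key=lambda f: f[0]):
        # frags: list of (original_index, clip) pairs, initially shuffled.
        if len(frags) <= 1:
            return frags                       # base case: already ordered
        pivot, *rest = frags                   # choose a pivot element
        left = [f for f in rest if key(f) < key(pivot)]
        right = [f for f in rest if key(f) >= key(pivot)]
        # Order each partition recursively, then stitch around the pivot.
        return quicksort(left) + [pivot] + quicksort(right)

    # quicksort([(3, "c"), (1, "a"), (2, "b")])
    # -> [(1, "a"), (2, "b"), (3, "c")]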
During the disordered phase, video fragments play at four times their normal speed while maintaining the original speech pitch to avoid “chipmunk voice” distortion, creating a temporally compressed but vocally intelligible experience. This time-manipulation was carefully balanced through processing and combination in Adobe Premiere. The rapid juxtaposition of audio segments from different points in the timeline initially created unintended artifacts—clicks and pops from non-zero crossings—which were methodically addressed through audio restoration techniques in Adobe Audition to ensure sonic continuity despite the algorithmic disruption.
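The piece realized this speed/pitch decoupling in Max/MSP and Adobe tools; as a point of reference, the same effect can be approximated with an off-the-shelf phase vocoder (file names here are placeholders):

    # Hedged sketch: play four times faster while keeping the original
    # pitch, avoiding the "chipmunk voice" of naive speedup.
    import librosa
    import soundfile as sf

    y, sr = librosa.load("fragment.wav", sr=None)      # placeholder file
    fast = librosa.effects.time_stretch(y, rate=4.0)   # 4x speed, same pitch
    sf.write("fragment_4x.wav", fast, sr)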
As the algorithm progresses from disorder to order, the video undergoes a deliberate aging transformation. This effect was accomplished through a multi-stage process beginning with the extraction of a reference frame from the original video footage. This frame was then manipulated in Adobe Photoshop using Adobe's Firefly AI generation tools [4] to create an artificially aged version serving as a keyframe reference. The aged keyframe was subsequently processed through EbSynth [5], which intelligently mapped these aging modifications onto the original video's movement patterns, creating a fluid temporal transformation that unfolds in parallel with the algorithmic sorting.
Concurrently, the voice undergoes its own aging process through subtle quality modifications programmed in Max/MSP, sonically mirroring the visual transformation with carefully calibrated audio processing that imparts the auditory impression of time passing. In a final twist, the voice morphs into instrumental sounds through Orchidea's assisted orchestration process [6], which algorithmically generates a score that mirrors the emotional and structural transition of the piece. This work interrogates ideas of temporality, identity, and the transformation inherent in both human and computational processes, suggesting that order—even when achieved through random processes—can yield deeply human narratives.
Andrew A. Watts

Andrew A. Watts is a composer of chamber, symphonic, multimedia, and electro-acoustic works regularly performed throughout North America, South America, Europe, and Asia. His compositions have been premiered at world-renowned venues and cultural events such as Burning Man; Ravinia, the summer home of the Chicago Symphony Orchestra; Boston's celebrated Jordan Hall; The Kitchen in New York City; the Internationales Musikinstitut Darmstadt; and the Holywell Music Room, the oldest custom-built concert hall in Europe. He has written works for many of the top new music groups today including Ensemble Dal Niente, Ekmeles Vocal Ensemble, Proton Bern, Distractfold Ensemble, RAGE Thormbones, Splinter Reeds, Quince Vocal Ensemble, Line Upon Line Percussion, as well as members of the International Contemporary Ensemble, Elision Ensemble, Talea Ensemble, Schallfeld, and the Mivos Quartet. Recently, in October 2024, Watts premiered an open instrumentation quartet, A Strobe Fractures Obsidian Night, which utilizes AI generated video and multichannel audio.
He completed his doctorate at Stanford University, studying with Brian Ferneyhough, Chris Chafe, and Jaroslaw Kapuscinski. Watts received his master's with distinction from Oxford University and his bachelor's with academic honors from the New England Conservatory. Watts is currently on the Music Composition faculty at the University of California Santa Barbara's College of Creative Studies. At UCSB he is also affiliate faculty in the Mellichamp Initiative in Mind & Machine Intelligence. From 2017 to 2022, he co-taught the summer workshop "Algorithmic Composition with Max/MSP and OpenMusic" at the Center for Computer Research in Music and Acoustics (CCRMA).
ID 823
Temporary Structures
Temporary Structures are short works for piano and computer graphics that explore the connections between musical and visual lines. The pieces use algorithmic methods to seamlessly integrate musical and visual art, aiming for both fluidity and depth. In these two works, digital sculptural forms shift shape along coordinates defined by pitch, timing, and dynamics drawn from the musical gestures. This process creates constantly evolving visual structures that reflect the music, inspired by the philosophy and smooth forms of parametric architecture.
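One way to picture the mapping (a hypothetical reduction, not the composer's actual algorithm) is as a function from note events to 3D control points:

    # Hypothetical note-to-coordinate mapping: pitch, timing, and
    # dynamics each claim one spatial axis.
    def note_to_point(pitch, onset, velocity):
        x = onset                  # time unfolds along one axis
        y = (pitch - 21) / 87      # piano range A0-C8 normalized to 0..1
        z = velocity / 127         # dynamics as depth
        return (x, y, z)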
The premiere took place at the April in Santa Cruz Festival, where pianist Chia-Lin Yang performed alongside the accompanying computer graphics. The work has also been performed by pianist Eric Huebner at the Gassmann Electronic Music Series at UC Irvine, and by members of Dog Trio at the klub katarakt Festival for Experimental Music in Hamburg, Germany.
Beyond live performances, the work has been presented in various formats: as an installation featuring video and a synchronized Disklavier at the FFEMF Festival at the Crisp-Ellert Art Museum in Saint Augustine; as a wall-mounted video artwork at the Zeitgeist Gallery in Nashville and at the Performing Media Festival in South Bend, Indiana; and as a screening at WAVEFORMS, presented by the Museum of Science, Boston.
Matthew Schumaker

Matt Schumaker’s music engages with research into computer-assisted composition, interactive computer music with performers, and visual music. He received a doctorate in Music Composition from UC Berkeley (UCB), where he studied with Edmund Campion, David Wessel and others. Early on in his studies, Schumaker spent a formative year in Amsterdam, studying with Louis Andriessen. Later on, he spent a year in France through UCB’s Prix de Paris program, where he worked closely with Martin Matalon.
In recent years, Schumaker's music has been performed by the UC Berkeley Symphony, Radius Ensemble, Dinosaur Annex, Winsor Music, New Music Works, Eco Ensemble, and the Left Coast Chamber Ensemble. Schumaker's music has also been presented at festivals and curated events, including by clarinetist Rane Moore at the Virtual SICPP 2020, by pianist Chia-Lin Yang at the April in Santa Cruz Festival, by members of Dog Trio at the klub katarakt Festival for Experimental Music in Hamburg, Germany, by clarinetist Joshua Rubin at the soundSCAPE festival in Blonay, Switzerland, and by pianist Eric Huebner as part of the Gassmann Electronic Music Series at UC Irvine. Schumaker's multimedia work for music and computer graphics has been shown at Zeitgeist Gallery in Nashville and at the Crisp-Ellert Art Museum in Saint Augustine.
From 2015-17, Schumaker was a Lecturer at UCB, teaching courses in computer music and music perception and cognition. During 2018-20, he was a Martin Luther King, Jr. Visiting Scholar at MIT. In fall 2021, he joined the Music Department at UC Santa Cruz as assistant professor.
ID 909
Miniscope Multimedia Project
This multimedia work explores neural activity through the integration of neuroscience and art. By processing calcium imaging data from the hippocampus CA1 and retrosplenial cortex (RSC) of a mouse brain, the project transforms complex brain functions into dynamic audio-visual experiences.
Utilizing TouchDesigner, neural data is visualized in real time, with calcium traces influencing the movement, size, and shape of visual elements. The soundscape is similarly shaped by Max/MSP, where the same neural data informs musical composition, manipulating filters and dynamics to create an immersive auditory experience.
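A hedged sketch of such a data-to-parameter bridge appears below: calcium traces are normalized and streamed as control values over OSC, a protocol both TouchDesigner and Max/MSP can receive. The file name, port, addresses, and frame rate are all assumptions:

    # Assumed pipeline sketch: normalize calcium traces and stream
    # per-frame statistics as OSC control messages.
    import time
    import numpy as np
    from pythonosc.udp_client import SimpleUDPClient

    traces = np.load("ca_traces.npy")     # hypothetical (neurons x frames)
    norm = ((traces - traces.min(axis=1, keepdims=True)) /
            (np.ptp(traces, axis=1, keepdims=True) + 1e-9))

    client = SimpleUDPClient("127.0.0.1", 7400)   # assumed receiver port
    for frame in norm.T:                          # step through time
        client.send_message("/ca/mean", float(frame.mean()))  # e.g. filters
        client.send_message("/ca/peak", float(frame.max()))   # e.g. dynamics
        time.sleep(1 / 30)                        # assumed frame rate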
Additional musical layers are composed in Logic Pro, interacting continuously with the visuals to form a feedback loop that enhances the overall presentation. This work reveals the intricate relationships between brain activity, sound, and visual art, showcasing the potential of interdisciplinary collaboration in understanding and expressing the complexities of the brain.
Si Hyun Uhm
Si Hyun Uhm is a distinguished composer, pianist, and multimedia producer based in Los Angeles and South Korea. She has received commissions from esteemed institutions such as National Sawdust, New Music USA, the U.S. Air Force Academy Band, Yamaha, Rice University, the Columbia Digital Audio Festival, and others. Her talent has been celebrated through prestigious recognitions, including Composer Fellowships with the American Composers Orchestra and the Nashville Symphony Composer Lab, along with awards from organizations such as “The President’s Own” Marine Band and the Arts Council of Korea.
A true artistic polymath, SiHyun moves fluidly across genres like classical, electronic, pop, rock, and film/game music. Her innovative work has earned her numerous accolades, including an award at the Shanghai International Digital Music Festival.
Most recently, SiHyun was honored with UCLA’s Graduate Fellowship Grant, supporting her pioneering work using Miniscope—a tool for capturing freely behaving animal brain neuron data—to create immersive multimedia art that bridges science and the arts. Additionally, the Davise Grant has enabled her to compose a project focused on endangered animals, blending music and education to inspire conservation awareness.
SiHyun’s work has been published by the Korea Electro-Acoustic Music Society and has been presented at notable conferences, including the CEAM Conference, Moscow Electronic Music Conference, WSU Pullman, and MOXsonic. Her integration of cutting-edge scientific data into acousmatic music and multimedia installations continues to shape her unique artistic voice.
SiHyun completed her undergraduate degree at the Eastman School of Music, followed by a master’s in composition from The Juilliard School. Currently pursuing her Ph.D. in composition at UCLA, she continues to explore new artistic territories, making meaningful contributions to music and multimedia art. She also holds a diploma from the Walnut Hill School for the Arts in Massachusetts.
Installations: Gallery
The gallery hours for this week are: Wednesday 12-10pm, Thursday 12-6pm, Friday 10am-6pm, and Saturday 12-6pm. Gallery Opening: Wednesday 6-10pm.
Media Art Gallery
Emerson College, 25 Avery St.
ID 75
Contained
Contained is an ambisonic/haptic virtual reality (VR) and immersive dome-based project that approaches anthropocentric spaces as haunted acoustic architectures, non-places permeated with the sonic spectres of unseen forces. Correspondingly, it seeks to invert the traditional, visually-biased audio-visual hierarchy via a positioning of spatial and haptic audio as the central elements within an immersive cinematic experience, emphasizing sound’s ability to not only permeate space and surround the auditor, but also its capacity to penetrate, saturate and vibrate a listening body, forming an intense relational bond between self and environment, whether material or virtual.
This project evolves from and roots itself within an acoustemological approach: acoustemology is best described by Steven Feld as engaging “sound as a way of knowing” [2]. Contained explores how concepts rooted in the discipline might bear an impact not only on how immersive audiovisual experiences are created, but also on how they might enable a unique, profoundly embodied encounter.
Thematically, this project presents an auscultation of our Anthropocentric milieu, integrating field recordings, 360° camera footage and 3D scans of urban corporate towers, logistical networks, industrial areas and other non-places [1], as well as urban encampments and derelict locales that are resonant with both the heard and unheard acoustic emanations of the technotope we have become dependent upon for our survival. In doing so, it approaches sound as a material that can be apprehended as both corporeal and abstracted: in addition to the airborne, audible sound of the subject spaces, Contained integrates the electrical, vibrational and mnemonic emissions that permeate our everyday.
Michael Trommer

Michael Trommer is a Toronto-based sound and video artist; his practice has been focused primarily on psychogeographical and acoustemological explorations of anthropocentric space via the use of spatial and tactile sound, field recordings, VR, immersive installation and expanded cinema.
He has released material on an unusually diverse roster of labels, both under his own name as well as ‘sans soleil’. These include Transmat, Wave, Ultra-red, and/OAR, Audiobulb, Audio Gourmet, Gruenrekorder, Impulsive Habitat, Stasisfield, Serein, Flaming Pines, 3leaves, Unfathomless and con-v. His audiovisual installation work has been exhibited at Australia’s Liquid Architecture festival, Kunsthalle Schirn in Frankfurt, Cordoba’s art:tech, St. Petersburg’s Gamma Festival, and Köln’s soundLAB, among others.
Michael has performed extensively in North America, Europe and Asia, including events with members of Berlin’s raster-noton collective, as well as the 2008 and 2013 editions of Mutek’s acclaimed a/visions series. He also regularly improvises with Toronto-based AI audio-visual collective ‘i/o media’.
His sound design work encompasses composition, audio branding, installation and VR audio for clients such as Moment Factory, Intel and Yahoo, as well as soundtrack and production development for a variety of international cinema, dance and installation artists.
In addition to teaching graduate sound design and sound art at George Brown College, Michael also teaches Sound Film at Toronto Metropolitan University, Think Tank at OCAD University and Media Practice and Sonic Cinema at York University, where he is a recent PhD graduate and SSHRC Joseph-Armand Bombardier doctoral scholar in Cinema and Media Art.
ID 274
Foresta-Inclusive: (ex)tending towards
Foresta-Inclusive: (ex)tending towards is made up of two parts: 1) the Foresta-Inclusive sensing infrastructure installed in a forest and 2) the connected in-gallery immersive installation (ex)tending towards. The sensing infrastructure comprises three sculptural sensor pods that are battery powered and send live data to an Internet of Things (IoT) prototyping platform. The sculptural sensor pods can be installed unobtrusively on the trunks of trees in different forests and sense phenomena such as soil and air temperature/humidity, particulate matter (0.1 μm – 10 μm), light level, wind, volatile organic compounds (VOC), CO2, and rain. The in-gallery installation uses the real-time data and materializes it with visual, sonic and olfactory elements. The work is often shown during the winter, in which case a past recording from a local forest protected by the rare Charitable Reserve in Blair, ON, Canada, is used in lieu of live data. (ex)tending towards comprises two projected visualizations, sculptural elements, a four-channel sound system, and a touchless scent interface.
In response to the temporal difference between tree and human individuals, this work explores ways to slow down human engagement, and to make visible the daily experience of a tree. The aim of the work is to find ways to demonstrate the absolute liveliness of the natural world as it unfolds all around us – yet more often than not – beyond our limited sensory perception. The first visualization materializes data as a particle flow field that gently undulates and is affected in real time by changing data.
Inspired by tree rings as evidence of yearly experience, the visualization is structured in the same manner and visualizes the last 24 hours of the tree's life: the outer ring shows contemporary values and each subsequent smaller ring the values from the previous hour. To interact with this visualization, there is a one-meter-tall cork cylinder that is also a scent sculpture, releasing the scent of geosmin (the scent of a forest after it rains) every time it rains in the forest. The participant uses a simple gestural interaction to move spatially into the visualization; the slower one moves, the more closely each ring can be inspected. The interface is embedded in soil, which also contains a set of sculptural sensor pods. Next to this visualization is a point cloud of the tree at the rare Charitable Reserve, captured by a LIDAR scan of the forest using a very large drone and rendered using TouchDesigner. This point cloud is also affected in real time by live data. Like the visuals, the sonic elements materialize the forest data in a generative sound experience that balances between mimicry and poetic memory of forest experience. In its entirety, the installation creates an embodied exploratory space where the deep time of a tree's life is remembered and the human body is slowed down in its engagement.
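The ring mapping itself is simple to state. A hypothetical sketch (names and radius scaling assumed) that lays one sensor stream out as concentric rings, outermost ring newest:

    # Assumed layout sketch: the last 24 hourly readings become
    # concentric rings, with the outer ring holding the newest value.
    def ring_layout(hourly_values, max_radius=1.0):
        # hourly_values: 24 readings, oldest first.
        n = len(hourly_values)
        return [(max_radius * (i + 1) / n, v)      # newest -> radius 1.0
                for i, v in enumerate(hourly_values)]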
Jane Tingley, Hrysovalanti Fereniki Maheras
Jane Tingley is an artist, curator, director of the SLOlab (Systems | Life | Ontologies) and Associate Professor at York University. Her studio work combines traditional studio practice with new media tools – and spans responsive/interactive installation, performative robotics, and telematically connected distributed sculptures/installations. Her works are interdisciplinary in nature and explore the creation of spaces and experiences that push the boundaries between science and magic, interactivity and playfulness.
Her current work investigates the hidden complexity found in the natural world and explores the deep interconnections between human and non-human relationships. As a curator her interests lie at the intersection of art, science, and technology with a special focus on works that use new media tools to tell contemporary and socially compelling stories – both human and non. Curated exhibitions include Hedonistika (2014) at the Musée d'art contemporain (CA), INTERACTION (2016) and Agents for Change (2020) at the MUSEUM in Kitchener (CA), and more-than-human (2023) at Onsite Gallery (CA). As an artist she has participated in exhibitions/festivals in the Americas, the Middle East, Asia, and Europe – including translife – International Triennial of Media Art at the National Art Museum of China, Elektra Festival in Montréal (CA) and the Science Gallery in Bengaluru (IN). She received the Kenneth Finkelstein Prize in Sculpture, the first prize in iNTERFACES – Interactive Art Competition in Porto (PT), and has received support from a number of funding agencies including the Canada Council for the Arts, the Canadian Foundation for Innovation and the Social Sciences and Humanities Research Council of Canada.
Hrysovalanti Fereniki Maheras, also known as Hryso, is a computational art practitioner specializing in generative audiovisual art simulations and electronic kinetic art. She collaborated on the sound design of this project. Currently a Ph.D. candidate in Computational Arts at York University, she also serves as a studio instructor for audiovisual arts. Hryso's artistic exploration involves seamlessly traversing between virtual and physical technological realms, aiming to create art that investigates the emergence of a virtual analog environment within a shared, intricate physical habitat. In addition to her academic and instructional roles, Hryso actively collaborates with fellow digital artists, holding a special connection with her Endemics Collective collaborators, with whom she predominantly engages in live coding performances. She has showcased her work through recent performances and art installations at prominent venues, including the International Conference for Live Coding (ICLC), International Conference for Generative Art (ICGA), Congress YorkU, Exit Points Array Space, Nuit Blanche, Trinity Square Video Art Gallery and New Adventures In Sound (NAISA) Art Gallery.
ID 289
Liminal
Liminal is a single-user, AI-assisted interactive audiovisual installation that merges gesture recognition, real-time sound control, and generative visual design grounded in classical Chinese aesthetics. The work draws upon the poetic motif of the "Peach Blossom Spring" as a metaphor for immersive escape and cultural reflection, reimagined here through contemporary computational media.
At the heart of Liminal is a custom gesture-mapping system driven by computer vision. A high-frame-rate camera captures the participant's hand and body movements in real time. The left hand triggers and modulates sounds derived from traditional Chinese percussion and xiao (bamboo flute), while the right hand controls timbral and articulatory variations of guqin-like textures, such as harmonics and arpeggios. Each gesture dynamically alters parameters such as pitch, articulation, layering, and spatialization, allowing the participant to sculpt a continuously evolving soundscape through motion alone. These movements simultaneously influence a custom 3D particle system, generating visuals reminiscent of dynamic ink wash paintings; the result is an evolving visual environment that fuses digital abstraction with references to natural landscapes and lacquered ornamentation.
Designed for one-on-one interaction, the installation maintains focus and clarity by tracking a single participant at a time. This enables highly responsive audio-visual interplay and encourages an intimate, reflective mode of engagement. Each interaction becomes a unique and ephemeral composition, situated at the intersection of body, machine, and cultural resonance. Liminal invites participants into a space where gestures function as both input and authorship, transforming embodied presence into real-time audiovisual expression. It offers a sensory environment where memory, movement, and technology coalesce—evoking a digitally mediated encounter with introspection and transformation.
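The gesture pipeline described above can be approximated with off-the-shelf tools. The sketch below tracks both hands with a webcam and forwards wrist positions over OSC; the installation's actual system is not public, so the landmark choice, port, and addresses are all assumptions:

    # Hedged sketch: left/right-hand tracking mapped to OSC control data.
    import cv2
    import mediapipe as mp
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 9000)        # assumed synth port
    hands = mp.solutions.hands.Hands(max_num_hands=2)
    cap = cv2.VideoCapture(0)

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.multi_hand_landmarks:
            for lm, handed in zip(result.multi_hand_landmarks,
                                  result.multi_handedness):
                wrist = lm.landmark[0]                 # wrist, 0..1 range
                side = handed.classification[0].label  # "Left" or "Right"
                # Left hand -> percussion/xiao; right hand -> guqin timbre.
                client.send_message(f"/{side.lower()}/xy",
                                    [wrist.x, wrist.y])
    cap.release()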
Zhitao Lin

Zhitao Lin is a forward-thinking composer whose work bridges traditional Chinese aesthetics, spectral music, and cutting-edge technology. Currently a Doctor of Musical Arts (DMA) candidate in Composition at the Peabody Institute of the Johns Hopkins University, he also holds a Master’s degree in Composition from Peabody and a Bachelor’s degree in Music from the University of California, Berkeley. His research focuses on the intersection of artificial intelligence and music composition, exploring new possibilities in sound art through deep technological integration. Lin’s work spans chamber music, orchestral compositions, opera, electronic music, and multimedia sound installations, earning recognition for its fusion of cultural depth and technological innovation. By blending Chinese musical traditions with spectral techniques and AI-driven creativity, he crafts a sonic world that is both avant-garde and deeply evocative. Influenced by Zen philosophy, his compositions often evoke a surreal, mystical quality, transforming abstract musical narratives into immersive experiences. His practice continues to explore new human-machine collaborations that expand the boundaries of musical expression.
ID 438
24 Cards
"24 Cards" is a computer music composition for any number of participants and any duration. It is presented as a stack of interactive cards. Each card details a musical cell of the composition; on the reverse side, a QR code provides access to all other components of the piece. The cards represent individual elements of the composition as graphic depictions of software patches. Visitors are encouraged to take a card, and the set is replenished daily, ensuring the piece evolves with audience interaction. Whether explored individually or collaboratively, 24 Cards invites participants to engage with its modular structure and reimagine its possibilities.
This playful work seeks to escape the confines of a piece of music composed for and with a computer. This is partly done by conceiving of patcher programming software as 'score-instruments': graphically reproducible computer programs for sound synthesis whose form contains sequential memory for execution, including chance-based operations. These musical cells can be distributed, recreated, repurposed, and reused from the cards on which they are presented. The result is an emergent composition for any number of participants and any duration. The piece's form is an ecosystem of sound, with each cell contributing an element of sound design within it. The work was partly inspired by the motions of conferences, where attendees exchange cards and connect using their devices. What if this activity formed the mechanism of a composition?
William Turner-Duffin
Will is a musician, producer, and instrument builder based in Bristol, UK. His practice explores instruments as embodied models of music theory, embedding decision paths into their ergonomic forms and creating musical systems that can be activated, interacted with, or set in motion. Recent works include an aleatoric drum machine, which invites active collaboration with chance operations, and 30 Days of Score-Instruments, a month-long project that defined and explored the term through practice-led research. Will is perhaps best known for producing the first two releases by The Irrepressibles, including the widely recognised track In This Shirt. A lifelong metalhead, he also continues to release music through various imprints. He holds a BA in Creative Music Technology from Bath Spa University and an MA in Sonic Arts from Middlesex University. He is currently undertaking a PhD in the School of Design at Bath Spa University.
ID 508
Rings... Through Rings
Rings... Through Rings reimagines Hong Kong's military cartographic heritage through an innovative sound art installation. Utilizing historical maps as the foundation, the project transforms archival geographical data into laser-etched vinyl discs, serving as both physical artifacts and sound-generating devices. This artistic reinterpretation emphasizes three significant sites: No. 55 Ha Pak Nai, Tung Chung, and Lau Tau at the Lantau Island Forts, symbolizing the intersection of natural landscapes and military infrastructure [2].
The installation features an integrated system composed of three key elements. At its core are custom laser-etched vinyl discs, crafted from PVC and embedded with encoded geographical data. When these discs are played on specially modified turntables, their physical grooves, shaped by both data and material properties, generate distinctive sonic textures. These turntable stations, three in total, are enhanced with real-time processing capabilities, inviting participants to actively manipulate and shape the sound output through direct interaction. Complementing this tactile experience is a TouchDesigner-based audiovisual system, which responds to turntable gestures by rendering dynamic visual projections and immersive spatialized soundscapes [1]. Together, these components form a responsive and exploratory environment that bridges data, sound, and performance.
Participants physically engage with the turntables, altering rotation speeds and playback direction, which in turn influences real-time audio processing and projection mapping. This interactive feedback loop creates a multisensory experience, bridging past and present through tangible sonic exploration. The work demonstrates technology's capacity to preserve and reinterpret cultural heritage through sonic immersion and interactive design. By mapping cartographic data onto sound, Rings... Through Rings not only explores historical memory but also invites reflection on the evolving relationship between landscapes and human intervention.
Tak-Cheung Hui, Xiaoqiao Li, Chun Ting Vincent Chan
HUI Tak-Cheung, a Hong Kong-born composer, creates works spanning chamber and orchestral music, electronic pieces, sound installations, and interdisciplinary projects. His multidisciplinary approach integrates immersive audio, spatial sound, and advanced music technologies to reconstruct soundscapes and tell stories across eras and cultures. His recent interdisciplinary works have been presented at the C-LAB Sound Festival 2022, the Taiwan Pavilion at the 2023 Venice Architecture Biennale, and the Yamaguchi Center for Arts and Media. His orchestral work was featured at the Hong Kong Arts Festival. In 2023, he also served as composer-in-residence at the EstOvest Festival 2023 in Turin. Hui has been recognized with numerous international prizes, including the 38th Irino Prize, Chaosflöte Commission Competition 2019, Flex Ensemble Commission Competition 2017, Leibniz Harmonien International Composition Competition 2016, ACC International Composition Competition 2016, and 2014 Atlas Ensemble Composer’s Competition. His compositions have been showcased at various festivals, including the Huddersfield Contemporary Music Festival, Gaudeamus Muziekweek, ManiFeste Festival, ISCM Taipei New Music Festival, and Goethe Institut Asian Composers’ Showcase. Hui received his DMA from Boston University. During his academic years, he was awarded numerous grants and fellowships, such as the Boston University Center of New Music fellowship, Kahn Career Entry Award, and Design Trust Seed Grant. In 2017-18, he furthered his studies at IRCAM Cursus, supported by Boston University Graduate Research Grants. He is currently a faculty member at the Hong Kong Metropolitan University School of Arts and Social Sciences.
Xiaoqiao Li is an artist, academic, and researcher whose work examines the intersection of analogue and digital imprints, particularly in analysing digital print matrices. Li’s practice-based approach sheds light on the complexities of printmaking in the digital era by investigating how digital imaging information is captured, retained, lost, and transmitted. Li holds a BA in Visual Arts from Macao Polytechnic University, an MA in Visual Arts: Printmaking from Camberwell College of Arts, University of the Arts London, and a PhD from the Academy of Visual Arts, Hong Kong Baptist University, supported by the Hong Kong PhD Fellowship Scheme. His PhD thesis was selected by the Leonardo Graduate Abstracts (LGA) Peer Review Committee as a top-rated LABS Abstract for advanced research in Art and Science, published by Leonardo (MIT Press Journals). Li’s work has been exhibited internationally, earning accolades such as the Clifford Chance Purchase Prize (UK) and the Chinese Young Artists’ Work Award at the Beijing International Art Biennale. Beyond his studio practice, Li actively contributes to academia through presentations at conferences and articles published in the IMPACT Printmaking Journal and Leonardo (MIT Press), fostering dialogue among artists and scholars in both traditional printmaking and digital art. Li hopes that his continuing efforts will contribute to evolving discussions in the field.
Chun Ting Vincent Chan is a music technologist, sound designer, electronic musician and educator based in Hong Kong. From sound design for media to experimental sound works, Vincent Chan is a multi-faceted sound and music expert, wearing many hats over the years. He is the founder of Wide Open Sound Effects, a company dedicated to creating high-quality sound effect libraries for sound designers worldwide. His previous freelance work also involved premium brands such as Ferrari and Maserati. His recent sound design work includes Proof As If Proof Needed (2025) with Blast Theory UK. Vincent holds an MSc in Music Technology from Staffordshire University and a BA in Music Technology from Keele University in the UK. Currently, he serves as an Assistant Lecturer at Hong Kong Metropolitan University, where he teaches courses in media production, sound design, and computer music. His academic contributions also extend to research, playing a role in the FDS-funded project Echoes of the Past (2025), which explores historical soundscapes in Hong Kong. Vincent also composes electronic music under the artist name, Altz. His music release can be found in the back catalogue of Maker Records, Singapore.
724
ID 724
Summerland
Summerland is an installation for an array of 24 antique telegraph sounders. These Morse code receivers, which mechanically produce tapping sounds, were the dominant form of long-distance communication for most of a century, requiring a skilled listener to translate the taps — sonic representations of dots and dashes — into language. These devices are arguably the first digital-to-analog converters, turning binary information into an audio stream of rhythmically modulated clicks. The sounder, by its use of fluctuating voltages to create sound, is also the primogenitor of the audio speaker, the telephone, and most modern sound reproduction technology.
Invisible action at a distance – what we now call telematics – has always been a hallmark of the supernatural. Once the telegraph appeared, making instantaneous long-distance communication possible, it did not seem unreasonable to many that a ‘spiritual telegraph’, as it was often called, could use other invisible but verifiable forces to communicate with the world beyond [1]. From its beginning, information technology has always been entangled with myth and dreams of transcendence.
Summerland recuperates this most ancient of electric sound producers, using modern, computer-controlled forms of digital transmission to reanimate this prototypical apparatus, so it can speak once again. It uses speech and text from two major figures in 19th-century communications as source material which drives the sounders, each speaking in their own particular idiom. One of those voices is that of Samuel Morse himself, the inventor of the electromagnetic telegraph. Excerpts from his writings and letters are translated into the dot-dash code that bears his name – a cipher once understood by many but now on the verge of extinction. The words and letters of the texts, while holding true to the rhythmic structures of Morse code, are often broken up, distributed among multiple sounders, and subjected to accelerations and decelerations based on a rule set of first- and second-order Markov chains. The other is that of the medium Kate Fox, whose experiences of ghostly ‘rappings’ in her house a few years after the invention of the telegraph led to the founding of Modern Spiritualism, and the craze for communicating with the dead that has lasted into the present day [2]. Rather than focusing on the glyphic character of written language as I did for Morse, I made recordings of transcripts of her séances, and inspired by the (more-or-less failed) early telephony experiments of J.P. Reis, which used rapid series of clicks in an attempt to recreate the human voice [3], I tried to do the same with my choir of sounders. I subjected these recordings to a number of different forms of spectral analysis, and developed several mapping strategies to somehow (at least subjectively) recreate something vaguely vocal. Given the mechanical limitations of the sounders, I knew this entire approach was somewhat absurd, but perhaps no more so than communication with the dead.
This conversation is created by a generative algorithm which uses complex decision-making structures to fashion a never-repeating dialog, in which layers of utterances, reproduced as taps and clicks, retain the character of language while remaining forever out of reach. In the context of contemporary magical thinking about media, Summerland looks back at the archaeology of communications, a séance which seizes from the ether the dead voices of Morse and Fox, materializing their words in streams of clicks through a medium — the sounder — no longer able to articulate. Thus the promise, and ultimate failure, of communication with the past is built into the very attempt to make the sounders speak. However, the psychic and electromagnetic forces we can summon can still, in the act of materialization, evoke the dimly-seen ghost, the unnerving rap of unseen knuckles on the medium’s table.
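For readers curious about the pacing machinery mentioned above, here is a minimal sketch of Morse translation whose tempo is warped by a first-order Markov chain. The states, transition weights, and base dot length are invented for demonstration and are not the artist’s rule set.

```python
# Illustrative sketch only: text becomes Morse click timings, and a first-order
# Markov chain over tempo states stretches or compresses the unit duration.
# States, transition weights, and the base dot length are invented here.
import random

MORSE = {"s": "...", "o": "---", "m": "--", "r": ".-.", "e": "."}  # excerpt

STATES = [0.5, 1.0, 2.0]                       # tempo multipliers of the dot length
TRANS = {0.5: [0.2, 0.6, 0.2],                 # P(next state | current state)
         1.0: [0.3, 0.4, 0.3],
         2.0: [0.2, 0.6, 0.2]}

def morse_events(text, dot=0.08):
    """Yield (on, off) durations in seconds for each dot or dash."""
    tempo = 1.0
    for ch in text.lower():
        code = MORSE.get(ch, "")
        for i, symbol in enumerate(code):
            on = dot * (1 if symbol == "." else 3) * tempo
            off = dot * (3 if i == len(code) - 1 else 1) * tempo  # letter vs element gap
            yield on, off
            tempo = random.choices(STATES, TRANS[tempo])[0]       # drift the tempo

for on, off in morse_events("morse"):
    print(f"click {on * 1000:.0f} ms, gap {off * 1000:.0f} ms")
```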
Matthew Ostrowski

A New York City native, Matthew Ostrowski is a composer, performer, and installation artist. Using digital tools and formalist techniques to engage with quotidian materials — sonic, physical, and cultural — Ostrowski explores the liminal space between the virtual and phenomenological worlds. Engaged with tropes of interruption and flux, his works function as environments in a constant state of change, exploring the process of consciousness in its constant state of collision with the world.
Educated at Oberlin College and the Institute of Sonology in The Hague, Ostrowski creates live digital solo and ensemble improvisation, multichannel fixed-media electronic compositions, and algorithmically-generated installation pieces for video, multichannel sound, and robotically-controlled objects. His productions and performances have been seen or performed on six continents, including the Wien Modern Festival, Transmediale and Maerz Musik in Berlin, the Kraków Audio Art Festival, Sonic Acts in Amsterdam, PS1 and The Kitchen in New York, the Rencontres Internationales video festival in Madrid, Unyazi in Johannesburg, and Yokohama’s dis_locate festival. He has received numerous grants and awards, including a NYFA Fellowship for Computer Arts, and his essays have been published in the Performance Art Journal and Leonardo. He currently works as an independent programmer and researcher, and holds teaching positions at New York University and Ramapo College of New Jersey.
874
ID 874
the ground beneath our feet, the air inside our lungs
the ground beneath our feet, the air inside our lungs grew from a desire to extend my work with infrasound into environments where the high sound pressure levels needed to create simultaneous tactile and auditory sensation with loudspeakers would be impractical. Since infrasonic energy propagates much more readily in solids than in the air, designing a system that could present ultra-low frequency compositions mechanically seemed to be a fruitful path.
This led to the creation of the “haptic hammock,” a hammock chair with Dayton Audio TT25-8 tactile transducers mounted on each side of the support frame. The hammock is supported far from the fulcrum at the frame’s legs and the transducers are mounted close to the hammock hooks, resulting in an efficient transfer of energy from the transducers to a person seated in the chair. While very low frequency energy is strongly presented, the elasticity of the hammock’s support webbing damps higher frequency energy, softening transients that have the potential to be physically uncomfortable.
The hammock chair provides a cocoon-like micro-environment for the person seated in it. I found this to be a very pleasurable, safe place and I wanted to create a shared experience for visitors while still preserving this sense of security. To do this, I have developed a communication network based on the Seeed Studio MR60BHA2 mmWave contactless breathing and heartbeat detection sensor. The detected state of each visitor directs an instance of the composition, which is relayed to the transducers on their own and on other visitors’ hammocks via a Max/MSP patch. This system allows for flows of information between visitors without breaking the visual and acoustic privacy afforded by the hammock.
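The network itself is built in Max/MSP; the sketch below covers only a hypothetical relay layer, assuming the sensor’s readings arrive over a serial port and are forwarded as OSC to a patch listening with [udpreceive 8000]. The port name, the OSC addresses, and the stubbed parser are assumptions.

```python
# Sketch of the relay layer only: forward breathing and heart rates as OSC to a
# Max/MSP patch listening with [udpreceive 8000]. The serial port name, the OSC
# addresses, and the stubbed parse_frame() are assumptions; the MR60BHA2's real
# frame format is documented by Seeed and omitted here.
import serial                                  # pyserial
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 8000)
port = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)

def parse_frame(raw: bytes):
    """Stand-in parser: return (breaths_per_min, beats_per_min) or None."""
    ...

while True:
    reading = parse_frame(port.readline())
    if reading:
        breath, heart = reading
        client.send_message("/visitor/1/breath", breath)
        client.send_message("/visitor/1/heart", heart)
```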
While this system allows nonverbal communication between the visitors seated in the hammocks, it does not include anyone else in the environment. Additionally, there will be times when there is only one visitor, and something is needed to move their experience forward. Both concerns have been addressed via the inclusion of a seismic accelerometer in the composition’s control system. The accelerometer is sensitive enough to detect activity in the immediate vicinity of the device as well as in other parts of the building, and heavier activity like road traffic at a significant distance. The inclusion of the accelerometer ensures an always-evolving experience for any number of participants while connecting them to their greater surroundings.
As we watch the continuing genocide in Gaza and abductions of our neighbors by government agents in broad daylight in the United States, there can be a strong desire to retreat into a belief that these things don’t concern us, that these tragedies are things that happen to other people. At this moment, it felt especially critical to create a work that tries to dispel the myth of independence, the false belief that one person’s life can exist in isolation from the lives around them, human or otherwise. It is my hope that this work will provide a space in which people can reflect on the depth of our interdependence and renew their commitments to mutual support.
Matthew Azevedo

Matthew Azevedo (b. 1977) is an artist, educator, and researcher based in Providence, RI whose work is focused on the outer edges of human perception, in particular the liminal space between touch and hearing occupied by infrasound. They are most widely known for their recorded works and international performances as Retribution Body, composing site-specific works for architectural spaces driven into resonance by massive custom subwoofers.
After receiving their BM in Sound Recording Technology in 1999, M. began a career as a mastering engineer, completing more than 2,000 projects, including Grammy winners, over the last 25 years. In 2010 they accepted a research fellowship at Rensselaer Polytechnic Institute, where they earned a master’s in Architectural Acoustics while studying improvisation and composition with Pauline Oliveros. This led to a position as Senior Scientist at Acentech, where their research and consulting work focused on ambisonic auralization, acoustic simulation, and the design of performance and studio spaces for music. M. is currently an assistant professor of Sound Recording Technology at the University of Massachusetts, Lowell, teaching courses in acoustics and psychoacoustics, sound synthesis, and advanced audio theory.
908
ID 908
Entrainment
Entrainment, originally entitled Entrainment718, was inspired by a hundred-foot stretch of the Brooklyn F train that connects the subterranean to the above-ground. On this transitional stretch, the visual interplay of regularly-spaced pylons against haphazardly strung high-intensity work lights causes a skewing of parallax — looking out the window, depth perception becomes distorted, as though suddenly careening through a starfield. Entrainment is an exploration of that sense of disorientation, and the hypnotic and transcendental states that often emerge from the repetitious visual, auditory, and haptic polyrhythms experienced aboard a moving train. This project continues to grow, incorporating footage from rail-based public transportation around the world.
The audio plays through a multi-channel sound system arranged linearly. The original musical composition (16 minutes) consists of several textural layers that are distributed spatially, running up and down the multi-channel array in gated sequences. The effect is that of a passing train, while also evoking the rhythmic quality of being on board the train. Each channel is synced with the video panel directly behind it, and when that audio channel is active, a variety of visual effects is applied to the corresponding panel. In this way, one can visually track the placement of sound in space (source-bonding [1] à la Denis Smalley’s Spectromorphology).
Additionally, a haptic channel plays infrasonic polyrhythmic patterns.
The auditory (score length vs. spatial distribution), visual (footage vs. effects programming), and haptic layers are all cycles of different lengths. As they loop, the layers stack in new combinations. This gestalt of sensory information drifting in and out of synchronization is an example of the nested or overlapping rhythms described in Henri Lefebvre’s Rhythmanalysis [2], in which the body becomes a metronome that not only observes but feels — embodies — temporal perception.
Most notably, the panorama is actually a single-channel portrait-mode video (60 minutes) shot on a camera-phone, repeated 14 times and mirrored vertically. Each column is an instance of the original footage offset by 23 frames; in essence, each column pulls from the “memory” of the video, and placed side by side they stitch together into what appears to be a panoramic view. The perceived elongation of the image is achieved through repeated temporal and spatial displacement.
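For readers curious how such a column-offset mosaic might be assembled, here is a minimal sketch using OpenCV. The 14 columns and 23-frame offset come from the description above; the filename, the downscaling, and the treatment of mirroring are assumptions.

```python
# Minimal sketch of the column-offset mosaic described above, using OpenCV.
# A ring buffer holds just enough history that column i can show the frame
# delayed by i * OFFSET frames. Filename, downscaling, and flipping alternate
# columns (one reading of "mirrored vertically") are assumptions.
from collections import deque
import cv2

COLUMNS, OFFSET = 14, 23                        # values from the installation text
cap = cv2.VideoCapture("source_portrait.mp4")   # hypothetical source file
history = deque(maxlen=(COLUMNS - 1) * OFFSET + 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (135, 240))       # keep the sketch lightweight
    history.appendleft(frame)                   # index 0 is always the newest frame
    if len(history) < history.maxlen:
        continue                                # wait until every tap has a frame
    columns = []
    for i in range(COLUMNS):
        col = history[i * OFFSET]               # an older frame for each column
        if i % 2:
            col = cv2.flip(col, 0)              # flip alternate columns (assumed)
        columns.append(col)
    cv2.imshow("Entrainment sketch", cv2.hconcat(columns))
    if cv2.waitKey(1) & 0xFF == 27:             # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```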
As a phenomenological experiment, Entrainment draws from a number of concepts from neuroscience and psychotherapy:
– The frame delay between each offset column is variable; when processing in real time, the number of frames delayed can be modulated (via OSC or MIDI, as sketched after this list) to slow down or speed up the cascade of images. This variability plays off the elastic perception of time — when cortisol spikes under acute stress, people often report a slowing of time, and memory becomes denser as more sensory information is processed per unit of time.
– The linear arrangement of the images and sound is designed to emulate the lateral eye movement experienced during REM sleep. Side-to-side eye movements are tied to internal visual processing and memory consolidation. REM sleep plays a key role in processing emotional memories, especially those with strong visual or affective content. The amygdala and hippocampus are both highly active during REM, and lateral eye movements may support integration of experience, emotional regulation, and the formation of semantic associations.
– The left-right travel of auditory, visual and haptic cues mimics the use of such techniques to treat PTSD in EMDR (Eye Movement Desensitization and Reprocessing) therapy. The multi-sensory rhythmic patterning causes bilateral stimulation, activating both the left and right hemispheres of the brain. This is thought to help integrate fragmented traumatic memories, which are often stored in a disjointed or somatosensory form (right hemisphere) and not fully processed by language- and logic-dominant regions (left hemisphere).
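Returning to the first item in the list above, one plausible way to modulate the per-column frame delay externally is a small OSC server; the address and port here are invented and do not reflect the artist’s actual control map.

```python
# Companion sketch to the first list item: modulating the per-column frame
# delay over OSC with python-osc. The address and port are illustrative,
# not the piece's actual control map.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

state = {"offset": 23}  # frames; a render loop like the sketch above would read this

def set_offset(address, value):
    # Clamp to the history the ring buffer can actually supply.
    state["offset"] = max(1, min(int(value), 23))
    print(address, "->", state["offset"])

dispatcher = Dispatcher()
dispatcher.map("/entrainment/offset", set_offset)
BlockingOSCUDPServer(("127.0.0.1", 9000), dispatcher).serve_forever()
```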
Entrainment only refers to such concepts as a compositional tool for artistic expression; the use of such media-rich environments to support therapeutic efforts has yet to be fully explored. However, Entrainment is designed with flexibility to respond in real-time to data streams (e.g. EEG) to facilitate interdisciplinary study.
Shomit Barua

Shomit Barua is a Japanese-born, Desi-American intermedia artist specializing in ecoacoustics, responsive environments, and emergent narratives. His work is rooted in poetry and architecture, and reflects the shared tenets of contained space, economy of materials, and movement that is both physical and emotional. Combining everyday technologies with esoteric programming languages, he blurs the line between installation and performance, weaving together object, sound and image. Digital and analog techniques are fused to investigate his core subject: corporeal presence in a physical space.
Having collaborated with sculptors, dancers, musicians, architects, and visual artists, he believes that exploration of a motif is amplified – made “robust” and “thick” – through dialogue between disciplines. He holds an MFA in Poetry from Bennington College and teaches writing at Arizona State University while completing his doctoral research at ASU’s School of Arts, Media, and Engineering.
920
ID 920
MELT (topographic remix)
I made this project after wandering around the world’s most active glacier, in Ilulissat, Greenland, with my mother and 5-year-old daughter and the film’s cinematographer, Troy Fairbanks. All synthesizers are made out of processed field recordings from this trip. The slow film stills and music are increasingly interrupted by audiovisual glitches, representing tipping points of our warming climate; the timing of these glitches was determined by a Max patch converting sea ice extent data to probability. The vocal music was created collaboratively with the members of Moving Star vocal ensemble from an open score I composed. The music was recorded by Jeff Cook at 2nd Story Sound, mixed by Michael Hammond of Big Ship Audio, and the spatial Dolby Atmos mix was created with Sean Winters. The film itself premiered at the IMAX Theatre at the Denver Museum of Nature and Science.
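The artist implemented this mapping as a Max patch converting sea ice extent to probability; as an illustration of the same idea outside Max, here is a brief Python sketch with placeholder numbers.

```python
# The artist built this mapping in Max; purely to illustrate the idea, this
# sketch normalizes a sea-ice-extent series into a per-step glitch probability.
# The extent values are placeholders, not measured data.
import random

extent = [14.2, 13.8, 12.9, 11.5, 10.1, 8.7]   # placeholder values, million km^2
lo, hi = min(extent), max(extent)

for step, e in enumerate(extent):
    p_glitch = 1.0 - (e - lo) / (hi - lo)       # less ice, higher glitch probability
    if random.random() < p_glitch:
        print(f"step {step}: trigger glitch (p = {p_glitch:.2f})")
```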
As I struggle to give my child the same intimacy with the wild I enjoyed, I find myself wondering: will there be winter in the world when she is my age? So many films invite you to think about the facts and figures of climate change; I wanted to offer people something different: a glimpse of the experience I had in Greenland, a chance to slow down and feel, to be with the earth’s ice as it melts and spills its way through climate change, and to witness their own feelings as music washes over them. I hope it resonates with you.
A note on this immersive installation: MELT: the memory of ice as a film is a work of cinema; sounds and images in the shape of a cinema screen. As an installation, I wanted to truly use the gallery space to do something more than just press play on a movie, to explore the imaginative possibilities of transforming and relocating the sounds and visuals, to create a new kind of space for the audience to wander through. The visual transformations, especially, allow me to understand the images in a different, more imaginary way: a kind of inner topography. So I’ve titled this installation work MELT: the memory of ice (topographic remix).
Betsey Biggs
Betsey Biggs (Writer/Director/Composer) is a composer and media artist whose work connects the dots between sound, image, place and technology. Her work has been described by the New Yorker as “psychologically complex, exposing how we orient ourselves with our ears.” For more than twenty-five years, she has composed music, created live multimedia performances, and created participatory art installations. She earned a Ph.D. in music composition at Princeton University, and has taught music, multimedia, public art, photography, and video at Brown University, RISD, and the University of Colorado Boulder, where she currently serves as Assistant Professor of Critical Media Practices.
Troy Fairbanks (Director of Photography) has a well-rounded filmmaking career as a director, videographer, cinematographer, and drone operator. His Denver production companies, Makēda Creative and Rise Aerials, specialize in action sports, documentaries, and drone cinematography. He has created more than 800 video projects in 31 countries, with a special focus on flying FPV drones for commercial purposes. When he’s not behind the camera, you can find Troy and his wife traveling the world in their converted school bus, enjoying the outdoors and board sports, and chasing one adventure or another.
Moving Star is a vocal ensemble creating original music infused with improvisation. They are an artistic community partner of the Carnegie Hall Education Wing. The performers of Moving Star have collaborated with Meredith Monk, Julia Wolfe, Ann Hamilton, and Sufjan Stevens, and have performed at Zankel Hall, Whitney Museum, La MaMa, Symphony Space, and elsewhere.
* winner, Berklee College of Music internal music composition competition for ICMC Boston 2025