ICMC BOSTON 2025
Club Concerts
50th Anniversary International Computer Music Conference
June 8-14, 2025

PLEASE NOTE: We are populating the details of this schedule in real time; please check back as we move closer to the conference. For any questions or concerns, please contact a.deritis@northeastern.edu.
Club Concerts will be held at two venues during ICMC Boston 2025, from 10:00pm to midnight: Northeastern’s Raytheon Amphitheatre (Sunday, June 8; Thursday, June 12; and Saturday, June 14) and Club 88 at Kings Dining & Entertainment (Monday, June 9; Tuesday, June 10; and Wednesday, June 11). All Club Concerts will also be accessible remotely.
Please direct all questions related to Club Concerts to Anthony Paul De Ritis, ICMC Conference Chair: a.deritis@northeastern.edu.
OPENING NIGHT RECEPTION AND CLUB CONCERT
Sunday, June 8; 10:00pm – Midnight
Raytheon Amphitheatre (240 Egan Research Center), Northeastern University
ID 389
Lines and Circles
Lines and Circles is an improvisational live performance project using a custom-built modular synthesis system. With numerous hands-on controls and many physical patch points, this playful system requires constant attention and multiple, ongoing direct engagements. Concentration, careful listening, muscle memory and musical / sonic problem solving are all requirements for designing and performing on such a system. The sonic outcome can range from subtle modulations to noisy disruptions, and everything in between. Each performance uses a unique combination and interconnection of modules, so the instrument / system is always evolving. For both performer and audience, this is an opportunity to share in the ongoing discovery of what might happen next.
Thomas Ciufo
Thomas Ciufo is a sound artist, composer, improviser, and music technologist working at the intersections of electronic music, sonic art, and emerging technologies. Additional research interests include acoustic ecology, listening practices, and innovative approaches to teaching and learning. He has performed and presented his work at numerous national and international experimental music festivals and conferences. His fourth CD (The Rising Moon) will be released in the summer of 2025 on the NEUMA record label. Thomas joined the Department of Music at Mount Holyoke College in 2017 and currently serves as a faculty affiliate in the Fimbel Maker and Innovation Lab and Director of the Arts and Technology Labs.
Thomas Ciufo, custom-built modular synthesis system
abat-son
abat-son is taken from the upcoming record Cylinder Plus (42’21”) by TRAC on the label Bu Lang Tribute Cake.
The compositional process for “abat-son” functioned as call-and-response between the work’s creators. An “abat-son” is an architectural device constructed to reflect or direct sound in a particular direction. Beginning with a negative isolation of early reflections and noise components from a recording of speech, the source material was approached in the image of this device through processing. Machine learning audio techniques and iterative spectral and electroacoustic processing emphasize and direct timbral and rhythmic components. A progression of reconstitution and deformation transforms the single source into separate forms divorced from their origin.
Edward Ryles; Alex Cooper
TRAC is Edward Ryles (New York) and Alex Cooper (North Carolina).
Ryles is an electroacoustic musician focusing on tool-building for digital improvisation. His projects have been released on Miami’s Schematic Music Company. Ryles studied electroacoustic composition at Tulane University. Teddy is a Master’s student in Music Technology at NYU specializing in DSP.
Cooper is an interdisciplinary artist. He studied experimental music and art at CalArts. His projects have been released on UK labels Opal Tapes and Indole Records. He is currently completing a Master’s of Science in Computer Science at North Carolina State University.
Edward Ryles; Alex Cooper
Toy Phantasy, for piano and fixed media
Toy Phantasy for piano and fixed media was composed in 2022-2023 for Luo Ting, the first performer of the work. The video was created specially for this piece by Anne Stagg.
The piece was inspired by John Updike’s poem “Player Piano,” originally published in The New Yorker and quoted below. While the poem is about the player piano, some of its characteristics and specific words used by the author (“click,” “chuckling,” “pluck”) reminded me of a different but related instrument – a toy piano, which I sampled and included in the piece along with select words from the poem. The poem has a multi-layered rhyme structure: each line contains several assonances, and alternate lines rhyme.
Most intriguing to me were certain words in this poem (such as “misstrums”), which led me to write a phantasmagoric piece with pre-recorded spoken and sung words borrowed from the poem and transformed freely, using computer software and my imagination.
Player Piano
My stick fingers click with a snicker
And, chuckling, they knuckle the keys;
Light-footed, my steel feelers flicker
And pluck from these keys melodies.
My paper can caper; abandon
Is broadcast by dint of my din,
And no man or band has a hand in
The tones I turn on from within.
At times I’m a jumble of rumbles,
At others I’m light like the moon,
But never my numb plunker fumbles,
Misstrums me, or tries a new tune.
– John Updike
Vera Ivanova
Vera Ivanova is an internationally known composer of contemporary concert, music theatre, electro-acoustic, and youth ensemble music whose works have been performed in the U.S.A., Europe, Asia, and Russia. Her compositions leave an impression of “…humanistic and deeply felt works…” (John Bilotta, Society of Composers, Inc.), and her chamber opera The Double (2022) was described as “a beautifully precise and masterful work with careful attention to every aspect of the production” (Paul H. Muller, Sequenza21).
Dr. Ivanova is a recipient of an American Composers Forum Subito grant, an honourable mention at the 28th Bourges Electro-Acoustic Competition, 3rd Prize at the 8th International Mozart Competition, 1st Prize in Category “A” at the International Contest of Acousmatic Compositions Métamorphoses 2004 (Belgium), the ASCAP Morton Gould Young Composers Award, the André Chevillion-Yvonne Bonnaud Composition Prize at the 8th International Piano Competition at Orléans (France), and the Special Award from the Yvar Mikhashoff Trust for New Music. She is also a winner of the 2013 Athena Festival Chamber Competition and the 2013 Earplay Donald Aird Composers Competition, and was selected for prestigious residencies at the MacDowell Colony, Yaddo, UCROSS, Brush Creek Foundation for the Arts, Kimmel Harding Nelson Center for the Arts, the Helen Wurlitzer Foundation, Willapa AiR, and the Millay Colony for the Arts.
Her music is available in print from Universal Edition and Theodore Front Music Literature, Inc., SCI Journal of Music Scores (vol. 45), on CDs from Reference Recordings (Grammy-nominated Nadia Shpachenko’s Quotations and Homages Album), MicroFest Records (Grammy-nominated Beyond 12 Album), Ablaze Records (Millennial Masters series, Vol. 2), Quartz Music, Ltd., Navona Records (Nova and Allusions albums), Musiques & Recherches (Métamorphoses 2004), Centaur Records (CRC 3056), Soundiff (Miniatures Album, vol. 1), BandCamp, and on her website at: www.veraivanova.com.
Anne Stagg, piano
Anne Stagg, an artist and educator based in Tallahassee, Florida, creates abstract paintings that range from vibrant to almost ghostly white. Her work delves into the complexity of human interactions, societal systems, and the evolution of ideas over time. By employing metaphor through pattern and color, Stagg examines how layers of ideas interact, transform, and ultimately reveal their underlying structures.
Stagg draws inspiration from everyday human interactions and the intricacies of seemingly simple tasks. Her exploration spans various systems—legal, financial, regulatory, social, and political—focusing on their promises and the extent to which they streamline our lives. Using non-objective, geometric shapes and patterns, she captures the tension between harmony and contrast, stability and change, while investigating the rich variety of outcomes within a constrained set of visual elements.
Her work thrives on contradictions. Stagg is captivated by moments of resolution but chooses to expose the underlying frameworks that support them. Through the layering of patterns, she alters the appearance and texture of her paintings, unveiling the scars of previous layers. As Buzz Spector insightfully remarked about the nature of the surface layer in Stagg’s paintings, they are “less so many shrouds as bed linens with living limbs beneath.”
Langue Étrangère
Langue Étrangère reflects on the process of learning and inhabiting a foreign language, where meaning fluctuates between comprehension and abstraction. Constructed entirely from processed recordings of the composer’s own voice, the piece unfolds as a multi-layered sonic landscape interweaving spoken fragments in French, English, Japanese, and Chinese.
The composition is structured in four distinct movements, each portraying a different stage of linguistic and emotional adaptation: Situation (Situation), Paroles Indiscrètes (Indiscreet Words), Romances sans Paroles (Romances Without Words), Habiter la Langue (Inhabiting Language).
At the core of this work is a reflection on what it means to dwell within language — to navigate its barriers, its estrangements, and ultimately, its transformative beauty. The following text, translated across languages and embedded within the piece, encapsulates this experience: “Feeling like a foreigner in a foreign country is a complex and bittersweet experience. It’s as if every sound, every sign, every social code is slightly out of reach, like listening to an unfamiliar melody and desperately trying to hum along.”
Héloïse Garry
Héloïse Garry is a composer whose practice bridges filmmaking, theater, and performance, exploring the aesthetics of “totality” and interactivity across waves, bodies, and art forms. Her compositions and performances, ranging from immersive electronic pieces to audiovisual installations, reflect a deep interest in cross-cultural and linguistic experimentations and sonic storytelling. As a Yenching scholar at Peking University, she researched the politics of independent Chinese cinema and the significance of music in the cinema of Jia Zhangke. Héloïse has performed live electronic music internationally, while also engaging in public and cultural diplomacy across the United States, Europe, and Asia. She has collaborated with IRCAM and the Columbia Computer Music Center, and worked on the sonification of the universe under the mentorship of physicist Brian Greene. In September 2024, she joined Stanford’s Center for Computer Research in Music and Acoustics (CCRMA), where she works under the supervision of composer Mark Applebaum and researcher Ge Wang, developing compositions that merge acoustic and electronic elements. Trained as a pianist, she studied with François Weigel, Maniola Camuset-Trebicka, and Dmitry Alexeev. More recently, she joined Dr. Chan’s studio at the Stanford Music Department. In addition to her compositional work, she is a member of the Stanford Taiko Ensemble, where she actively contributes new pieces to the group’s repertoire. Héloïse holds bachelor’s degrees in Filmmaking, Economics, and Philosophy from Columbia University, Sciences Po, and Sorbonne University.
Héloïse Garry
In and Out of Phase
In this piece, the performers play a double bass together using a system built out of a microphone, two audio exciters, and a mediating software program. As the bassist physically activates the instrument, the microphone signal triggers several computer operations—one involving an artificial neural network (ANN)—that play processed and synthesized sounds into the body of the bass using transducers attached to the front of the instrument. The bassist influences this complex feedback loop physically on the bass, while the electronicist adjusts parameters of the software. The hybrid mechanical and computational apparatus reveals an overlooked non-human agency with which every human agent must negotiate. Humans understand the inherent agency of objects as we learn to play an instrument, ride a bicycle, or manipulate tools. We soon become so accustomed to this agency that we stop thinking about it. Thus, our project is a way of reminding ourselves what that agency looks, sounds and feels like.
The work highlights modern technology’s influence on contemporary artistic practice. Imagination and interconnectedness are foregrounded as the performers become intricately linked by the hybrid mechanical-computational interface. The transformation of sound and gesture into binary numbers and back into vibrations blurs the boundary between human and machine, revealing the fluid nature of identity. We learn to experience sound through the lens of trans-humanism and science fiction as we investigate the agency of the interface itself.
Teerath Kumar Majumder, James Ilgenfritz
Teerath Majumder is a Bangladeshi composer and technologist who works in interactive and immersive media, computer music, and sound design. He questions conventional socio-sonic dynamics and reimagines relationships between participants through technological mediation. In 2022, he produced Space Within, where audience members collaborated with featured musicians to give rise to an hour-long sonic experience. His collaboration with Nicole Mitchell resulted in the immersive sound installation Mothership Calling (2021) at the Oakland Museum of California. He composed and designed sound for Qianru Li’s immersive multimedia piece A Shot in the Dark (2023) that explored Asian-American identity in the face of anti-Black police violence. His works have been performed by Hub New Music, Transient Canvas, and London Firebird Orchestra among other ensembles. Teerath holds a PhD in Music from the University of California, Irvine and is an Assistant Professor at Columbia College Chicago.
James Ilgenfritz is recognized in The New Yorker for his “characteristic magnanimity” and “invaluable contributions to New York’s new-music community,” and is founder and director of Infrequent Seams. He performs around the US, Europe, and Asia, and has twice had residencies at John Zorn’s The Stone. He has composed for Ghost Ensemble, the New Thread saxophone quartet, HUB New Music, The Momenta Quartet, Hypercube, String Noise, and Thomas Buckner. Recent albums include Altamirage (featuring Pauline Oliveros), and #entrainments (featuring drummer Gerry Hemingway). His solo albums, Origami Cosmos and Compositions (Braxton) 2011, feature music by Annie Gosfield, Miya Masaoka, Elliott Sharp, JG Thirlwell, and Anthony Braxton. James directed New York’s first Suzuki Bass program from 2009-2019, then left to earn his PhD in music from the University of California Irvine. Recent work reflects on experiences with Aphasia and other complications from two surgeries to remove benign brain tumors.
The Dysnomia duo
The Dysnomia duo (James Ilgenfritz and Teerath Majumder) emerged from a need to critique received notions about machine learning, intellectual labor, and conceptual autonomy. Using a 5-string contrabass in a Just Intonation scordatura together with transducers and machine learning, the duo explores sound through the lens of trans-humanism, investigating the agency of the interface itself. Their hybrid mechanical and computational apparatus reveals an overlooked non-human agency as they become intricately linked by the hybrid mechanical-computational interface. Ilgenfritz activates the bass physically while Majumder uses microphones, transducers, and mediating software programs, including an artificial neural network (ANN), to play processed and synthesized sounds into the body of the bass. Dysnomia has presented their work at EMPAC Reembodied Sound 2024 (Troy, NY), NIME 2024 (Utrecht), and SEAMUS 2025 (West Lafayette), among other platforms.
Experiment in Augmentation 3
Experiment in Augmentation 3 features a human-machine improvisation consisting of a human improviser, digital instruments, and the musical robots PAM and percussion robots built by WPI’s Music, Perception, and Robotics Lab and EMMI. The virtual instruments and robots respond to human-produced cues with algorithmically-generated statements. Their performative idiosyncrasies transform idealized pitch, rhythm, and velocity information. The human performer nudges the machines in particular directions and pulls them back if they have become too adventurous. He indicates which gestures should persist, which should be recalled, and which should be developed further by the machines. The human is thus both composer and conductor as the music is shaped in real time. Enabling human control of higher-level musical elements (e.g., meter, rhythmic subdivisions, pitch set) and machine control of lower-level ones (e.g., pitch, temporal position) allows the performer’s attention to shift and roam, and thus highlights a way in which human expressive abilities can be augmented via physical computing technologies. In this particular experiment, originally designed for an Algorave, an additional challenge is to see if the preceding can be made danceable.
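The division of labor described above can be illustrated with a minimal, hypothetical sketch (Python, not the actual software driving the robots): the human supplies higher-level constraints such as the pitch set and metric grid, and the machine fills in the lower-level choices of pitch and temporal position.

    import random

    # Hypothetical sketch: human sets high-level constraints, machine fills in
    # the low-level note choices within them.
    def machine_phrase(pitch_set, subdivisions_per_beat, beats, density=0.6):
        """Return a list of (onset_in_beats, midi_pitch) events."""
        events = []
        step = 1.0 / subdivisions_per_beat
        for i in range(beats * subdivisions_per_beat):
            if random.random() < density:  # machine chooses the temporal position...
                events.append((round(i * step, 3), random.choice(pitch_set)))  # ...and the pitch
        return events

    # Human "composer/conductor" input: a five-note pitch set over two beats of sixteenths.
    print(machine_phrase(pitch_set=[60, 62, 65, 67, 70], subdivisions_per_beat=4, beats=2))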
Scott Barton
Scott Barton composes, performs, and produces (electro)(acoustic) music; conducts psychological research; and develops musical robots. His interests include rhythm, stylistic synthesis, perceptual organization, instrument design, machine expression, human-robot interaction, improvisation, creativity, and audio production. He founded and directs the Music, Perception, and Robotics lab at WPI and co-founded Expressive Machines Musical Instruments (EMMI), a collective that designs and builds robotic musical instruments. His work in robotics explores the novel expressive capabilities of machines and the ways in which robots can voice and inspire human creativity. His research in rhythm perception and production has been published in journals such as Music Perception and Acta Psychologica. He fuses the worlds of psychology and robotics in software that allows robots to improvise with humans. He is active in the world of audio production as a recordist, mixer, and producer. His most recent album Stylistic Alchemies (Ravello Records) features electroacoustic works that illuminate the creative potential of the studio in the synthesis and juxtaposition of musical genres. His compositions have been performed throughout the world, including at SMC, ICMC, SEAMUS, CMMR, and NIME. He is a Professor of Music with affiliate appointments in Robotics Engineering, Computer Science, and Psychology at Worcester Polytechnic Institute. scottbartion.info
Scott Barton
Hostile Algorithmic Architecture Against Performative Predictability
This performance uses the concept of hostile architecture, in its relation to adversarial software architectures and the improvisations they require, as a basis for interventions that directly interfere with the intent of a musical performer. Specifically, it uses a combination of biosensors and audio analysis to learn live and make predictions about performative direction, in order to punish the performer for predictable improvisations through electroshock. Live-learning systems analyze performative style, sound, and movement through electromuscular, temporal, and spectral analysis. Meanwhile, the artist tries to build electronic noises, melodies, rhythms, and textures that are performatively coherent, while simultaneously attempting to fight the algorithm by creating only sound/motion patterns that are deemed novel. This set must balance the pain-based feedback of the electronic antagonist against a desire to perform particular structures and an interest in avoiding punishment. This “man vs. machine” performance brings the energy and frustration of orchestrating novel interactions around corporate and governmental interventions, such as hostile architecture and algorithmic profiling, to the stage. The musician must find evasive improvisations to satisfy the self, the algorithm… and, ultimately, the audience.
Kevin Blackistone
Kevin Blackistone (US/AT) is a transdisciplinary media artist and researcher using immersive, tangible, participatory and performative elements as tools for exploratory engagements. His current focus investigates the networks of cross-interactions between our human organism(s), its habitats & inhabitants, and their technological interrelations from cultural, medical and ecological perspectives. His research background includes a BA in Intermedia and Digital Arts (US), post-bac research with the Laboratory of Neurogenetics (US), an MA in Interface Cultures (AT) and Postdigital Lutherie (AT). He has shown, performed, presented and exhibited works at festivals and venues including Mapping Festival (CH), Ars Electronica Festival (AT), Light City (US), Siggraph Asia, xCoAx, and Miraikan (JP). He presently works as an independent artistic researcher and guest lecturer.
Kevin Blackistone
Spirit of Light
Through electronic and physical media, I seek to expose, amplify and transform unique activity and subtle influences in matter, such as magnetic and electromagnetic activity. As the performer, I walk the line between ‘controller’ and ‘catalyzer’, having partial authority over the sound results but having to accept artifacts or surprises from the system, such as static discharges. This questions the agency of the performer vs. a slippery medium such as electromagnetics.
EM activity fascinates as a sound medium because of its permeability, the way that sources of energy affect and are affected by each other. In particular, EM sounds are rejected as noise in music, but they can be embraced: glitch and noise are effectively the voice of the machine. Extracting and organizing sound from magnetic sources allows a new insight into the behavior of energy and matter, as well as creates an elusive language for music.
Spirit of Light is a magnetic turntable machine which sonifies the movement of magnets and the topography of steel discs through inductive pickups, also capturing the kinetic and electrical activity of the whole machine. The title refers to the metaphysical and cross-sensory (synaesthetic or poetic) depictions of electricity in its early years.
Laser-cut steel discs are mounted on three spindles of the machine and the sound originates from both the patterns cut into the discs and permanent magnet arrays in motion. Two metal thundersheets add their own wild non-linear crashing behaviour.
The resulting sounds are haunting, hypnotic, pulsing and completely unique. The hybridization of the whole system creates a wonderful sonic complexity as the original signals pass through the physical resonances of metal. Controllable feedback and static discharges come through and I can play within a wide dynamic range.
Instead of data mapping or synthesis, focusing on ‘native’ means of sound production from analog sources gives a natural complexity and authenticity. My work was developed partly through several visits to CNMAT while David Wessel was still alive, and through collaboration with Adrian Freed. To quote David, ‘shape, don’t trigger’.
Alexis Emelianoff
Alexis Emelianoff is a sound artist based in Montréal, inventing and performing with acoustic and electronic instruments. Her approach is highly divergent and dependent on exploration in material research and physics; temperature changes, magnetism, and water conductivity have all animated installations. She designs systems to allow the ‘breakthrough’ of the electrical medium and its sonic form, whether periodic, arrhythmic, abrasive, obstinate… Current projects include hybrid sound output systems, and Persian-tuned Piano: re-tuning and playing the western piano in traditional Persian style.
She has performed in Canada, Europe and the United States, including the IEEE MMM Magnetics Conference, MuSA Symposium for Sonic Arts (Germany), Columbia University New York, the Banff Centre, and the Center for New Music and Audio Technologies (CNMAT) at UC Berkeley; and has published in Leonardo Music Journal. She was a member of the Topological Media Lab and the Balinese Gamelan group Giri Kedaton. Mentors and collaborators include Adrian Freed and Ramin Zoufonoun.
Currently Alexis is a member of the Center for Interdisciplinary Research in Music Media and Technology (CIRMMT) and was recently invited as the first artist in residence at the École de Technologie Supérieure in Montréal.
Alexis Emelianoff
CLUB CONCERT #1
Monday, June 9; 10:00pm – Midnight
The 88 Club, Kings Dining & Entertainment, 50 Dalton St, Boston
Běda
English-language productions of Antonin Dvořák’s opera Rusalka face a conundrum regarding the translation of the Czech word “Běda,” which, set to a baroque sigh half-step motive, appears as a refrain throughout the opera. The word translates most directly to the English “alas,” which is unsuitable both for its vocal rhythm and for its archaic, almost humorous implications. Various productions have used the word “sorrow” or “woe,” neither of which captures the interjectory angst of the original. In my opinion, the word should remain untranslated – as in the title of this piece.
Many thanks to Aaron Burr, the leader of the consortium for this piece, and to all the consortium members who made this piece happen: Alexis Aguilar, Chris Dickhaus, Wade Dillingham, Sterling Fry, David Jones, Laurette Roddin, Rachel Wolz, and Ray Zepeda.
Jacob Frost
Jacob’s music is steeped in paradox and mystery, the horror and the ecstasy of human experiences of God. Described as “unpleasant but masterful” (Victor Zheng), his compositions wrestle improvisatory gestures from disparate musical traditions into ironic relationships within dramatic structures. Jacob has received commissions from organizations like Opera on Tap – Oklahoma City and the University of Oklahoma Theatre, as well as individual performers such as Aaron Burr and Ben Cooper. His music has been performed at national and international festivals like ICMC, New Music Gathering, NYCEMF, and MUSLAB. Jacob is a PhD Candidate at the University of Minnesota, where he studies with Sivan Cohen Elias. He earned his Master’s in Music Composition from the University of Oklahoma, where he studied with Marvin Lamb and Konstantinos Karathanasis, and his Bachelor of Arts in Music from Drury University, where he studied with Carlyle Sharpe. In addition to composing, Jacob maintains an active schedule as a teacher, performer, and arts administrator. He currently works as a graduate instructor at the University of Minnesota and serves as an administrative team member for the UMN New Music Ensemble.
Philipp Stäudlin, Hinge Quartet
Saxophonist Philipp A. Stäudlin has performed hundreds of concerts and premiered more than 100 works throughout North America, Europe, and Japan. A native of Friedrichshafen, Germany, Stäudlin has appeared as a soloist with the Sinfonieorchester Basel, Sound Icon, Ensemble White Rabbit, Niederrheinische Sinfoniker, Callithumpian Consort, Bielefelder Philharmoniker, Harvard-Radcliffe Collegium Musicum, Tufts University Orchestra, and The Providence Singers. Stäudlin has also performed with the Harvard Group for New Music, Equilibrium Ensemble, ECCE, Talea Ensemble, Steamboat Switzerland, Dinosaur Annex, Ludovico Ensemble, IGNM Basel, Alea III, Back Bay Choral, and many others. He has recordings available on the New World, Tzadik, Albany, Innova, Suspicious Motives, New Focus, Navona, Newport Classics, Enja, and Ars Musici labels. He teaches saxophone at The Boston Conservatory at Berklee in both the standard degree and contemporary music programs, and is also on the applied music faculty at Tufts University and the Massachusetts Institute of Technology (MIT).
Unfathomable
Every tall tale has an origin. A sailor glimpses a dark shape, half-formed in the murky depths below the docks. An unknown call, the splash of fins or tail, and then nothing but ripples on the ocean’s surface. It’s only natural that the imagination fills in the rest. Unfathomable explores the formation of a sea monster myth, as a brush with an unknown creature spins into something darker and stranger. Where does reality end, and where does myth begin?
Nikki Krumwiede
Nikki Krumwiede, DMA is a composer and improviser from Moore, Oklahoma. She earned her DMA in Composition in 2022 from the University of Oklahoma where she served as a graduate assistant to the composition studio and directed the New Century improv! Ensemble. She composes in a variety of styles, from contemporary classical to experimental, electronic, and improvisational music.
Nikki’s goal is to create music that is engaging for performers and allows for flexibility and interpretation. Much of her music draws upon her experience as an improv performer and asks musicians to create along with her, whether through improvisation, selection of unspecified pitch, or a flexible rhythmic structure. She also incorporates her background in writing and literature and her love of nature and folklore into her compositional process in a way that is engaging to a diverse audience.
Philipp Stäudlin, Hinge Quartet
Saxophonist Philipp A. Stäudlin has performed hundreds of concerts and premiered more than 100 works throughout North America, Europe, and Japan. A native of Friedrichshafen, Germany, Stäudlin has appeared as a soloist with the Sinfonieorchester Basel, Sound Icon, Ensemble White Rabbit, Niederrheinische Sinfoniker, Callithumpian Consort, Bielefelder Philharmoniker, Harvard-Radcliffe Collegium Musicum, Tufts University Orchestra, and The Providence Singers. Stäudlin has also performed with the Harvard Group for New Music, Equilibrium Ensemble, ECCE, Talea Ensemble, Steamboat Switzerland, Dinosaur Annex, Ludovico Ensemble, IGNM Basel, Alea III, Back Bay Choral, and many others. He has recordings available on the New World, Tzadik, Albany, Innova, Suspicious Motives, New Focus, Navona, Newport Classics, Enja, and Ars Musici labels. He teaches saxophone at The Boston Conservatory at Berklee in both the standard degree and contemporary music programs, and is also on the applied music faculty at Tufts University and the Massachusetts Institute of Technology (MIT).
Ecstasies
Ecstasies represents my most ambitious attempt to synthesize the Dionysian soundscapes of electronic dance music with the technical innovations of contemporary Western art music and the distinctive expressive qualities of Iranian classical music. The meaning of the title is threefold: the feeling of ecstasy evoked by EDM, the transcendent and ecstatic character of Dastgah Nava (the mode of Iranian classical music used throughout the piece), and the rave drug Ecstasy.
The structure is a microcosm of a DJ set at a rave: a series of buildups and climaxes, exploring various grooves and genres while continuously growing in intensity. The flutist has the Herculean task of matching the dynamism of the electronics while shapeshifting between vastly different manners of playing.
Kian Ravaei
Composer Kian Ravaei (b. 1999) takes tone painting to a new level, synthesizing diverse inspirations into evocative musical portraits. Whether he is composing a string quartet inspired by wonders of the natural world, electronic music that evokes the pulsating energy of late-night dance clubs, or a symphonic poem that draws from the Iranian music of his ancestral heritage, he takes listeners on a spellbinding tour of humanity’s most deeply felt emotions.
Ravaei has collaborated with sought-after artists such as pianist and cultural activist Lara Downes, Grammy-nominated violinist Tessa Lark, and New York Philharmonic clarinetist Anthony McGill. Chamber musicians have championed his works, leading to commissions from Chamber Music Northwest—where he served as a Protégé Project Composer-in-Residence—as well as Seattle Chamber Music Society and Great Lakes Chamber Music Festival. His rapidly expanding catalog has earned him notable honors such as a Copland House CULTIVATE Fellowship, a Los Angeles Chamber Orchestra Composer Teaching Artist Fellowship, a New Music USA Creator Fund Award, and a Barlow Endowment Commission.
Born to Iranian immigrants, Ravaei maintains close ties to the Iranian community in his hometown of Los Angeles. Many of his works combine the ornamented melodies of Iranian classical music with the colorful harmonies of Western classical music. DJs know Ravaei as the go-to person for creating orchestral versions of dance songs. His orchestration of Wooli & Codeko’s “Crazy (feat. Casey Cook)” has garnered over one hundred thousand plays across streaming platforms. It is no coincidence that many of Ravaei’s concert works contain a rhythmic vitality that evokes the energy of the dance floor.
Ravaei counts celebrated composers Valerie Coleman, Richard Danielpour, and Derrick Skye among his teachers, and holds degrees in composition from UCLA and Indiana University. He is currently a C.V. Starr Doctoral Fellow at The Juilliard School.
Rachel Beetz of the Callithumpian Consort
Fantasy for Vagrant Flute
This is the first piece in a series of etudes for Somax2 and acoustic instruments. The acoustic part here is based on Dong Zhou’s long-term project “Found Violin”.
Timothy McDunn
Timothy W. McDunn (b. 1994) is a composer and theorist with an international profile. He specializes in just intonation and electroacoustic composition. His music and research are regularly presented and performed at major peer-reviewed conferences and festivals including the Society for Electroacoustic Music in the United States National Conference, the New York City Electroacoustic Music Festival, the International Computer Music Conference, and others. His background in classical languages and literature strongly influences his work as a composer. He holds a DMA in Composition from the University of Illinois at Urbana-Champaign and a Biennio Degree in Composition from the Verdi Conservatory of Milan. His education also includes a Bachelor of Music in Composition and a Bachelor of Arts in Greek and Roman Studies. McDunn currently teaches music theory and composition full-time at Wheaton College. He resides in Glendale Heights with his wife, Jasmine—the most selfless, supportive, and objectively beautiful woman in the world. His music is influenced by elements of philosophy, faith, and spirituality.
Rachel Beetz of the Callithumpian Consort
Four Conversations for Water
TIEVS (Trackpad Integration for Expressive Visual Scores) is a performance patcher for a laptop trackpad, created in MaxMSP and used to explore radical timbral textures. The patcher uses Boolean logic to divide the screen into four distinct quadrants, each containing its own audio and sound design elements. In exploring more aggressive timbres, I developed a piece that uses improvisation within a set collection of sounds evoking rushing water against a coastal backdrop. Sound design for this piece was built around the different types of sounds one would experience in a coastal location, from longer drawn-out sine-waves-as-foghorn to differing textures representing different velocities of moving water. The visual loops were added to emphasize the aesthetic of the audio; however, the TIEVS visual interface allows the performer several options for influencing their sounds, from tracing the moving shapes for gestural motions during improvisations to creating a composition using the videos as the score.
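As a rough, hypothetical illustration of the quadrant logic described above (a Python sketch, not the actual MaxMSP patcher; the layer names are invented):

    # Hypothetical sketch of TIEVS-style quadrant selection from normalized
    # trackpad coordinates in the range 0.0-1.0.
    def quadrant(x, y):
        right = x >= 0.5          # Boolean tests divide the surface...
        top = y >= 0.5            # ...into four regions.
        if top and right:
            return "foghorn_sines"            # invented sound-design layer names
        if top and not right:
            return "rushing_water_fast"
        if not top and right:
            return "rushing_water_slow"
        return "coastal_texture"

    print(quadrant(0.8, 0.2))   # -> "rushing_water_slow"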
Keith Wecker
Keith Arthur Wecker (b. 1983) is a composer, improviser, and multi-instrumentalist using technology to explore the timbral and temporal properties of granular synthesis in the creation of electroacoustic music. Keith focuses on bringing the in-between sounds to the forefront through a layering-intensive process, creating rich sonic textures that evoke a meditative sonic environment for the listener to immerse themselves in.
His personal approach to sound creation is highlighted by the diversity of the acts he has shared the stage with (Tim Hecker, Clark, Sumac, Thundercat, Wolf Eyes, Six Organs of Admittance, Goblin, Lori Goldston and Sun Araw) as well as participating in projects of Anthony Braxton and Glenn Branca. Keith has had his music featured across North America, including multiple appearances at the Vancouver International Jazz Festival, New Forms Festival, MOXSonic, VU Symposium and Improvisors Summit PDX.
Keith Wecker
Anamorphism
Anamorphism is a piece composed for electric guitar and multi-channel electronics, inspired by the classic Chinese pop song Congratulations! from the Republic of China. The original song expressed the joy of surviving war and the confusion about an uncertain future. I digitally deconstructed and reconstructed its melody, creating a strange and abstract sound world. Traditional emotional expressions are compressed into fragments shaped by my technology, transforming into an abstract experience of nostalgia.
The term Anamorphism in the title has two layers of meaning, corresponding to both the technical and musical aspects of the work:
1. In computer science, Anamorphism refers to a recursive pattern that generates complex data structures from basic elements, such as lists or tree structures.
2. In geology, Anamorphism refers to the process in which rocks undergo recrystallization or chemical changes under high temperature and pressure, causing the rearrangement of minerals or the formation of new compositions.
On the technical level, the harmonic structure of the work is based on a database of over 1,600 commonly used guitar chords within the tonal system. I developed a chord stability calculation algorithm using Marco Stroppa’s VPS theory (Vertical Pitch Structure) to calculate the stability values of all chords. I also applied an automatic harmony writing program that I developed to reconstruct the deconstructed musical material. The architecture of this program mirrors Anamorphism, recursively generating new harmonies. These algorithms reassemble familiar sounds, presenting them in an alienated, mechanical way. In the end, the original melody is stripped of its emotional content, reduced to sounds driven purely by data.
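For readers unfamiliar with the computer-science sense of the term, the sketch below shows a generic anamorphism (an “unfold”) in Python. It is only an illustration under invented assumptions, not the composer’s OpenMusic/Lisp harmony program: a step function grows a sequence of chords outward from a seed until a stopping condition is reached.

    # Generic anamorphism (unfold): build a list outward from a seed value.
    def unfold(step, seed):
        out = []
        while True:
            result = step(seed)
            if result is None:          # None signals the end of generation
                return out
            value, seed = result        # emit a value, carry the next seed forward
            out.append(value)

    # Hypothetical chord-growing step: emit the current triad, transpose the seed
    # up a minor third, and stop once the chord leaves a chosen register.
    def next_chord(chord):
        if max(chord) > 84:
            return None
        return chord, [p + 3 for p in chord]

    print(unfold(next_chord, [60, 64, 67]))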
Musically, I designed special spatialization effects for the electronic music: the harmonies move in a circular sound space in a clockwise direction, following a specific cyclical pattern (see performance instructions in the score). The harmonic flow is extremely slow and blurred, while the electric guitar gradually detaches from its original role as accompaniment, moving beyond the fixed pattern. This musical process symbolizes the chemical changes that occur in the anamorphic transformation of rocks. Ultimately, the electric guitar clearly plays the original melody, offering a subtle reminder of the piece’s source material.
Through this context, I aim to depict a sense of blurred, dreamlike, and unconscious nostalgia. In this process, I used algorithms to transform abstract emotions into sound symbols that feel familiar yet unreachable. The past and present intertwine in this piece, forming a nostalgia unique to the digital age—a kind of nostalgia that cannot be truly grasped, but only reflected and lost within layers of sound.
Hangzhong Lian
Hangzhong Lian (b. 2002, China) is a composer whose work explores the intersection of cognitive psychology, Gestalt theory, film montage, and artificial intelligence. He is currently pursuing a Master’s degree at Boston Conservatory at Berklee, studying with Dr. Dan VanHassel.
Lian’s music integrates algorithmic processes with perceptual theory, often realized through program tools he developed in Lisp for OpenMusic, including libraries for harmony generation, chord analysis, and gesture organization.
His works have received awards such as second prize at the MaestrosVision Award and second prize at the Ivy International Grand Prize of Music. His music has been featured at festivals including the Electronic Music Midwest Festival, NYCEMF, and ICMC, and performed by ensembles such as the Barcelona Modern Ensemble and the Boston Conservatory Composers Orchestra. Recent projects include a string quartet for the Mivos Quartet and a miniature for the Mixtur Festival.
He previously studied composition with Dr. Jian Liu and computer music with Jialin Liu, and has participated in masterclasses or private lessons with Lei Liang, Octavi Rumbau, Dongryul Lee, and Juri Seo, among others. He has presented research at conferences hosted by the Central Conservatory of Music, the China Conservatory of Music, and others.
Dan VanHassel of the Hinge Quartet
EDO Artifacts
EDO Artifacts is a live-performance piece for a computer-sequenced modular synthesizer to explore equal division of the octave (EDO) tuning systems. The composition is written prior to performance using the ChucK music programming language. During the performance, a computer running this program translates note sequences for each voice into a stream of numbers representing frequencies and durations. This data is sent via USB to an Expert Sleepers ES-8 module within the synthesizer. Acting as a bridge between the computer and the synthesizer, the ES-8 converts the incoming number stream into voltage—the “language” of modular synthesis. Its outputs are then routed to analog oscillators and amplifiers, which generate the actual sound. The flexibility offered by the programmatic composition supports complex arrangements of phrases, sections, and modulating tuning systems, while the modular synthesizer provides the performer precise control over the sound through real-time manipulation of timbre-shaping parameters. In this way, ChucK acts as the “orchestration” or the “brain”, whereas the modular synthesizer is the “instrument” or the “body” of the piece. There are four sections—labeled as fragments—each written using a different EDO tuning: 5EDO, 7EDO, 31EDO, and 15EDO, respectively. Each tuning has been selected to suit the stylistic and textural qualities of its respective fragment, shaping both the compositional approach and the resulting sonic character. The fragments are purposefully brief, serving as previews of the musical potential of each tuning.
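The note-to-frequency step described above follows the standard equal-division formula f = f0 * 2^(n/N). A small Python sketch of that calculation (the piece itself is written in ChucK and sends its number stream to the ES-8, which is not reproduced here; the scale degrees below are arbitrary):

    # Step n of an N-EDO scale above a base frequency f0 (roughly middle C by default).
    def edo_freq(n, edo, f0=261.63):
        return f0 * 2 ** (n / edo)

    # First octave of 5-EDO, the tuning of the opening fragment.
    for n in range(6):
        print(n, round(edo_freq(n, 5), 2))

    # A toy stream of (frequency, duration) pairs, the kind of data sent on to the ES-8.
    sequence = [(edo_freq(n, 31), 0.25) for n in (0, 5, 13, 18, 26)]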
Gregg Oliva
Gregg Oliva is a musician and engineer currently pursuing a master’s degree at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University. In 2019, he graduated from the University of California, Berkeley with a bachelor’s degree in Electrical Engineering and Computer Science and a minor in Music. After graduation, he worked as a software engineer specializing in cloud and data infrastructure before ultimately leaving in 2023 to focus his efforts on music.
Gregg’s interests span electronic composition, digital instrument design, interactive systems and games, modular synthesis, and microtonality. He enjoys creating expressive musical tools and extracting unlistenable sounds from his Eurorack. He is obsessed with all things analog and frequently dreams about new gear he can’t afford.
Gregg Oliva
The World Which Has Become Objectified
“The spectacle cannot be understood as the abuse of a world of vision, as the product of the techniques of mass dissemination of images. It is, rather, a Weltanschauung which has become actual, materially translated. It is a vision of the world which has become objectified.” – Guy Debord
The World Which Has Become Objectified is a live-coded composition created in SuperCollider, engaging with Guy Debord’s theory of the spectacle. In The Society of the Spectacle, Debord proposes that the spectacle is not simply a visual overflow, but a worldview that has been materially enacted—a structure that reorganizes perception and renders reality increasingly abstract, recursive, and systemic. This piece responds to that proposition through sound, treating code not as a vehicle for expression, but as a compositional condition through which mediation becomes audible.
The work unfolds within a custom-built system of algorithmic behaviors and layered temporalities. Rather than following a linear trajectory, the piece is shaped by repetition, instability, and accumulation. Sound behaves less as a discrete object than as a presence—emerging, receding, and reconfiguring itself within a fragile equilibrium of process and decay. These sonic fields are dense, unstable, and often resistant to interpretation. The system generates change, but never guarantees clarity.
Live coding introduces volatility, allowing the performer to inhabit the system reflexively. Each gesture interacts with autonomous operations already in motion, producing outcomes that are only partially predictable. Code becomes both score and environment, performance and infrastructure. The act of listening shifts accordingly—from reception to negotiation, from decoding to endurance.
The piece does not seek to represent the spectacle but to inhabit its acoustic counterpart: a space where objectification is not visual but sonic, procedural, and embodied—where hearing becomes a negotiation with the systemic. Within this mediated sound space, perception is shaped by logic, automation, and drift. The spectacle becomes audible not through symbol, but through structure.
Diego Peralta
Diego Peralta-Gonzales (he/they, b. 2000) is a Peruvian composer and sound artist whose work engages with sonic materiality, technological systems, and cultural memory. His practice draws from experimental music, critical theory, and Latin American epistemologies to investigate how sound constructs and mediates time, history, and perception. He studied at the National University of Music of Peru and holds a Bachelor’s degree in composition from the Boston Conservatory at Berklee.
Peralta-Gonzales creates durational sound environments shaped by algorithmic procedures, spatial diffusion, and feedback ecologies. His works prioritize instability, slowness, and density—foregrounding texture over gesture, and process over resolution. He uses technology as a compositional structure, developing systems that generate emergent forms, nonlinear behaviors, and dynamic sonic topologies.
As a performer, Peralta-Gonzales brings these same principles into real-time contexts. His performances of ambient and noise-oriented music integrate analog-digital hybrids, system feedback, and environmental responsiveness.
His research addresses the relationship between sound and memory, with particular attention to Andean cosmologies, testimonial media, and the sonic legacies of political violence. Archival materials, such as interviews, field recordings, and documentary fragments, are re-situated through time-based media and spatial audio practices to produce affective and historically engaged listening environments. This approach reflects a commitment to situated knowledge and decolonial critique.
His music has been featured at the SPLICE Festival, SICPP, Divergent Studio, and more. He has collaborated with ensembles including Splinter Reeds, the Momenta Quartet, Divergent Quartet, and pianist Yundi Xu, and has developed interdisciplinary projects with MassArt and the Berklee Interdisciplinary Arts Institute.
Diego Peralta-Gonzales, live coding
Goldfish Variations
Goldfish Variations (2023) was born out of my explorations of new methods for performing with and controlling Eurorack modular synthesizers and turning them into what I hope are cohesive, musically expressive, and interesting instruments for performance and composition. The original version, commissioned by Dr. Donn Schaefer (bass trombone) of the University of Utah, was composed for bass trombone and electronics (my new instrument for musical expression, Curve, and a Eurorack modular synthesizer). I have since written different variations on that original piece, where each new variation utilizes a different external, traditional musical instrument. Incorporating said external/traditional instrument (the baritone saxophone in this case) into my synthesizer’s signal flow has always been of particular interest to me due to the myriad of ways I can process and fuse that signal with the synthesizer’s signal, effectively transforming the external instrument into another “module” or facet of the synthesizer itself in addition to being a traditional instrument in its own right. This fusion of external/traditional instrument with my electronics has provided me (us) with a myriad of musical and performative possibilities that we can then explore together in our compositions and performances.
Nathan Asman
Dr. Nathan M. Asman is a musician, composer, synthesist/sound designer, instrument designer, digital artist, and educator. His musical and artistic endeavors reside mainly within the electronic realm, where he specializes in data-driven instrument creation, synthesis & sound design, and electroacoustic music composition. Focusing on the intersection of popular and academic art music, he strives to unite the two musical styles utilizing the endless musical and artistic opportunities afforded him by the worlds of synthesis, music technology, and electronically-generated music. Nathan creates new and original music, performances, and instruments from the ground up by employing innovative and alternative instrument, sound, and synthesis designs in his compositions and performances. His goal is to apply his knowledge and skills to further the fields of music technology and digital art by producing music that can be appreciated by both expert and casual listeners. Strengthening the awareness, enjoyment, and importance of music, art, and technology within the academic community is paramount to his work, but cultivating and bolstering those sentiments outside of the academy is absolutely essential, and permeates his work on every level.
Nathan’s works have been seen and heard around the country and abroad, including the Guthman Musical Instrument Competition (3-time finalist & semi-finalist), NIME, ICMC, SEAMUS, NYCEMF, the VU Symposium, the Sundance Film Festival, TEDTalks, Future Music Oregon, 60×60, (SUB)Urban Projections, the PLATFORM Festival, the Human Nature Festival, and the Kaleidoscope Music Festival. Nathan is Assistant Professor of Audio Arts at the State University of New York at Oneonta (SUNY Oneonta) in Oneonta, NY as of fall 2020. He holds a D.M.A. in Data-Driven Instruments and an M.M. in Intermedia Music Technology from the University of Oregon, and a B.A. in Music History from Denison University.
Nathan Asman, Curve, Eurorack Modular Synthesizer; Andris Balins, baritone saxophone
Andris Balins has been working professionally in the audio arts for the past 18 years. In that time, he has worked in the studio with numerous artists including Nels Cline, Lana Del Rey, and Sean Lennon. Andris holds a BA in Music and German from Hartwick College (2003) and an MA in Museum Studies from the Cooperstown Graduate Program (2021).
CLUB CONCERT #2
Tuesday, June 10; 10:00pm – Midnight
The 88 Club, Kings Dining & Entertainment, 50 Dalton St, Boston
ID
Title
Author(s)
Performers
Strom
Strom is a real-time audio-visual system controlled by a performer with six high-resolution analog sensors. The image is generated with feedback in GLSL shaders from the output of controller values and a single low-frequency oscillator. The audio signals are streams of values sequentially indexed from the color channels of the image matrix. Strom is designed and realized in Max/MSP/Jitter.
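(The Max/MSP/Jitter patch itself is not part of this program note; purely as an illustration of the idea described above – an image evolving through feedback while its color channels are read out sequentially as an audio stream – a minimal NumPy sketch might look like the following. The image size, LFO rate, and feedback mix are arbitrary placeholders, not values from Strom.)

```python
import numpy as np

# Hypothetical illustration: treat the color channels of a small image
# matrix as a sequential stream of audio sample values, while the image
# itself evolves through a simple feedback rule driven by one LFO.

H, W = 64, 64                                      # image size (assumption)
img = np.random.rand(H, W, 3).astype(np.float32)   # RGB matrix in [0, 1]

def lfo(t, freq=0.25):
    """Single low-frequency oscillator in [0, 1]."""
    return 0.5 * (1.0 + np.sin(2.0 * np.pi * freq * t))

def feedback_step(img, t, mix=0.92):
    """Blend the previous frame with a shifted copy of itself (video feedback)."""
    shifted = np.roll(img, shift=1, axis=1)
    return mix * img + (1.0 - mix) * shifted * lfo(t)

audio = []
for frame in range(30):                            # a handful of video frames
    img = feedback_step(img, t=frame / 30.0)
    # Read the matrix as audio: index the color channels sequentially,
    # then re-center the [0, 1] pixel values around zero.
    samples = img.reshape(-1) * 2.0 - 1.0
    audio.append(samples)

audio = np.concatenate(audio)                      # crude audio stream derived from the image
print(audio.shape, float(audio.min()), float(audio.max()))
```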
Michael Blandino
Michael Blandino offers digital art music from Baton Rouge, LA, where he serves as Assistant Dean of the Ogden Honors College at Louisiana State University. He completed his doctorate in Experimental Music and Digital Media at LSU, where he studied with Stephen David Beck, Edgar Berdahl, Jesse Allison, and Derick Ostrenko. His undergraduate degree in Philosophy and Master’s degree in Music Theory were also earned at LSU. Blandino’s works have been shown at the New York City Electroacoustic Music Festival (NYCEMF), the International Computer Music Conference (ICMC in Ireland, South Korea, and NYC), the Electronic Music Midwest (EMM) festival, the Ebb and Flow Festival (Baton Rouge), the New Orleans Film Festival, the Audio Mostly conference (Milan), and within a supplement to the Csound Book (MIT Press). Active in experimental music research, he has contributed to the study of human control of continuous analog sensors, of the meaning and environmental impacts of digital art and music performance, and of the auditory display of environmental risk information in augmented reality.
Michael Blandino
Foreign Earth
Our earth may be foreign, but we too have fruit, we too have narwhals, and we too have helmets. We communicate through our soundboards and regurgitators, and this is a letter to you.
Michele Cheng, Hassan Estakhrian
Michele Cheng is an interdisciplinary composer, multi-instrumentalist, improviser, and puppeteer intertwining music, visuals, and theatre to engage with social issues and cultural identities. She builds custom instruments and puppets and collaborates with artists across cultural practices. She has received commissions from the JACK Quartet, National Sawdust, and I Care If You Listen; a grant from New Music USA; and a scholarship from the Atlantic Center for the Arts. Her works have been featured at Roulette (US), MATA Festival (US), CCRMA (US), NYCEMF (US), LMCML (Canada), ICMC (Chile), Espacios (Argentina), ISSTA (Ireland), Sonorities (UK), Musée des Beaux-Arts de Dijon (France), 33OC (Italy), YCM (Netherlands), AMKL (Poland), NTCH (Taiwan), SICMF (South Korea), and TMAO (Thailand), among others. She is a co-founder of the intermedia duo Meoark and of fff, an improv collective led by feminist media artists.
Hassan Estakhrian is a cross-disciplinary musician, intermedia storyteller and audio engineer. He is the musical director of Antenna Fuzz and co-creator of intermedia duo Meoark. Hassan composes, performs on multiple instruments, produces records, and works in immersive audio and virtual acoustics. He is a postdoctoral scholar and lecturer at Stanford University’s Center for Computer Research in Music and Acoustics. More at antennafuzz.com
Meoark (Michele Cheng and Hassan Estakhrian); toys, objects, electric bass, voices, live processing
Meoark (Michele Cheng and Hassan Estakhrian) is an intermedia duo. Grounded in their practice as composers, improvisers, performers, puppeteers, technologists, and storytellers, the duo tells intermedia stories using custom instruments and audio effects, self-built puppets, visuals/projection, and interactive technology. Their work draws inspiration from different genres and cultures of music, including funk, rock, experimental, electronic, and film music. They design and build original puppets – from hand puppets and full-body puppets to mask characters.
The duo has performed at Sonorities Festival in Belfast, Int-act Festival in Bangkok, New Music Gathering in Portland, Oregon, Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University, Harvard University, New York University, and the University of Guelph.
What Sleeps Beneath
“Long ago, along the Wisconsin shoreline, a mother bear and her two cubs were driven into Lake Michigan by a raging forest fire. The bears swam for many hours, but soon the cubs tired. Mother bear reached the shore first and climbed to the top of a high bluff to watch and wait for her cubs. The cubs drowned within sight of the shore. The Great Spirit created two islands to mark the spot where the cubs disappeared and then created a solitary dune to represent the eternal vigil of mother bear.”
– Anishinaabe creation myth for the Sleeping Bear Dunes
What Sleeps Beneath is inspired by the Anishinaabe creation myth for the Sleeping Bear Dunes National Lakeshore and the Manitou Islands in Lake Michigan (USA). It is composed entirely of sound sources recorded in the field at the Sleeping Bear Dunes as part of an artist residency at the Glen Arbor Arts Association. Sound sources include lakeshore soundscapes (waves, foliage, rocks and sediment, various species of fauna, grasslands), antique metallurgy or mechanisms found within historic lakeshore farmhouses, and various fire-starting implements (matches, campfires, torches, etc.).
Kramer Elwell
Kramer Elwell (b. 1990, USA) is a composer, sound artist, and researcher currently based in Worcester, MA. His acoustic and electroacoustic works invoke massive, timbre-rich spaces, spin cryptic and surrealist narratives, and engage with atypical performance practices. His research explores audio drama, graphic and multimedia notation, improvisation, human-computer interaction, installation art, networked performance, interdisciplinary collaboration, soundscape ecologies, and musicological investigations of electroacoustic music.
Kramer’s works have been heard internationally at such conferences as Festival l’Espace du Son, the International Computer Music Conference, the Daegu International Computer Music Festival, Simpósio Internacional de Música Nova, the New York City Electroacoustic Music Festival, the SEAMUS National Conference, Electronic Music Midwest, and many more. Several of his works have also received honors in competitions such as the Metamorphoses International Acousmatic Composition Competition, the Musica Nova International Competition, and the Città di Udine International Competition. He has also been an Artist-in-Residence with organizations such as the Glen Arbor Arts Association, the Atlantic Center for the Arts, and the Kimmel Harding Nelson Center for the Arts.
Kramer is currently an adjunct professor of sound design at Worcester Polytechnic Institute. He holds a PhD in Music Composition and a Master of Science in Media Arts and Technology from the University of California, Santa Barbara, where he studied with Clarence Barlow, João Pedro Oliveira, Curtis Roads, Andrew Tholl, and Karl Yerkes. He also holds a Master of Music degree in music composition from the University of Texas at Austin and two Bachelor of Music degrees from Western Washington University.
Selva Eléctrica
Selva Eléctrica is a piece that sonically reconstructs a memory of the Bolivian Amazon rainforest. Each sound element has been carefully designed to evoke different aspects of the jungle environment, translating its textures, rhythms, and atmospheres into an auditory experience that reinterprets memory through sound.
Mizky Bernal Miranda
Mizky Bernal Miranda (Bolivia, 1991) is a composer, sound artist and improviser. Her practice lies at the intersection of contemporary music, sound art, and performance, with a focus on expanded listening, territory, and the critical use of technology. She holds a Bachelor’s degree in Music and a Master’s in Sound Art from the University of Barcelona.
She is a founding member of Ensamble Inmediato, a collective dedicated to the creation and circulation of experimental works in Bolivia and the region. Her work has been featured at international platforms such as the Sonandes Biennial, Tsonami, Women in New Music Festival, Latentes Iberoamericanas/+s, the Bolivian Contemporary Music Festival, and the Contemporary Music Sessions, among others. Her music has been broadcast on stations such as Radio CASO (Colombia), Frrapó (Germany), and CKWU 95.9FM (Canada). She has collaborated with ensembles including Low Frequency Trio, Quartetto Maurice, OEIN, OSN, Ensamble UL, and Ensamble CG, among others. In 2025 she was selected as an active composer for three major international programs: the 5th International Workshop for Young Composers AntiquaNova (Argentina), the International Computer Music Conference – ICMC 2025 (Boston, USA), and the Darmstadt Summer Course (Germany). She is currently developing commissioned projects across Latin America and Europe, exploring the intersections between composition, sound technology, and experimental music.
Transhuman
As technology improves at a rapid rate, promising to change the way we live and even the essence of what is human (so that the human species can catch up with artificial intelligence), it is now more necessary than ever to ask: as a species, are we ready for a radical and possibly irreversible jump?
The Transhuman trilogy, consisting of three electroacoustic metal hybrid pieces, focuses on finding methods to create balanced hybrid works with the aesthetic of dystopian futurism. The main aim behind this form of hybridity is to create pieces that combine the aggressiveness, heaviness, and darkness of metal with the endless timbral possibilities, spectromorphological thinking, and abstractness of electroacoustic music.
Transhuman, the first piece of the trilogy, is about experiencing the singularity: the point at which every consciousness in the world is merged with artificial consciousness. The line gets blurred and everything becomes abstract. After this point, there is no turning back, and the world we know is no longer the same.
Berk Yagli
Berk Yağlı (born 1999) is a Cypriot guitarist, composer, and producer. His mission with his music has been to address social, political, and philosophical matters in an engaging way that invites listeners to reflect on these topics. He has been active in the UK since 2017. He studied Music and Sound Technology at the University of Portsmouth, completed a Masters in Composition at the University of Sheffield, and is currently at the University of the Arts London, working under Adam Stanovic on a Ph.D. exploring hybridity between metal and electroacoustic music. His works have been presented internationally, including in Argentina (Salta), the UK (Leicester, Plymouth, Sheffield, London, Staffordshire), the US (New York City, Indianapolis, Georgia, Utah, Kansas City, Missouri), Taiwan (Taipei), South Korea (Seoul), Poland (Krakow), Switzerland (Zurich), Ireland (Limerick), Italy (Padova), Mexico (Morelia), Austria (Linz), Australia (Sydney), China (Shenzhen), and more. He is regularly invited to compose at studios including VICC (Visby, Sweden), CMMAS (Morelia, Mexico), ACA – Atlantic Center for the Arts (Florida, USA), EMS (Stockholm, Sweden), the NOVARS Research Centre (Manchester, United Kingdom), the Mediawave International Film Festival (Gyor, Hungary), and Studio Kura (Fukuoka, Japan). He has won numerous awards for his compositions in international music competitions, including Musica Nova (Prague, Czech Republic), IYMC (Atlanta, US), the Penn State International Call for Scores (Pennsylvania, US), WOCMAT (Taipei, Taiwan), ULJUS (Smederevo, Serbia), the Erik Satie International Music Competition, and more.
2 Compositions in 23-edo
These are two compositions in 23-equal-divisions-of-the-octave (23-edo) tuning. They explore some harmonic corners of the 23-edo universe that I discovered. The instruments in this piece are the SURGE virtual synth and the Pianoteq piano physical model, both of which allow fairly easily for microtonal tunings. The first piece is very free, especially in terms of timing and rhythm, and ends up being a sort of piano concerto for Pianoteq and SURGE. Pianoteq drops out for the second piece, which operates fairly strictly within a rhythmic grid that proceeds sometimes fast, sometimes slow, building, finally, up to a breathlessly frenetic conclusion.
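(As a side note on the tuning arithmetic: each 23-edo step is a frequency ratio of 2^(1/23), roughly 52.2 cents. The short sketch below only illustrates that math with an assumed A4 = 440 Hz reference; it is not the composer’s SURGE or Pianoteq configuration.)

```python
import math

STEPS = 23          # equal divisions of the octave
A4 = 440.0          # reference pitch (assumption)

def edo23_freq(step, ref=A4):
    """Frequency of a pitch `step` 23-edo steps above (or below) the reference."""
    return ref * 2.0 ** (step / STEPS)

step_cents = 1200.0 / STEPS                  # ~52.17 cents per step
print(f"one 23-edo step = {step_cents:.2f} cents")

# First octave of the scale above A4
for k in range(STEPS + 1):
    print(k, round(edo23_freq(k), 2), "Hz")
```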
Christopher Bailey
Born outside of Philadelphia, PA, Christopher Bailey turned to music composition in his late teens and pursued studies at the Eastman School of Music and later at Columbia University. He is currently based in Boston and frequently participates in musical events in New York City. Composition for him has always been a personal, inward-looking endeavour, and his music tends to be about color, sound, fantasy, gesture, and emotion. He composes in a wide variety of styles and media. His albums (most recently Waltz, Rain Infinity, and Harvest Kitchen) feature chamber music and electronic works and are available on the New Focus Recordings label through all major streaming and download services.
Cubiculum Soni
Commissioned for the City is Full of Noise 2024 festival, cubiculum soni is both an homage to and a transformation of the Herbert Art Gallery & Museum. This composition does not seek to describe the gallery, nor to tell its story outright. Instead, it listens. It listens deeply to the creaks of the floorboards, the hum of the climate control, the distant echo of footsteps, the murmur of visitors. These fragments — these incidental artefacts — become the raw material of the piece.
The title, cubiculum soni (“sound chamber”), evokes a spatial metaphor: each moment of the work opens a new chamber, a new room of sound. The listener is invited to move through these sonic spaces as though walking through the gallery itself — not guided by curatorial signage, but by texture, resonance, and memory.
There is no linear journey here. Instead, the piece unfolds as a shifting architecture of sound — immersive, abstract, yet uncannily familiar. Sometimes the space feels close, intimate, even claustrophobic; at other times, it expands into reverberant vaults of sound. The listener is both visitor and participant, constructing a personal and ephemeral version of the gallery through auditory experience alone.
In an era saturated with images, cubiculum soni proposes a different form of portraiture: one made of echoes, atmospheres, and absence. It asks not what a place looks like, but what it sounds like when no one is watching.
Benoit Granier
Benoit Granier, born in 1974 in Nantes, France, is a professor, composer, (ethno)musicologist, visual and sonic artist, philosopher, and a lifelong supporter of collaborative works in the arts. After earning a PhD in computer music and composition from Trinity College Dublin in Ireland, he relocated to Beijing, China, to teach at the Central Conservatory of Music, where he forged strong ties between China and other countries, primarily those in Europe and the United States. In 2016, he returned from China to take up the position of Course Director in Music Technology at Coventry University’s School of Media and Performing Arts.
Dr. Granier’s use of traditional instruments (Asian, Irish, Celtic…) in modern electroacoustic music is one of his compositional strengths. He is also well-known for his compositions for “controlled improvisation,” which provide performing musicians with a wide range of options.
Ursonate (reVisited) - Themes and Variations
Ursonate (reVisited) – Themes and Variations pays homage to the long history and influences of contemporary experimental sound art. The sound poem “Ursonate” (“Primal Sonata”) by German Dada artist Kurt Schwitters (1922) contains five themes, which I have performed, recorded, digitally processed, and mixed for stereo presentation. These five themes are made up of nonsensical vocal utterances. True to the nature of Dada!
Jean-Paul Perrotte
Jean-Paul Perrotte is an American composer of French and Ecuadorian descent whose work includes compositions for electronics, acoustic instruments, voice, video, dancers, and improvisation using Max/MSP. His works have been performed internationally and presented in prestigious art galleries such as the Bemis Center for Contemporary Arts in Omaha, Nebraska. Dr. Perrotte has also co-written a chapter with Dr. Van Hoesen, titled Sound Art – New Only in Name: A Selected History of German Sound Works from the Last Century, in the edited volume Germany in the Loud Twentieth Century. Dr. Perrotte received his Ph.D. in Composition from the University of Iowa in 2013. He is currently Associate Professor of Composition and Director of the ElectroAcoustic Composition Laboratory at the University of Nevada, Reno. His scores are available for purchase at https://www.babelscores.com/JeanPaulPerrotte and his latest release on Ravello Records, “Jean-Paul Perrotte a Collection of Works,” is available on all major streaming services.
Scam Likely
There are four serious allegations pressed on your name, and once they get expired after that you will be taken under custody by the local police.
Caroline Flynn
Caroline Flynn is a composer, songwriter, and performer currently living in Kalamazoo, Michigan. Her work, while covering a variety of genres and styles, typically has an emphasis on voice and text, glitch, and feelings of uncanny valley that result from the combination of natural and artificial aural elements. Having earned both a B.A. in creative technologies in music and a B.S. in psychology from Virginia Tech, Caroline integrates this academic background to create music that is concerned with human perception, assumptions, reactions, and emotions. Caroline is currently pursuing a Master of Music in Composition, as well as teaching in the Composition and Multimedia Arts Technology departments, at Western Michigan University.
Dog Days, for analog and digital synthesizers + fixed media
Dog Days for Analog and Digital Synthesizers + Fixed Media is a piece of electronic music that I composed using an organic method of composition that I learned from Dr. Paulo Chagas of the Experimental Acoustics Research Studio (EARS) at the University of California, Riverside (UCR). I recorded source audio (the sound of water being squirted into a dog’s bowl), which I subsequently processed. I then employed a spectral analysis of the source audio to help me determine the pitch material for the analog and digital synthesizers. The synthesizers, in turn, were operated like sequencers, playing the melodic material given to them, which itself was derived from pitch classes yielded by the above-mentioned spectral analysis. Toward the end of the piece, additional digital pad synthesizers play sonorities derived from the melodic material or series. It is my hope that, as a result of this organic process, the pitch-based material and the fixed media material comprise a unified whole. Most importantly, I hope you find it musically meaningful in some way. Thank you for listening!
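(The composer’s exact analysis tools are not specified in the note above; the sketch below only illustrates, in general terms, how a spectral analysis can yield pitch classes from a recording. The synthesized test signal stands in for the actual source audio, and the peak-picking thresholds are arbitrary.)

```python
import numpy as np

SR = 44100
# Stand-in for the processed source recording (the actual audio is not
# reproduced here): a short test signal with a few prominent partials.
t = np.arange(2 * SR) / SR
source = (np.sin(2 * np.pi * 196.0 * t)            # G3
          + 0.6 * np.sin(2 * np.pi * 311.1 * t)    # ~E-flat 4
          + 0.4 * np.sin(2 * np.pi * 466.2 * t))   # ~B-flat 4

# Spectral analysis: pick the strongest peaks and quantize them to pitch classes.
window = np.hanning(len(source))
mag = np.abs(np.fft.rfft(source * window))
freqs = np.fft.rfftfreq(len(source), d=1.0 / SR)

valid = freqs > 50.0                               # ignore DC / rumble
strongest = np.argsort(mag[valid])[-20:]           # 20 strongest bins (arbitrary)
peak_freqs = freqs[valid][strongest]

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
def pitch_class(f, a4=440.0):
    """Nearest equal-tempered pitch class for a frequency in Hz."""
    return NOTE_NAMES[int(round(69 + 12 * np.log2(f / a4))) % 12]

print(sorted({pitch_class(f) for f in peak_freqs}))    # candidate pitch material
```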
Patrick Gibson
Patrick Gibson is a composer, educator, and electric guitarist based in Anaheim, CA. He holds a Ph.D. in Digital Music Composition from the University of California, Riverside; an M.M. in Music Composition from the Bob Cole Conservatory of Music at California State University Long Beach; an M.Ed. in Cross-Cultural Education from National University; and a B.A. in Music Theory and Voice from Loyola Marymount University (LMU), Los Angeles, CA. He composes and conducts concert music for the Martians Chamber Group; composes concert music for soloists, chamber ensembles, concert bands, and orchestras; conducts and directs elementary and middle school choirs; and performs his original songs with his band, Kirk Out. He composes customized media music for clients, as well as for academic projects. He is General Music Teacher and Choir Director for four schools in the Long Beach Unified School District, in Long Beach, CA. His works have been performed by members of the California E.A.R. Unit, HUB New Music, and the Martians Chamber Group, among others. His work has appeared at Electroacoustic Music festivals in New York City, at Purdue University, and in Vaulx-en-Velin, France.
CLUB CONCERT #3 – Somax2
Wednesday, June 11; 10:00pm – Midnight
The 88 Club, Kings Dining & Entertainment, 50 Dalton St, Boston
ID
Title
Author(s)
Performers
Somax2 improvisations
ICMC Delegates Participating in the Somax2 Workshop (Wednesday Morning, June 11)
Individuals participating in the Somax2 workshop on Wednesday morning, June 11, at The Loft, Berklee College of Music
eTu{d,b}e à trois
eTu{d,b}e is a performance series pairing improvising musical agents with a performer on the eTube, an augmented instrument built around a saxophone or clarinet mouthpiece and a custom controller interface. The eTube is light, flexible, and directional, producing everything from guttural growls to sonic bursts, supported by an immersive soundscape created by the agents.
For ICMC, this performance is expanded to include performers on the feedback saxophone – a saxophone augmented with a microphone, an amplified loudspeaker, and a custom signal processor that allows precise control over the amplitude, tuning, and timbre of the feedback tone – as well as the augmented clarinet, a traditional clarinet augmented with a microphone.
Our work reimagines what interactive improvisation practices encompass through performance with improvising musical agents alongside human performers. The agents are trained on audio corpora and create musical phrases in response to a live microphone input.
Kasey Pocius, Tommy Davis, Greg Bruce, Maryse Legault, Vincent Cusson
Kasey Pocius
Originally from St. John’s, Newfoundland, Kasey Pocius is a gender-fluid intermedia artist based in Montreal who grew up experimenting with multimedia software while also pursuing classical training in both viola and piano. Outside of fixed electronic works, they have also pursued mixed-media performances with live electronics, both as a soloist and in comprovisatory collaborative environments. They are particularly interested in multichannel spatialization and how this can be used in group improvisatory experiences. From institutions such as Harvard and CIRMMT to DIY galleries, Pocius’ live and fixed media works with electronics have been programmed at dozens of local and international festivals and conferences in Europe, the Americas, Oceania, and Asia, including ICMC, BEAST, ACMA, Festival de la Imagen, Lux Magna, Sound Symposium, and many others. They are a part-time faculty member at Concordia and a researcher at the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) and the Input Devices and Music Interaction Laboratory (IDMIL), as well as the Technical Director for the Groupe de Recherche sur la Médiatisation du Son (GRMS).
Maryse Legault, whose playing has been described as “transcendent” by the Wall Street Journal, received her master’s degree in historical clarinet performance from the Koninklijk Conservatorium Den Haag in 2017, under Eric Hoeprich. Legault regularly joins numerous period-instrument ensembles in Canada and internationally, including Teatro Nuovo, Tafelmusik Baroque Orchestra, the Handel & Haydn Society, Arion Orchestre Baroque, MusicAeterna, and Les Siècles. Her debut solo album Around Baermann, recorded with Gili Loftus, has received critical acclaim since its release on the Leaf Music label in 2023.
Maryse premiered her first piece for clarinet, live signal processing, and fixed media, A Hundred Waves, based on the sound world of synthesizer pioneer Suzanne Ciani, at live@CIRMMT in December 2022. She also collaborated with composer, sound and video artist Pierre-Luc Lecours to create a series of progressive studies for the modular synthesizer as part of a CIRMMT-supported research project. In 2024, she completed a residency at Elektronmusikstudion in Stockholm, where she recorded the basic material for her next album on the famous Buchla 200.
Greg Bruce is a saxophonist, improviser, and composer searching for new sounds and modes of performance through obsolete technology. His work is an anarchistic amalgam of contemporary classical techniques, improvised folk melodies, and minimalist ostinati. He harnesses the forgotten power of acoustic feedback, contact microphones, and tape media, using breath energy to drive his saxophones through visceral machines. Through this work he investigates the human/machine dialectic and seeks to invoke a post-digital future: when art and science are noisily wrested from their slavery to the algorithm.
Hailing from St. John’s, Newfoundland and Labrador, Greg Bruce is a Doctor of Musical Arts who has spent over 20 years in the music industry as a saxophonist, composer/arranger, and band leader. During this time Greg has drawn on his training in classical and jazz to record, tour, and perform in every setting imaginable: from conferences in the US to street parties in Canada to palaces in Russia. As an educator, Greg worked as Applied Music Instructor at the College of the North Atlantic; he has taught privately most of his career; he has led workshops and masterclasses at the University of Toronto; and worked as a per-course instructor at Memorial University. Since fall 2024, Greg has been working as a postdoctoral scholar at McGill University in Montréal, QC.
Kasey Pocius, Tommy Davis, Greg Bruce, Maryse Legault, Vincent Cusson
#BUSHWICK
Set against the vibrant backdrop of Bushwick, a diverse neighborhood in Brooklyn, NYC, this composition explores the essence of community through a beat story narrative in musical structure. Each section of the piece introduces musical motifs that function like characters, evolving and propelling the story forward. The enigmatic @TheBrooklynPoet etches fleeting graffiti of calm and beauty on the sidewalks, capturing moments of serenity in Bushwick’s most unexpected places. Amidst these reflections, the pulsating energy of the neighborhood emerges. As the narrative unfolds, the structure itself gives way to the community’s unyielding rhythm – persistent, chaotic beats clash and weave together in a struggle for cohesion, reflecting the raw, improvisatory spirit of Bushwick’s dynamic energy.
Special thanks to cellist Madeleine Shapiro for her contributions to the fixed media.
Mary Simoni
Mary Simoni is a composer, pianist, author, educator, consultant, and administrator. She currently serves as the Special Advisor to the Provost at Rensselaer Polytechnic Institute and Professor Emerita of Performing Arts Technology at the University of Michigan. Her music and multimedia works have been performed in Asia, Europe, and throughout the United States and have been recorded by Centaur Records, the Leonardo Music Journal published by the MIT Press, and the International Computer Music Association. She is the recipient of the Prize in Composition from the ArtNET Virtual Museum and was named a semi-finalist for the American Prize in Composition – Chamber Music. She has authored several books, including Algorithmic Composition: A Guide to Composing Music with Nyquist, co-authored with Roger Dannenberg, and Analytical Methods of Electroacoustic Music. She is a Medal Laureate of the Computer World Honors Award for her research in digital Music Information Retrieval. Her work as a pianist and Steinway Artist specializes in the use of interactive performance systems that extend the sonic capabilities of traditional acoustic instruments. She has consulted for the Canadian Innovation Foundation, the National Science Foundation, the National Endowment for the Humanities, the National Peace Foundation, and numerous universities and arts agencies throughout the world. The Knight Foundation, the Kellogg Foundation, the National Science Foundation, and the Michigan Council for the Arts and Cultural Affairs have funded her research.
Philipp Stäudlin of the Hinge Quartet
Window Dressing
Window Dressing is an improvisation for cello and algorithmic accompaniment system. The cello’s sound is analyzed in real-time using perceptual audio descriptors and machine learning algorithms, which generate data that drives various synthesis engines. The system listens to the live input and generates musical phrases, creating a reciprocal exchange between performer and technology.
The performer embraces a new improvisational approach, emphasizing free-form textures and extended cello techniques. The system detects gestures and articulations, responding to nuances in bowing, dynamics, and rhythm. The performer guides the system using an acoustic musical language, creating an active, evolving dialogue between acoustic and synthesized sound.
The system is composed of several modules that analyze pitch, harmony, rhythm, dynamics, and timbre over short, medium and long time windows. The synthesis engine, built in MaxMSP, uses a combination of concatenative synthesis, physical modeling, and other techniques to process and manipulate the live sound.
The system has two primary goals. The first is to free the performer from traditional controllers like MIDI keyboards or foot switches, enabling full immersion in the music and fostering a more mindful, intuitive collaboration. The second is to ensure the audience can intuitively perceive the connection between the performer’s gestures and the sound being generated. Often in performances with live electronics, it’s unclear what is happening in real time versus what is pre-recorded, which can diminish the “liveness” of a live performance. This system aims to make the real-time interaction unmistakable, restoring the dynamic energy that defines live music.
Unlike the generative AI art that is quickly becoming ubiquitous in media, the machine learning techniques employed in this piece function more as a translator than a creator. They refine raw audio signals into musically meaningful data, which is then used to control conventional synthesis techniques such as samplers, oscillators, delays, etc. In contrast to AI systems that “learn” from large datasets to generate new content, this system is highly specific, designed to listen and respond to the performer and play back phrases constructed from a curated library of sounds. Its decisions in response to the live audio are based on the musical preferences programmed into the system.
Programming this system is akin to composing a score. However, unlike a conventional score, the musical language is a dynamic system of cause and effect, which changes depending on the nature of the performance.
The goal is to maintain the expressiveness and variety of sound generation while removing the need for technical interventions by the performer. The cello is the only controller; the gestures and articulations are the instructions. This enables greater focus and presence from the performer.
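(The Max/MSP system described above is not reproduced here; the sketch below is only a generic illustration of the pattern it describes – extracting perceptual descriptors from short analysis windows and mapping them to synthesis-control values. The descriptor choices, scaling factors, and parameter names are invented for the example, not taken from the piece.)

```python
import numpy as np

SR = 44100
HOP = 1024   # analysis window length in samples (assumption)

def descriptors(frame, sr=SR):
    """Two simple perceptual descriptors for one analysis frame."""
    rms = float(np.sqrt(np.mean(frame ** 2)))                       # loudness proxy
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    centroid = float(np.sum(freqs * mag) / (np.sum(mag) + 1e-12))   # brightness proxy
    return rms, centroid

def control_mapping(rms, centroid):
    """Hypothetical mapping from descriptors to synthesis parameters."""
    grain_density = float(np.clip(rms * 4.0, 0.0, 1.0))        # louder playing -> denser response
    filter_cutoff = float(np.clip(centroid, 100.0, 8000.0))    # brighter playing -> brighter response
    return {"grain_density": grain_density, "filter_cutoff": filter_cutoff}

# Stand-in for a live cello input: one second of noise shaped by an envelope.
cello = np.random.randn(SR) * np.linspace(0.0, 0.3, SR)
for start in range(0, SR - HOP, HOP * 10):                     # analyze every ~10 hops
    frame = cello[start:start + HOP]
    print(control_mapping(*descriptors(frame)))
```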
James Staub
James Staub is a composer, programmer, sound artist, and multi-instrumentalist whose work bridges the worlds of acoustic performance and machine intelligence. A longtime guitarist, he began playing the cello in 2022, embracing the instrument as an experimental improviser. His creative practice focuses on the cello’s timbral vocabulary and on extending its voice through interactive audio processing systems.
James’s approach to electroacoustic improvisation involves designing software environments that analyze sound for musically relevant characteristics and respond in expressive, non-linear ways. Though these systems are complex, they are not chaotic—the sound they produce correlates with the energy and phrasing of the performer, creating a dynamic interplay between human and machine. By pairing an acoustic instrument with these performance systems, James explores call-and-response dynamics that are both intuitive and compelling.
Research in music information retrieval and descriptor-based concatenative synthesis forms the backbone of much of James’s work. He is particularly interested in instrument augmentation, data sonification, and extending the expressive range of acoustic instruments. His projects often blur the line between performer and electronics, inviting audiences to experience a real-time collaboration between human and algorithm.
In 2024, James released ARC!, an EP of improvisational music for cello and machine-listening software. In addition to his performance work, James is the developer of Euclip Drum Console, a collaborative, programmable, web-based drum machine app. This hybrid tool combines the structure of a DAW-style user interface with the creative engineering potential of live coding.
James holds a dual degree in Music Technology and Interactive Media from Northeastern University, and works as a software developer in the music notation space.
James Staub
On Wood, for performer and electronics
On Wood explores the intersection of human curiosity and artificial intelligence through a semi-improvised interaction between a performer and three AI-driven (Somax2) systems. The performer engages with a wooden string instrument distinctively augmented with transducer speakers. Electronic processing allows the wood of the instrument itself to become a primary resonating body, transforming it into both a sound source and a responsive, vibrating entity.
Guided by text-based instructions, the performer approaches the instrument with a spirit of discovery, akin to a first encounter with an unknown object. This process involves examining its surfaces, coaxing sounds from it, and imagining its sonic potential, eventually leading to the formation of chords and melodic ideas. The electronic process also generates spontaneous pitch materials arising from the combined interaction of the performer’s physical touch, the resonant properties of the wood, electronic oscillators, and the AI’s responses through ring modulation.
The three Somax2 performers dynamically respond to the human performer’s and one another’s actions. This interaction fosters an evolving and intricate musical dialogue. Through the performer’s exploration and the AI’s contributions, the piece may evoke or suggest a range of imagined sonic instances, from human hum and dialogue fragments, gayageum and acoustic guitar, to woodblocks and janggu. These evocations arise from the transformative sound textures and the intuitive, exploratory nature of the performance.
On Wood centers on this nuanced interplay between the physical instrument, the performer’s gestures, and the intelligent electronic soundscape, creating a performance of interactive discovery.
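(Ring modulation, mentioned above as one source of the piece’s spontaneous pitch material, is simply the multiplication of two signals, which produces sum and difference frequencies. The sketch below demonstrates that with arbitrary frequencies; it is not the patch used in On Wood.)

```python
import numpy as np

SR = 44100
t = np.arange(SR) / SR                       # one second of time

carrier = np.sin(2 * np.pi * 220.0 * t)      # e.g. a resonating string partial (assumption)
modulator = np.sin(2 * np.pi * 137.0 * t)    # e.g. an electronic oscillator (assumption)

ring = carrier * modulator                   # ring modulation: sidebands at 220 ± 137 Hz

# Verify the sidebands: the spectrum peaks near 83 Hz and 357 Hz.
spectrum = np.abs(np.fft.rfft(ring))
freqs = np.fft.rfftfreq(len(ring), d=1.0 / SR)
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(np.round(peaks, 1)))            # approximately [83.0, 357.0]
```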
Joogwang Lim
Joogwang Lim (b.1992) is a Korean composer based in Boston. His musical style is characterized by a unique fusion of various ideas, including myths, stories, plays, religious texts, ideologies, and scientific theories. Central to his compositions are the lively narratives, the foundation for the firm and intricate structures he organizes, and the physicality of live performance. Lim’s music is replete with expressive elements, such as texts, symbols, and leitmotifs, which enrich the listening experience. In addition to exploring narrative and structure, he challenges audience perceptions and preconceptions through musical irony, unconventional spatial arrangements, multimedia projections that contradict expectations, and incorporation of theatrical elements.
He earned a B.Mus. in composition at Seoul National University and an M.M. and D.M.A. in composition at Boston University. He is currently living in Boston, United States.
Joogwang Lim
Etude, for violinist and two Somax2 players
This is the first piece of a series of etudes for Somax2 and acoustic instruments. The acoustic part here is created based on Dong Zhou’s long-term project “Found Violin”.
Dong Zhou
Dong Zhou (no pronouns) is a composer-performer based in Hamburg. Zhou gained a B.A. in music engineering at the Shanghai Conservatory and an M.A. in multimedia composition at the Hamburg University of Music and Drama. Zhou has won several prizes, including first prize in the 2018 ICMC Hacker-N-Makerthon, a finalist place in the 2019 Deutscher Musikwettbewerb, and the Nota-n-ear Award 2022. Zhou has had works included in the ‘Sound of World’ Microsoft ringtones collection and has been commissioned by festivals and institutions such as the Shanghai International Art Festival, ZKM Karlsruhe, and the Stimme X Festival. Zhou is currently a doctoral candidate at ICAM, Leuphana University, and a member of Stimme X e. V. Zeitgenössisches Musiktheater Norddeutschland and the Deutscher Komponistenverband Hamburg.
Dong Zhou
DER BLÜTENZWEIG
The inspiration for this piece comes from Hermann Hesse’s poem “Der Blütenzweig”. The flower branch sways up and down in the wind, wandering like a child through the alternating days of light and darkness.
YiTing Shao
Originally from China, YiTing Shao is currently pursuing a PhD at Hanyang University in South Korea. Wishing you a wonderful day.
YiTing Shao
Inclusive co-creative telematic performance using Somax2
This performance explores the integration of AI-driven co-creation, telematic music-making, and accessibility, using JackTrip for real-time, ultra-low-latency audio streaming and Somax2, an AI-based interactive software developed at IRCAM. AI functions as a dynamic musical partner, responding to and augmenting human input in real time. It enhances artistic agency, adapting to live musical gestures and shaping the evolving sonic landscape.
At the core of the performance is a dialogical interaction between human musicians and AI, which actively engages with the performers, extending their musical intentions. The improvisational nature of this iteration foregrounds spontaneity, revealing how AI can foster new modes of real-time musical co-creation. Future explorations will expand beyond improvisation to structured ensemble settings and real-time compositional processes, further refining the balance between human intention and algorithmic response.
By leveraging networked technologies, this performance extends the possibilities of musical collaboration beyond geographical constraints. Remote interaction is not merely a technical solution but a means of broadening participation and access, particularly for musicians with disabilities and artists from underrepresented communities. Telematic performance reconfigures traditional roles in music-making, fostering more distributed authorship and collaborative artistic decision-making.
The integration of AI into this setting also raises critical aesthetic and ethical questions. How does AI influence musical authorship? How can technology be shaped to reflect diverse creative voices? These concerns drive ongoing research into the intersection of human-machine collaboration, inclusivity, and artistic agency.
Ultimately, this work envisions AI and telematic music as tools for expanding expression—fostering a more open, participatory, and equitable landscape for musical creation.
Hans Kretz
Hans Kretz is a conductor, pianist, educator and researcher working across philosophy and music. He holds a PhD in Practice-Based Music from the University of Leeds and a PhD in Philosophy from the University of Paris 8 Vincennes-Saint-Denis. He integrates his roles as an improviser, conductor, and creative artist with technological experimentation, informed by his experience collaborating with the Center for Computer Research in Music and Acoustics (CCRMA) in network performance and AI-driven co-creativity. His musical endeavors are part of an ongoing practice-based research that seeks to reassess the intersections of social arts, social sciences, and engineering sciences through the lens of judgment and artistic anthropology, through participation in projects such as the EU Horizon project Multisensory, User-centred, Shared cultural Experiences through Interactive Technologies. His writings have appeared in the Recherches d’Esthétique Transculturelle series by L’Harmattan and in the Cahiers Critiques de Philosophie. His PhD in philosophy, Esthétique transculturelle et philosophies du jugement, was recently published by L’Harmattan. Recent conferences include CeReNeM (University of Huddersfield), the Symposium on Computer Music Multidisciplinary Research (CMMR) in Tokyo, the IRCAM Forum, and the Orpheus Institute.
Ewe Larsson, Joel Mansour, Patricia Alessandrini, Hans Kretz, Cássia Carrascoza Bomfim, Tatiana Catanzaro, Constantin Basica, & Jan Hansen
CLUB CONCERT #4
Thursday, June 12; 10:00pm – Midnight
Raytheon Amphitheatre (240 Egan Research Center), Northeastern University
ID
Title
Author(s)
Performers
Csound in the Metaverse improvisations
ICMC Delegates Participating in the "Csound in the Metaverse" Workshop (Thursday Morning, June 12)
Zero Gained
Zero Gained presents a dystopian narrative without words—an elegy for civilizations consumed by their own creations. At its core, this is music that resists easy listening. It demands that you witness what remains when the signal fades and the world falls silent… leaving behind only rusted skies and the pulse of machines dreaming in ruin.
The full album consists of thirteen tracks, three of which are presented at this year’s ICMC:
1. Fragments of Fire
Beneath a blood-orange sky, embers stirred in silence. Flames pulsed through broken cities, unraveling what was left. Steel cracked, towers folded. Heat rose in waves, searing the air, leaving nothing untouched. The earth trembled as fire claimed it piece by piece.
3. Echoes of Extinction
The grinding gears of war-machines drowned out the dead world’s final whispers. The air reeked of dried, burned soil. In the distance, the dying hum of a lone neon sign flickered against the dark, lost in the echoes of extinction.
13. Zero Gained
Walls of sound shattered the silence—a relentless wave crashing, each hit harder than the last. The ground shook with the violence of a thousand battles, the air thick with the stench of blood. There was no peace, only relentless conflict, marked by the pulse of survival.
Krzysztof Wolek (University of Louisville)*
Krzysztof Wołek (b. 1976, Bytom, Poland) is a versatile composer, improviser, and electronic music performer. A Professor of Music Composition and Director of Digital Composition Studies at the University of Louisville, he has received numerous awards and commissions, including the 2025 Guggenheim Fellowship, the Kentucky Arts Council Al Smith Fellowship, and the Century Fellowship from the University of Chicago. His works span a broad spectrum, from purely acoustic, improvisational, and electronic to various forms of multidisciplinary collaboration.
As a composer and performer, Wołek has collaborated with prominent ensembles and artists, including TALEA, the Grossman Ensemble, the National Polish Radio Symphony Orchestra (NOSPR), Camerata Silesia, and long-standing collaborators like Agata Zubel, Małgorzata Walentynowicz, Tempo Reale, and Ensemble OMN. His recordings appear on ANAKLASIS and CD Accord labels, and his works are published by PWM Music Edition.
Krzysztof Wolek (University of Louisville)*
Modular Risset Rhythm Etude
Risset rhythms create the illusion of perpetual acceleration or deceleration, even though physical tempo cannot increase or decrease forever. This paradox is resolved through layered rhythmic cycles that shift in phase and amplitude. The result is a temporal illusion: the rhythm feels like it’s always speeding up or slowing down, yet the piece goes on, suspended in a continuous perceptual spiral.
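(For the curious, the standard construction behind a Risset rhythm – not necessarily the one used in this etude – layers pulse streams whose tempi are related by factors of two, with each layer fading in as it enters at the slow end and fading out as it reaches the fast end, so the loop can repeat indefinitely. A rough sketch, with arbitrary loop length and base tempo:)

```python
import numpy as np

SR = 44100
LOOP_SECONDS = 8.0        # length of one cycle of the illusion (assumption)
N_LAYERS = 4              # simultaneous tempo layers
BASE_BPM = 60.0

def click(length=200):
    """Short decaying click used as the pulse sound."""
    return np.exp(-np.linspace(0.0, 8.0, length))

def risset_cycle():
    n = int(SR * LOOP_SECONDS)
    out = np.zeros(n)
    phase = np.arange(n) / n                          # 0..1 across the loop
    clk = click()
    for layer in range(N_LAYERS):
        # Each layer's tempo doubles smoothly over the loop; layers sit an
        # "octave" of tempo apart, so the end of one matches the start of the next.
        bpm = BASE_BPM * 2.0 ** (layer + phase)
        beats = np.cumsum(bpm / 60.0 / SR)            # running beat count
        # Fade layers in and out so entries and exits are inaudible.
        amp = np.sin(np.pi * (layer + phase) / N_LAYERS) ** 2
        for i in np.flatnonzero(np.diff(np.floor(beats)) > 0):
            end = min(i + len(clk), n)
            out[i:end] += amp[i] * clk[: end - i]
    return out / np.max(np.abs(out))

loop = np.tile(risset_cycle(), 4)   # repeating the cycle sustains the ever-accelerating illusion
print(loop.shape)
```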
Matthew Davidson
Matthew Davidson is an Associate Professor at Berklee College of Music.
Matthew Davidson
The "Dance" in the Place Congress
The “Dance” in the Place Congress. In New Orleans, a place known as Congo Square has held numerous festivals of music and dance called “Dance in the Place Congo.” Similarly named events were held in the Congo as well. In some African countries, such as Ghana, there is no separate word for music – it exists only in tandem with dance.
This piece is intended as a parody of Insurrectionist-in-chief Donald Trump’s Big Lie that the 2020 election was stolen from him, and of his repeated lies and attempts to overturn the results that ultimately led to the January 6th Insurrection at the Capitol. The quotation marks around ‘Dance’ in the title refer to the fact that the attack on the Capitol was an Insurrection, despite repeated attempts to “dance” around that fact by calling it merely a peaceful protest gone awry, a sight-seeing tour, a false flag operation orchestrated by Antifa, or a “Stop the Steal” rally, etc. “The Place Congress” is the Capitol itself. The words/vocal sounds used in this piece are exclusively those of Donald Trump, mainly taken from the speeches given the day of the attack. (A few are from prior to January 6th.)
The words have been subjected to many digital manipulations, including transposition, time-stretching, vocoding, granulation, shuffling, filtering of various kinds, and splitting of some words into component phonemes, e.g., to create the word “dance”. As a “tribute” to Trump’s incessant lying, bloviating, and projection, I used rap (with pop-style accompaniments) as an essential component, in addition to turning many of Trump’s despicable putdowns of others back onto himself.
The final portion of the piece, which follows actual recordings of the protests and rioting (with the bass in a guided improvisation “accompanying” it) reflects the sadness and grief I felt at nearly losing our Democracy that day – a danger that is now growing worse with Trump’s re-election, his unconscionable pardoning of the January 6th rioters, and his many illegal and/or unconstitutional directives as well as his blatant corruption.
This piece was written for my longtime friend, colleague, and stellar musician, Andrew Kohn.
David Taddie
David Taddie received the BA and MM in composition from Cleveland State University, where he studied with Bain Murray and Edwin London, and the PhD from Harvard University, where he studied with Donald Martino, Bernard Rands, and Mario Davidovsky. He has written music for band, orchestra, choir, solo voice, and a wide variety of chamber ensembles, as well as electroacoustic music. His music has been widely performed in the United States, Europe, Asia, and Australia by numerous soloists and ensembles including the Cleveland Orchestra, Cleveland Chamber Symphony; the University of Iowa, University of Miami (FL), Kent State University, and West Virginia University Symphony Orchestras; Alea III, the New Millennium Ensemble, the California Ear Unit, the Core Ensemble, the Cabrini Quartet, the Mendelssohn String Quartet, the Portland Chamber Players, the Gregg Smith Singers, and many others. He has received several prestigious awards, including ones from the American Academy of Arts and Letters, the Koussevitzky Foundation, the Fromm Foundation, and the Music Teachers National Association. Recordings of his music can be heard on the Ravello, New Focus, and SEAMUS CD labels.
He is currently Emeritus Professor of Music at West Virginia University. He lives in Morgantown with his wife, Karen, and in addition to making music, enjoys spending time with his grandkids as well as gardening and speaker building.
Andrew Kohn, double bass
Andrew Kohn teaches string bass, music theory, chamber music, and music composition at West Virginia University. The former principal bassist of the National Chamber Orchestra (now the National Philharmonic) and the Harrisburg Symphony, he is a member of the Pittsburgh Ballet Theatre, Opera, and Opera Theatre Orchestras. A member of the Board of the International Society of Bassists, he has performed, lectured and adjudicated at several international conventions in Italy, Poland, and the U.S. His publications concerning bass repertoire and pedagogy have addressed Bach, Chihara, Koussevitzky, Marcello, Rabbath, Rossini, Simandl, women composers, and several pedagogical topics, and include over 40 reviews for American String Teacher. He also edited and published the collected writings of his first college bass professor, Theron McClure. He has released recordings on Albany, Music Minus One, Ravello, and self-releases.
Dr. Kohn’s activities as a music theorist include conference papers and publications concerning Bach, Edward T. Cone, Dallapiccola, Pärt, Poulenc, and Wolpe. He is an active composer, with an emphasis on choral music and instrumental solos and duos, with compositions and editions available through Jason Heath’s website and Sheet Music Plus.
Synthrospection: Exploring Audio Feedback in Live Coding
The performance, Synthrospection: Exploring Audio Feedback in Live Coding, explores the creative possibilities of audio feedback within live coding, an improvisational art where music and sound are generated in real-time through code. Feedback, the process of routing output signals back into input, produces dynamic and unpredictable sonic effects, from subtle resonances to intense oscillations. By incorporating feedback loops into this context, the performance examines how these loops can create evolving soundscapes that surpass initial creative intentions.
Audio feedback plays a central role in this performance. The system integrates feedback mechanisms into SynthDefs (SuperCollider synthesis definitions) controlled by TidalCycles, a live coding platform for musical patterns. Sounds are continuously reintroduced into the system, processed, and altered in real-time, fostering a recursive interplay between performer, code, and sound. This process mirrors the way a guitarist interacts with feedback from their amplifier, creating a dynamic dialogue between input and output that reshapes the sonic landscape.
The improvisational aspect of live coding adds complexity, requiring the performer to adapt their code in response to the evolving feedback-generated sounds. This interaction highlights feedback as more than a sonic effect—it becomes a tool for real-time exploration, creating a balance between control and emergence. This approach pushes the boundaries of live coding, where the manipulation of audio feedback remains a relatively unexplored area, offering opportunities for innovative sound creation.
While feedback is not new in music, its digital application in live coding offers fresh perspectives. Like a guitarist using feedback to sustain notes or generate overtones, this performance employs digital tools to navigate similar creative terrain. By integrating the unpredictable nature of feedback into the structured framework of live coding, the performance reimagines traditional sonic practices in a modern, code-driven context.
Synthrospection: Exploring Audio Feedback in Live Coding demonstrates how feedback can inspire novel approaches to sound design. By combining improvisation with recursive processes, the performance engages with feedback as a dynamic, interactive element that reshapes live coding. This fusion of structure and spontaneity reveals the potential of feedback to generate unexpected and compelling auditory experiences, encouraging a reexamination of both live coding and feedback as creative practices.
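As a rough illustration of the feedback routing described above, and in Python rather than the SuperCollider/TidalCycles environment the performance actually uses, the sketch below feeds a short delay line's output back into its input through a one-pole low-pass filter and a soft clipper; with the feedback gain above 1 the loop sustains and slowly mutates on its own after a brief noise burst. The parameter names and values are illustrative assumptions, not taken from the piece.

"""Illustrative digital feedback loop (a Python sketch, not the performance system).

A short delay line feeds its own output back into its input through a one-pole
low-pass filter and a soft clipper. With the feedback gain near or above 1 the
loop self-oscillates, so even a tiny input burst grows into sustained, slowly
mutating tones -- the behaviour the performance explores through live code.
"""
import numpy as np

SR = 44100
DUR = 4.0                      # seconds to render
DELAY = int(0.011 * SR)        # ~11 ms delay line -> ~90 Hz resonance
FEEDBACK = 1.02                # >1: the loop sustains itself after the seed input
LP = 0.2                       # one-pole low-pass coefficient inside the loop

rng = np.random.default_rng(0)
seed = rng.normal(0, 0.1, int(0.05 * SR))   # 50 ms noise burst to excite the loop

delay = np.zeros(DELAY)
out = np.zeros(int(SR * DUR))
lp_state = 0.0
widx = 0
for n in range(len(out)):
    x = seed[n] if n < len(seed) else 0.0
    y = delay[widx]                          # read the oldest delayed sample
    lp_state += LP * (y - lp_state)          # darken the loop a little each pass
    fb = np.tanh(FEEDBACK * lp_state)        # soft clip keeps the loop bounded
    delay[widx] = x + fb                     # write input + feedback back in
    widx = (widx + 1) % DELAY
    out[n] = y

out /= max(1e-9, np.max(np.abs(out)))        # normalise before playback or writing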
Atsushi Tadokoro
Live coder and creative coder exploring the boundaries of sound and visual art. Associate professor at Maebashi Institute of Technology, and part-time lecturer at Tokyo University of the Arts and Keio University. Born in 1972, he creates musical works by synthesizing sounds using algorithms and improvises with sound and images using a laptop computer. In recent years, he has also presented a variety of audio-visual installation works that have been exhibited internationally. His work has been selected for international conferences, including ICMC (International Computer Music Conference) in 2024, 2015, and 1996; ICLC (International Conference on Live Coding) in 2024, 2020, 2019, 2016, and 2015; and NIME (New Interfaces for Musical Expression) in 2016. He teaches courses on creative coding at university, covering platforms like openFrameworks and Processing. His lecture materials, openly accessible at https://yoppa.org/, have become an essential resource for many aspiring creative coders and are used by thousands of students and creators. He is the author of “Beyond Interaction: A Practical Guide to openFrameworks for Creative Coding” (BNN, 2020) and an author of “Performative Programming: The Art and Practice of Live Coding – Show Us Your Screens” (BNN, 2018).
Atsushi Tadokoro
Conjure II
Before the sound, there is a breath. Before the explosion, a threshold. Before the noise, silence. In this performance, the balloon becomes a vessel for memory, pressure, and transformation. Breath, shaped by the mouth and stretched latex, releases noise that resists translation—alien textures of hissing and vibration that unsettle the body’s borders. These sounds reflect the fragility and strength of a body in motion with its past.
Huichun Yang
Huichun Yang is a Providence-based Taiwanese sound artist, improviser, and computer music programmer. Inspired by the physicality of balloons, her performance explores breath as sonic material, speaking with her history and surroundings. She works with pressure and tension that disturb the boundaries of bodies, making balloon noise with latex, breath, and mouth shape to create different resonances—an untranslatable voice, detached from context but visceral and tactile in vibration. She is interested in spatial audio environments that disturb the hierarchy of listening, from linear to parallel.
Huichun has performed at Residual Noise (2025, Providence) and the SOUND/IMAGE Festival (2024, London), among other events. She has led livecoding workshops at RISD Digital Media and NYU ITP.
Huichun Yang
/nin/
/nin/—derived from the Greek ‘νυν’ (nun), meaning ‘now, at this time’—is an ongoing performance research project exploring the practice of improvisation for solo violin with the T/ensor/~ system developed by the author.
T/ensor/~ (v0.5)—previously presented (v0.3) at NIME 2023 (Christos Michalakos; drums) and InMusic23 (Richard Craig; alto flute)—is a work-in-progress prototype of a dynamic and interactive performance system developed in Max, incorporating adaptive digital signal processing modules and generative processes. Originally developed as part of a six-month artistic research study titled ‘Improvisation Technologies and Creative Machines: The Performer-Instrument Relational Milieu’ (ITCM), supported by the UK’s AHRC – Creative Informatics (Small Research Grants 2022), T/ensor/~ explores the field of human-computer interaction and the intersection of improvisation, digital augmentation, and machine agency.
At the core of the ITCM project, which led to the development of T/ensor/~, is an epistemological tracing of what George Lewis calls “creative machines” (Lewis, 2021), alongside an exploration of how the [performer–instrument] feedback relationship of “interaction, resonance, and resistance” (Cobussen, 2017), traceable in some free improvisers’ accounts (Goldstein, 1988; Hopkins, 2009), can be encoded into software.
To this end, T/ensor/~ consists of two core components. The first integrates adaptive digital signal processing modules, incorporating machine listening and feature extraction. These are programmed to both augment and extend the acoustic instrument while introducing timbral and temporal displacement to the incoming signal (Emmerson, 2009). The second component is designed to simulate the [performer–instrument] ‘techno-logical’ paradigm presented earlier and features an autonomous feedback [listening–synthesis] agent. This agent involves separate machine listening and feature extraction frameworks, coupled with adaptive digital signal processing modules attached to its output. The two components continuously exchange control and audio information in a dynamic and generative interplay, contributing to a playful “condition of indeterminacy” (Lewis, 2021).
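Purely as an analogy for the first component described above (machine listening and feature extraction feeding adaptive processing), and not the Max implementation itself, the Python sketch below maps a running spectral centroid of the incoming signal to the coefficient of a one-pole low-pass filter, so brighter playing opens the filter and darker playing closes it. The frame size, mapping range, and test signal are invented for the example.

"""A toy analogue (in Python, not the Max patch) of machine listening driving an
adaptive processing parameter: spectral centroid -> low-pass coefficient.
The real system couples many such feature-to-parameter mappings with generative
processes; this only shows the basic shape of the idea.
"""
import numpy as np

SR = 44100
FRAME = 1024

def spectral_centroid(frame):
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1 / SR)
    return float(np.sum(freqs * spec) / (np.sum(spec) + 1e-12))

def process(signal):
    out = np.zeros_like(signal)
    state = 0.0
    for start in range(0, len(signal) - FRAME, FRAME):
        frame = signal[start:start + FRAME]
        centroid = spectral_centroid(frame)              # machine listening
        coeff = np.clip(centroid / 5000.0, 0.01, 0.99)   # feature -> parameter mapping
        for i, x in enumerate(frame):                    # adaptive one-pole low-pass
            state += coeff * (x - state)
            out[start + i] = state
    return out

# Example input: a rising square-wave glissando whose growing brightness
# progressively opens the filter.
t = np.arange(SR * 2) / SR
test_signal = np.sign(np.sin(2 * np.pi * (220 + 220 * t) * t)) * 0.3
processed = process(test_signal)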
In the context of /nin/, the violinist/improviser engages in an improvised dialogue with the T/ensor/~ system, mindfully and curiously inhabiting the ‘now’ as a performative present in its unfolding. Here, improvisation is understood both as a research activity that occurs in real-time and is “generative of sound” (Di Scipio, 2015), and as a practice of ‘συν-σχεδιασμός’ [syn-schediasmos: syn- (σύν; “with”) + schediázo (σχεδιάζω): “to draw”]—a ‘co-drawing’ between people, instruments, and technologies, shaping the improvised journey in ‘with-ness’.
At the same time, /nin/ serves as both a mental reminder and a promise: a personal re-engagement with presence following a period of dormancy. This project and performance stand as a testimony to that awakening.
Dimitris Papageorgiou
Dimitris Papageorgiou, PhD, is a Greek violinist, improviser, and composer, and a Lecturer in Music at Edinburgh Napier University. His artistic research and professional activities explore the fields of free improvisation, performance, composition, and music technology, with a focus on contemporary music notation and interactive music systems.
As a violinist and improviser, his practice is centred on the creation of sonic textures, through a performance approach that re-examines and extends the violin’s sound-making possibilities and is guided by attentive listening. He has had the opportunity to perform with a range of exceptional musicians and contribute to several recordings, including ‘forest’ (edimpro), ‘Kocher-Manouach-Papageorgiou’, and the compilation ‘vs. Interpretation: An Anthology of Improvisation, Vol. 1’.
His compositional work spans instrumental and electroacoustic music, with a particular interest in human-computer interaction and the creation of electro-instrumental music. Drawing on his solo violin free improvisation performance practice, and through close collaborations with performers, he develops notational environments that critically ponder the notion of authorship, blurring the lines between composition and improvisation. Recently, this research has informed the development of T/ensor/~, an interactive performance system exploring the intersection between improvisation, digital augmentation, and machine agency. His most recent artistic research outputs (InMusic23, NIME 2023, PSN 2022, DARE 2019) have sought to theorise the above through the notion of ‘syn-schediasmos’, while also tracing relevant eco-philosophical and ethnomusicological reflections.
As a musician, researcher, and educator, Dimitris has presented his work at a wide range of events, conferences, workshops, and forums across Belgium, Czechia, Finland, Germany, Greece, Italy, Lithuania, Mexico, the Netherlands, Switzerland, the UK, and the US.
Dimitris Papageorgiou
Human Capital
Scott L. Miller, Adam Vidiksis, & Sam Wells perform free music, marked by propulsive textures and rhythms, graceful emergent structures, and carefully crafted timbres that are seamlessly woven from their composite sounds. Trumpet, drum set, and electronics merge to create atmospheres that range from playful and heartfelt to panoramic and profound. Their debut album, Memory Palace, documents two performances by the trio, recorded during the unprecedented time of physical distance during COVID-19 lockdown. Working telematically, Miller, Vidiksis, and Wells have developed a unique approach among themselves to listening, time, and coordination. The recording demonstrates one point of listening in the complex interflow of time and latency between the performers. Hailing from various locations throughout the US, this trio has cultivated a musical practice that embraces both the challenges of distanced and local free improvisation to create music that is immediate and arresting.
Adam Vidiksis, Scott Miller, Sam Wells
Adam Vidiksis is a drummer and composer based in Philadelphia who explores social structures, science, and the intersection of humankind with the machines we build. His music and artwork examine technological systems as artifacts of human culture, acutely revealed in the slippery area where these spaces meet and overlap—a place of friction, growth, and decay. His compositions and recordings are available through HoneyRock Publishing, EMPiRE Recordings, Fuzzy Panda Recording Company, Mulatta Records, New Focus Records, PARMA Recordings, Navona Records, and Scarp Records. Vidiksis is Assistant Professor of music technology at Temple University, and president of SPLICE Music. He performs in SPLICE Ensemble, aeroidio, Miller/Vidiksis/Wells, Transonic Orchestra, Ensemble N_JP, and directs the Boyer College Electroacoustic Ensemble Project (BEEP).
Scott L. Miller is an American composer best known for his electroacoustic chamber music and ecosystemic performance pieces. Inspired by the inner-workings of sound and the microscopic in the natural and mechanical worlds, his music is the product of hands-on experimentation and collaboration with musicians and performers from across the spectrum of styles. He is a Professor of Music at St. Cloud State University, Minnesota, Past-President (2014-18) of the Society for Electro-Acoustic Music in the U.S. (SEAMUS) and presently Director of SEAMUS Records.
Sam Wells is a musician and video artist based in Philadelphia. He is a member of aeroidio and SPLICE Ensemble, and has performed with Contemporaneous, Metropolis Ensemble, TILT Brass, the Lucerne Festival Academy Orchestra, and the Colorado MahlerFest Orchestra. Sam has recorded on the Scarp Records, New Amsterdam/Nonesuch, New Focus Records, SEAMUS, and Ravello Recordings labels. Sam is a Cycling ’74 Max Certified Trainer and an Assistant Professor of Music Technology at Temple University.
Adam Vidiksis, Scott Miller, Sam Wells
CLUB CONCERT #5
Saturday, June 14; 10:00pm – Midnight
Raytheon Amphitheatre (240 Egan Research Center), Northeastern University
ID
Title
Author(s)
Performers
Branches, for cajon and tape
Knocking on wood is an action and sound with various implications. The instrument that truly makes a virtue of it is the cajon. In recent times skilled musicians have turned this instrument into a virtual drum set, pioneering techniques that exploit various striking areas, different parts of the hand, fingers, knuckles, as well as brushes and other kinds of beaters. Along with these techniques have come stock patterns or riffs that offer a colorful alternative to what has already been heard on drum set or on bongos, timbales, congas, etc. Though not free of repeating patterns, the present work is more focused on pulse speed relationships and measured freedom. The opening gesture heard on the live cajon becomes a model for the larger form, with each attack representing a different section of the piece. The tape part expands and contracts this gesture accordingly. These primary “trunks” of the overall form then grow branches that the live cajon part seems to converse with.
Eric Simonson
Eric Simonson’s music has been heard in concerts across North America, including SEAMUS (Society of Electroacoustic Music in the United States), ICMC (International Computer Music Conference) and SCI (Society of Composers Incorporated) performances. His composition teachers have included William Heinrichs, Harvey Sollberger, Eugene O’Brien and Roger Reynolds. His degrees are in composition, but his interests and teaching experience have involved computer music, music theory and musicology. He studied piano with Boaz Sharon at the University of Tulsa and subsequently enjoyed a brief career as an accompanist and chamber musician. Currently, he serves as a professor at Danville Area Community College in Danville, Illinois, teaching musicology and music theory courses in the Liberal Arts division.
Patti Cudd, cajon
Dr. Patti Cudd is active as a percussion soloist, chamber musician and educator. She teaches 20th Century Music, Introduction to Music, Applied Percussion and conducts the Percussion and New Music Ensembles at the University of Wisconsin-River Falls. Dr. Cudd is also an active performing member of the new music ensemble Zeitgeist. Other diverse performing opportunities have included Sirius, red fish blue fish, CRASH, the Minnesota Contemporary Ensemble, SONOR and such dance companies as the Minnesota Dance Theater and the Borrowed Bones Dance Theater. She received a Doctor of Musical Arts Degree in Contemporary Musical Studies at the University of California studying with Steven Schick, Master of Music Degree at the State University of New York at Buffalo where she worked with Jan Williams, undergraduate studies at the University of Wisconsin-River Falls and studied in the soloist class with a Fulbright Scholarship at the Royal Danish Conservatory of Music in Copenhagen, Denmark.
Ingrained
Ingrained is a collaborative work with Esther Lamneck that refers to the formation of a part of the essence of the inmost being, which is reflected in the nature of the composition itself, the performance of which is “ingrained” in the tárogató, the principal sound source of this work. The sonic environment of this composition is the result of superimposition: in the foreground, improvisation occurs from the interaction between the instrument and live electronics.
The sounds in the live electronics are transformed first through granular synthesis via Max/MSP by capturing typically 5-6 seconds of audio from the microphone into one or more buffers at a time, depending on the technician’s preferences. The grains in these buffers are a part of the essence of the musical instrument, and the speeds of those grains, as well as their ranges in transposition, location, and duration, can be manipulated in real time. The grains are further transformed via comb filters, vocoders, flangers, harmonizers, and/or spectral filters, depending on 1) which buffer the recorded audio comes from and 2) the settings of an object called the matrix switch control, which determines which audio will be output to the speakers as well as how those sounds will be spatialized. As the composition progresses, the technician records more of the performer’s sounds, which can be transformed live as previously mentioned. These live electronics are added to the fixed media, which is based on prior recordings of improvisations made by the performer and subsequently manipulated through granular synthesis.
In live performance, at the macro level, the technician typically follows a predetermined order regarding which sonic transformations are used on the grains in the live electronics and when changes in spatialization occur. Typically, the piece begins in stereo with a comb filter, switches to a spectral filter about 2 minutes into the piece, and then switches to a vocoder about 4 minutes in. As a climax is reached, about 5:30 into the piece, all of these different transformations are used at the same time at a higher level of spatialization. At about 6:30, the technician continues to use all of these transformative processes in the live electronics, but they are localized in different parts of the larger sound field. As the composition reaches the coda, about 8 minutes into the piece, it generally becomes louder, ending with a bombastic conclusion.
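For a sense of the granular process described above, here is a minimal Python sketch of buffer-based granulation, not the Max/MSP patch used in the piece: grains of 20-100 ms are read from random buffer positions, transposed by resampling, windowed, and overlap-added at random output times. The buffer contents, grain density, and parameter ranges are illustrative assumptions.

"""Granulation sketch in Python (illustrative; the piece itself uses Max/MSP).

A few seconds of 'captured' audio sit in a buffer; varying the position,
transposition, and duration ranges in real time is essentially what the
technician does with the live buffers.
"""
import numpy as np

SR = 44100
rng = np.random.default_rng(1)

# Stand-in for ~5 seconds captured from the microphone into a buffer.
t = np.arange(SR * 5) / SR
buffer = 0.3 * np.sin(2 * np.pi * 220 * t) * np.exp(-((t % 1.0) * 4))

def granulate(buf, out_dur=8.0, grains_per_sec=40,
              transpose_range=(-12, 12), grain_dur_range=(0.02, 0.1)):
    out = np.zeros(int(SR * out_dur))
    for _ in range(int(out_dur * grains_per_sec)):
        dur = rng.uniform(*grain_dur_range)
        semitones = rng.uniform(*transpose_range)
        ratio = 2 ** (semitones / 12)                     # transposition factor
        src_len = int(dur * SR * ratio)                   # samples read from the buffer
        start = rng.integers(0, len(buf) - src_len)
        grain_src = buf[start:start + src_len]
        # Resample the grain to its output length (simple linear interpolation).
        out_len = int(dur * SR)
        idx = np.linspace(0, src_len - 1, out_len)
        grain = np.interp(idx, np.arange(src_len), grain_src)
        grain *= np.hanning(out_len)                      # grain envelope
        pos = rng.integers(0, len(out) - out_len)
        out[pos:pos + out_len] += grain
    return out / (np.max(np.abs(out)) + 1e-9)

cloud = granulate(buffer)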
Jonathan Wilson
Dr. Jonathan Wilson’s works have been performed at the Ann Arbor Film Festival, European Media Art Festival, the Experimental Superstars Film Festival, the Big Muddy Film Festival, ICMC, SICMF, SEAMUS, NYCEMF, NSEME, the Iowa Music Teachers Association State Conference, and the Midwest Composers Symposium. He is the winner of the 2014 Iowa Music Teachers Association Composition Competition. Jonathan has studied composition with Lawrence Fritts, Josh Levine, David Gompper, James Romig, James Caldwell, Paul Paccione, and John Cooper. In addition, studies in conducting have been taken under Richard Hughey and Mike Fansler. Jonathan is a member of Society of Composers, Inc., SEAMUS, ICMA, Iowa Composers Forum, and American Composers Forum.
Esther Lamneck, tárogató
The New York Times calls Esther Lamneck “an astonishing virtuoso”. She has appeared as a soloist with major orchestras, with conductors such as Pierre Boulez, with renowned chamber music artists, and with an international roster of musicians from the new music improvisation scene. A versatile performer and an advocate of contemporary music, she is known for her work with electronic media including interactive arts, movement, dance, and improvisation. Dr. Lamneck makes frequent solo appearances on clarinet and the tárogató at music festivals worldwide. Many of her solo and duo CDs feature improvisation and electronic music, including “Cigar Smoke”, “Tárogató”, “Winds Of The Heart”, “Genoa Sound Cards”, and “Stato Liquido”. Her latest new music improvisation album, “Small Parts of a Garden”, is available at https://www.setoladimaiale.net/catalogue/view/SM4420. Computer Music Journal calls her “The consummate improvisor.”
Zodiac: A Silent Arctic Interloper
All video footage was collected during the Arctic Circle Artist Residency’s 2024 Fall voyage. Audio field recordings were manipulated using numerical data collected in the Arctic with a buoy system called Sofar Ocean.
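As an illustration of how numerical buoy data can steer audio processing (a hedged Python sketch, not the artist's pipeline), the example below interpolates a made-up hourly wave-height series to audio rate and uses it to modulate the gain and low-pass darkness of a noise stand-in for a field recording. The data values and mappings are invented for the example.

"""Minimal data-to-audio mapping sketch (illustrative only).

Hourly wave heights -- a made-up array standing in for exported buoy values --
are interpolated to audio rate and modulate the gain and filter coefficient
applied to a field-recording stand-in.
"""
import numpy as np

SR = 44100
wave_height_m = np.array([0.8, 1.1, 1.6, 2.4, 3.0, 2.2, 1.4, 0.9])  # hypothetical hourly data

# Field-recording stand-in: one second of noise per data point.
rng = np.random.default_rng(2)
recording = rng.normal(0, 0.2, SR * len(wave_height_m))

# Interpolate the data to one control value per audio sample, normalised to 0..1.
data_times = np.linspace(0, len(recording) - 1, len(wave_height_m))
control = np.interp(np.arange(len(recording)), data_times, wave_height_m)
control = (control - control.min()) / (control.max() - control.min())

# Mapping: higher seas -> louder and brighter; calm seas -> quiet and muffled.
out = np.zeros_like(recording)
state = 0.0
for n, x in enumerate(recording):
    coeff = 0.02 + 0.5 * control[n]        # low-pass opens with wave height
    state += coeff * (x - state)
    out[n] = state * (0.2 + 0.8 * control[n])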
Josh Rodenberg
Joshua Rodenberg is a multidisciplinary sound and video artist whose innovative work spans immersive audio, experimental film sound design, and multimedia installations. He holds a Bachelor of Science from the University of Southern Indiana and an MFA in Craft/Material Studies from Virginia Commonwealth University, and is currently the Head of Innovative Media Studios while serving as an Assistant Professor at VCUarts Qatar. His practice is defined by a passion for capturing natural oscillations and integrating environmental data—such as buoy measurements—into dynamic soundscapes, creating experiences that explore the interplay between technology, nature, and human emotion. Recognized with awards like the VCU Quest research grant and selected for the Arctic Circle Artist Residency, Joshua continually pushes the boundaries of audio art, inviting audiences to rethink their connection with the environment through immersive listening.
Josh Rodenberg
Phosphenes
The phosphenes we see through our eyelids are a unique sensory immersion, present only in the absence of all other visual stimuli. They derive from nothing, not our dreams nor memories, instead manifesting only the purest imagery conjured by our eyes left to their own devices, through which we can delve into the most immersive and evocative of experiences.
Victor Zheng
Dr. Victor Zheng (b. 1994) was born in Beijing, China and raised in Portland, Oregon. He holds degrees from Oberlin Conservatory (BM ’16), the University of Massachusetts Amherst (MM ’18), and the University of Illinois Urbana-Champaign (DMA ’23).
Victor explores the intersection between acoustic and electronic composition in his work, including such topics as algorithmically assisted composition, interactive electronics, and building custom hardware interfaces to control electronic sound. His notable performances have included collaborations with the Opus One Chamber Orchestra, TaiHei Ensemble, Composers of Oregon Chamber Orchestra, New Music Mosaic, and Illinois Modern Ensemble. He has had his music and research featured at events including MOXSonic, Electronic Music Midwest, SEAMUS, NYCEMF, the SCI National Conference, and ICMC, as well as in publications including Art On My Sleeve, Willamette Week, and Oregon Arts Watch.
Victor currently serves on the faculties at North Central College in Naperville, IL and the University of Illinois Springfield in Springfield, IL, teaching composition, music theory, music technology, and music history.
Victor Zheng
Gourd Gore Galore
Gourd Gore Galore, originally titled Pumpkin Patch, is an interactive sonification utilizing the sawing sounds of gourd carving. The input from the contact-miked gourd is used as both a modulation source and an audio signal source for the modular synthesizer. The piece was made in the Halloween spirit, but I’d like to perform it all year round, so any old gourd will have to do when pumpkins aren’t available.
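The "one signal as both audio and modulation" idea can be sketched in Python as follows (a stand-in for the modular patch, using an invented noise-burst "sawing" signal): an envelope follower on the contact signal drives an oscillator's pitch, while the raw signal is mixed back in as audio.

"""Sketch of using one signal as both audio source and modulation (illustrative)."""
import numpy as np

SR = 44100
t = np.arange(SR * 4) / SR

# Stand-in for the contact-mic sawing signal: bursts of noise, 1.5 strokes per second.
rng = np.random.default_rng(3)
saw_gate = (np.sin(2 * np.pi * 1.5 * t) > 0).astype(float)
contact = rng.normal(0, 0.3, len(t)) * saw_gate

# Modulation path: an envelope follower smooths the rectified signal into a control.
env = np.zeros_like(contact)
e = 0.0
for n, x in enumerate(np.abs(contact)):
    e += 0.0005 * (x - e)      # slow smoothing -> usable control signal
    env[n] = e
env /= env.max() + 1e-9

# Modulation target: an oscillator whose frequency tracks the sawing energy.
freq = 80 + 400 * env
phase = np.cumsum(2 * np.pi * freq / SR)
osc = 0.4 * np.sin(phase)

# Audio path: the raw contact signal is mixed with the modulated oscillator.
mix = 0.5 * contact + osc * env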
Jeffrey Todd-Voyten
Jeffrey Todd-Voyten is a New England-based electroacoustic composer and classically trained vocalist. His compositional output primarily deals in combining generative sound with improvisatory response through combinations of electronic and acoustical instruments. Originally from Salisbury, Maryland, Jeffrey earned his bachelor’s degree in Voice Performance at Salisbury University and received an MM in Performance in 2021 from the University of Kentucky. He is now working on an MM in composition as well as a DMA in Voice Performance while offering private instruction in voice and music theory. Jeffrey’s music has been featured at Electronic Music Midwest, SEAMUS, NYCEMF, Salisbury University, the University of Kentucky Art Museums, and other venues across the United States.
Eric Shuster, carver; Jeffrey Todd-Voyten, modular synthesizer
Eric Shuster is head of the percussion area and director of the Salisbury University Percussion Ensemble at Salisbury University. He has collaborated with artists and composers for interdisciplinary work and new music, premiered several solo and chamber pieces, and performed nationally and abroad including recent performances at the Percussive Arts Society International Convention, Transplanted Roots Symposium, and Festival No Convencional (Buenos Aires). Shuster holds degrees from Louisiana State University (M.M.) and Kutztown University (B.A.) and has served on the faculty of Salisbury University since 2011.
Blind Opera
Blind Opera unfolds as an immersive odyssey of sound and presence, where masked practitioners wander unseen through the performance space, and every footfall, pause, and ambient resonance shapes an improvised drama of pure listening.
By relinquishing a visual script and traditional scoring, Blind Opera surrenders authorship to the architecture itself: walls, corners, and open air become the score-maker, guiding each gesture and encounter. In this decentralized model, narrative and form emerge from acoustic interactions rather than a central director’s hand.
The piece also dismantles the power dynamics of the theatrical gaze. With sight concealed, performers lose the dual privilege of seeing and being seen—upending hierarchies between creator and observer and prompting a profound reckoning with presence and agency.
For listeners, Blind Opera offers an invitation to reclaim listening as an emancipatory act. In the absence of visual certainty, every sound becomes an opportunity to question how narratives form, where authority lies, and how art might flourish when control is released in favor of collective discovery.
Han Xu, Zehao Wang
Zehao Wang is a computer music researcher, composer, and sound artist. He is currently a PhD candidate in Computer Music and an associate instructor in the Department of Music at the University of California, San Diego. As a composer and sound artist, Wang focuses on the spatiality of sound and music and their theatrical interpretations.
As a computer music researcher, his primary interests include musical acoustics, physical modeling, and instrument design. He has conducted research internships at Microsoft, ByteDance/TikTok, Yamaha, and Apple, and has visited CCRMA at Stanford. He has presented his research at several prestigious institutions, including the KMH in Sweden, NUS, and PKU.
Xu Han (Beijing, China) is a composer, scholar, sound artist, trombone and euphonium player, instrument hacker, coder, and improviser who draws inspiration from Buddhist philosophy and hands-on aesthetics. Han is currently a postdoctoral fellow at Peking University (School of Arts). Han holds a Doctor of Musical Arts from Cornell University and earned a master’s degree in composition (graduating with distinction) from the Royal Northern College of Music (RNCM) in Manchester, UK. Han has studied with Adam Gorb, Benjamin D. Piekut, Marianthi Papalexandri-Alexandri, Kevin Ernste, Trevor J. Pinch, Blake Stevens, Kenneth Fields, Emily Howard, and Roberto Sierra.
Han’s works have been performed by the London Symphony Orchestra (UK), BBC Singers (UK), Wet Ink Ensemble (USA), Israeli Chamber Project (Israel), the Composers Conference Ensemble (USA) with conductor Vimbayi Kaziboni, Red Desert Ensemble (USA), Yarn/Wire (USA), Greg Stuart (USA), KOE DUO (USA), NOMON (USA), Manchester Camerata (UK), 19 SoundLab (China), Tacet(i) Ensemble (Thailand), the Cornell Festival Chamber Orchestra (USA), and the RNCM Symphony Orchestra (UK).
Han Xu, Zehao Wang
Registration is now open!
ICMC BOSTON 2025 can be accessed IN-PERSON and REMOTE. ICMA Members at the time of registration will receive a 25% discount.
Early Bird Registration: pre-May 1, 2025 (15% discount)
Regular Registration: post-May 1, 2025