Performance
L'Ombre: Édith Canat de Chizy & Blanca Li

Reading time: 10 min
Jérémie Szpirglas: Where does the Continuum project come from?
Olivier Warusfel: For several years now, we've been receiving regular requests from the creative community to work with virtual reality—a realm we had only tentatively explored until recently.
Jérémie Szpirglas: But it could be argued that simple sound diffusion (whether stereo or spatialized) is already virtual reality!
Olivier Warusfel: That's true—an amplified concert is already a kind of "augmented" reality, not to mention electroacoustics. However, that's purely audio. With Continuum, we're stepping into multimodal reality by integrating multisensory environments. This represents a paradigm shift and, above all, a change in our IT framework. We now have solutions for augmented and virtual reality—like Unity and Unreal—that originated in the video game industry. These platforms come with built-in sound engines, though they can be somewhat limited. One of our initial technical objectives with Continuum is to integrate specific functions of Spat, the IRCAM spatializer, into these environments so that computer music designers can use their tools seamlessly in multimodal creations. You might call this the first (technical) “continuity” of Continuum: bridging traditional performing arts tools into new digital spaces.
Photo: Olivier Warusfel and Markus Noisternig © IRCAM-Centre Pompidou, photo: Déborah Lopatin
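To make that first "continuity" slightly more concrete, here is a minimal sketch of how a game-engine-style update loop might stream a moving source position to a spatialization engine over OSC, using the python-osc package. The host, port, and the /source/1/xyz address pattern are illustrative assumptions, not a documented Spat interface.

```python
# Minimal sketch: streaming a virtual source position to a spatializer over OSC.
# The address pattern and port below are illustrative assumptions, not a
# documented Spat interface; adapt them to the receiver's actual OSC namespace.
import math
import time

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 4001)  # assumed host/port of the audio engine

def update_loop(duration_s: float = 10.0, rate_hz: float = 60.0):
    """Move one source along a slow circle, as a game-engine update loop might."""
    t0 = time.time()
    while (t := time.time() - t0) < duration_s:
        angle = 0.2 * 2.0 * math.pi * t          # one revolution every 5 seconds
        x, y, z = 3.0 * math.cos(angle), 3.0 * math.sin(angle), 1.5
        client.send_message("/source/1/xyz", [x, y, z])  # assumed address pattern
        time.sleep(1.0 / rate_hz)

if __name__ == "__main__":
    update_loop()
```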
Beyond that, on a more scientific level, we quickly encounter challenges, especially ensuring that the visual and auditory elements remain coherent. Whether we aim for absolute perfection or intentionally introduce distortions for creative effect, it ultimately comes down to having precise control over the interplay between sight and sound.
Jérémie Szpirglas: Meaning?
Olivier Warusfel: For example, there’s the challenge of representing the environment we need to manage. With Spat, we're used to this process, but it generally remains arbitrary: driven solely by aesthetic choices, with the reverberation built into the spatializer and no preconceived idea of the shape of the room it implies. In other words, you don't "design" a room.
Jérémie Szpirglas: This also raises the question of maintaining consistency when the same musical work is performed in different venues: how can we stay true to the composer's vision?
Olivier Warusfel: This is one of the key aspects we're working on with Continuum: moving away from a purely artisanal approach by developing tools that help anticipate settings based on the performance space and technical setup. This also helps minimize feedback issues and reach the desired result more efficiently. To achieve this, we need to consider the simulated acoustic environment before the performance to ensure coherence between the sound and the world being represented. The goal is to make audio signal processing dependent on the environmental simulation, which means incorporating new formalisms into Spat beyond those currently available.
Artistic production within the Continuum project: Twin Color, a music and video performance by Murcof (Fernando Corona) and Simon Geilfus, in IRCAM's Espace de projection on November 29, 2023 © Ircam-Centre Pompidou, photo: Philippe Barbosa
Jérémie Szpirglas: How do you go about representing and simulating an acoustic environment?
Markus Noisternig: We primarily simulate reverberation. Up to now, our approach has been to directly manipulate various perceptual parameters by means of algorithms, rather than numerically modeling the acoustic propagation phenomena of a given space.
Olivier Warusfel: This kind of manipulation through active reverberation is nothing new. The first instance, which in retrospect seems almost prophetic, was in 1984 at the Carrière de Boulbon in Avignon for Répons by Pierre Boulez. Since the quarry is an open-air space with almost no natural reverberation, we recreated the acoustics of a concert hall within it. But it was a one-time experiment—once the concert ended, everything was dismantled, and no further attempts were made.
Markus Noisternig: Another method, which we've been developing for years and will continue to refine, is the reproduction of an existing acoustic environment in order to manipulate it or extrapolate a similar space from it. This involves capturing the acoustic fingerprint of a given environment—such as a room. The process consists of measuring how the room responds to sound pulses from various points, using a spherical microphone array.
This is exactly what we did at the San Lorenzo church in Venice for Olga Neuwirth's Le Encantadas o le avventure nel mare delle meraviglie (2015): we measured the acoustic responses at nearly a hundred loudspeaker positions, as well as multiple positions of the spherical microphone array. Following this project, we began exploring ways to model and manipulate this impulse response. When reproducing the reverberation of a space, we may want to preserve its exact acoustic fingerprint or modify it—such as slightly reducing the reverberation by adjusting the envelope. It’s also possible to remove specific aspects. Since impulse response measurements capture sound in multiple directions, we can isolate certain reflections—such as those from the floor—and adjust or suppress them, as if the room were carpeted. In this regard, one of the key objectives of the Continuum project is to refine the estimation of various reverberation parameters, beyond just directional ones.
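To illustrate the kind of directional editing described above, here is a small sketch that attenuates reflections arriving from below in a first-order (B-format) impulse response, using the pseudo-intensity vector as a crude per-sample direction estimate. It is a deliberately naive illustration of the idea, not the estimation method used in Spat or in the Continuum project.

```python
# Sketch: attenuating floor reflections in a first-order Ambisonic (B-format)
# room impulse response, using the pseudo-intensity vector W * (X, Y, Z) as a
# crude per-sample direction-of-arrival estimate. Illustrative only.
import numpy as np
from scipy.ndimage import uniform_filter1d

def suppress_floor_reflections(ir_wxyz: np.ndarray, gain: float = 0.2,
                               smooth: int = 64) -> np.ndarray:
    """ir_wxyz: (4, N) B-format IR ordered W, X, Y, Z, with Z pointing up."""
    w, x, y, z = ir_wxyz
    # Smoothed vertical pseudo-intensity: negative values suggest arrivals from below.
    iz = uniform_filter1d(w * z, size=smooth)
    energy = uniform_filter1d(w * w, size=smooth) + 1e-12
    elevation_cue = iz / energy                      # roughly sin(elevation) of the arrival
    g = np.where(elevation_cue < -0.3, gain, 1.0)    # attenuate downward arrivals
    return ir_wxyz * g                               # same gain applied to all four channels

# Usage idea: convolve a dry signal with the W channel of the edited response,
# e.g. scipy.signal.fftconvolve(dry, edited_ir[0]), to audition the "carpeted" room.
```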
Jérémie Szpirglas: Once the acoustic fingerprint has been captured, it can be played back.
Olivier Warusfel: This is another key area of our research. Successfully implementing an active reverberation system requires anticipating potential instabilities and strategically positioning the loudspeakers to achieve the most even sound distribution in the space. Our goal here is to develop tools and methods that streamline implementation, expand the system's stability range, and allow us to fine-tune settings in the studio—minimizing the need for extensive on-site adjustments.
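The stability question can be made concrete with a simplified criterion: the loop formed by loudspeakers, room, microphones, and the reverberation processor stays safe if its open-loop gain remains below unity at every frequency. The sketch below estimates the remaining gain margin from a feedback-path impulse response; the single-channel formulation and the synthetic responses are simplifying assumptions, since a real installation is multichannel.

```python
# Sketch: gain margin of a single-channel active reverberation loop.
# The open-loop response is the feedback path (speaker -> room -> microphone)
# cascaded with the reverberation processor; a common sufficient condition for
# stability keeps its magnitude below 1 at every frequency. Real systems are
# multichannel and need a matrix-valued criterion; this is a simplification.
import numpy as np

def gain_margin_db(feedback_ir: np.ndarray, processor_ir: np.ndarray,
                   n_fft: int = 65536) -> float:
    """Return how much gain (in dB) can still be added before |open loop| reaches 1."""
    loop_ir = np.convolve(feedback_ir, processor_ir)
    spectrum = np.fft.rfft(loop_ir, n=n_fft)
    peak = np.max(np.abs(spectrum))
    return -20.0 * np.log10(peak + 1e-12)

# Example with synthetic impulse responses (illustrative values only):
rng = np.random.default_rng(0)
feedback = rng.normal(0, 1, 4800) * np.exp(-np.arange(4800) / 800.0) * 0.01
processor = rng.normal(0, 1, 48000) * np.exp(-np.arange(48000) / 12000.0) * 0.02
print(f"available gain margin: {gain_margin_db(feedback, processor):.1f} dB")
```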
Markus Noisternig: For example, when Le Encantadas premiered at the concert hall of the Cité de la Musique, the challenge was to recreate the reverberation of San Lorenzo. To achieve a perfect match, we would have had to completely eliminate the hall’s natural reverb—an impossible task. The ideal solution would be to subtract one from the other or, alternatively, to enhance the concert hall’s reverberation by adding only the elements missing in comparison to San Lorenzo.
Olivier Warusfel: This question is more crucial than it seems because it highlights a fundamental limitation: you can't simply apply any acoustic environment to any other. In practice, within a given acoustic space, you can only recreate an environment with greater reverberation—in other words, you can only extend the reverberation time. For instance, we can simulate the acoustics of San Lorenzo in an open-air setting, but we could never recreate the openness of an outdoor space within San Lorenzo.
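One way to see this constraint quantitatively is to compare energy decays: electroacoustic augmentation can only add energy to the reverberant tail, so the target room must decay more slowly than the playback room. The sketch below estimates reverberation time from an impulse response with Schroeder backward integration, a standard textbook method, and checks that condition; it is broadband only, whereas a real analysis would be done per frequency band.

```python
# Sketch: estimating reverberation time (T30 extrapolated to RT60) from an
# impulse response via Schroeder backward integration, then checking whether a
# target room can plausibly be reproduced in a given playback room.
import numpy as np

def rt60_from_ir(ir: np.ndarray, fs: int) -> float:
    edc = np.cumsum(ir[::-1] ** 2)[::-1]               # Schroeder energy decay curve
    edc_db = 10.0 * np.log10(edc / edc[0] + 1e-12)
    t = np.arange(len(ir)) / fs
    # Fit a line between -5 dB and -35 dB, then extrapolate the slope to -60 dB.
    mask = (edc_db <= -5.0) & (edc_db >= -35.0)
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)
    return -60.0 / slope

def can_reproduce(target_ir: np.ndarray, playback_ir: np.ndarray, fs: int) -> bool:
    """Augmentation can only lengthen the decay, so the target RT must be longer."""
    return rt60_from_ir(target_ir, fs) > rt60_from_ir(playback_ir, fs)
```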
Markus Noisternig: In theory, it is possible, but it would require an infinite number of sound sources in infinite proximity to each other.
Jérémie Szpirglas: Another challenge lies in the psychological aspect of perception: whether we like it or not, the listener's eyes place them in the concert hall, not in the church.
Markus Noisternig: This is another area of research within the Continuum project that we plan to explore further. I find it quite unsettling myself. An initial experiment was conducted during the second Janus concert last June, where we applied the acoustic fingerprint of the Chapelle Royale at the Château de Versailles to the Espace de projection at IRCAM. Until now, we had only captured the sound of instruments using microphones placed close to the musicians and applied reverberation to those signals. However, with that approach, if an audience member coughs, for example, the cough remains in the natural acoustics of the hall. By positioning a few microphones above the audience, we were able to apply the same acoustic response to ambient sounds as we did to the instruments. The result was striking—especially when the effect was switched off. However, this experiment also highlighted the significant challenges ahead, particularly in controlling feedback to avoid unwanted effects, such as resonance. During the Janus concert, the sound engineer played a crucial role in managing this balance, but the next step will be to automate and refine this process for greater precision.
Photo: Rehearsal for the Janus concert in the Espace de projection, May 2024 © Centre de Musique Baroque de Versailles, photo: Morgane Vie
Jérémie Szpirglas: Talking of the audience, when you capture the acoustic fingerprint of a venue, it's usually without an audience. Here again, there can be a discrepancy.
Olivier Warusfel: We are certainly working on compensating for this aspect. This ties in with what Markus was saying about manipulating or attenuating an acoustic fingerprint. It also represents one of the key challenges of archaeoacoustics—the reconstruction of acoustics that no longer exist or perhaps never existed in the first place.
For example, if we wanted to recreate the acoustics of the Colosseum in Rome at the height of the Roman Empire, with a full audience, the first step would be to calibrate our models by measuring the current acoustic fingerprint of the Colosseum as it stands today. We would then digitally introduce the missing elements to simulate its various past states—with or without an audience. The same approach applies when recreating the presence of an audience in an empty concert hall.
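A rough first-order version of that calibration can be written with Sabine's formula, RT60 ≈ 0.161 V / A: measure the empty room, infer its total absorption, then add the extra absorption an audience would contribute to predict the occupied reverberation time. The per-person absorption value and the venue figures below are illustrative orders of magnitude, not measurements of any real hall.

```python
# Sketch: predicting occupied-room reverberation from an empty-room measurement
# using Sabine's formula RT60 = 0.161 * V / A (V in m^3, A in m^2 sabins).
# The per-person absorption is an illustrative mid-frequency value.

def occupied_rt60(empty_rt60: float, volume_m3: float,
                  audience_size: int, absorption_per_person: float = 0.45) -> float:
    """Estimate RT60 with an audience, given the measured empty-room RT60."""
    a_empty = 0.161 * volume_m3 / empty_rt60                    # absorption inferred from measurement
    a_occupied = a_empty + audience_size * absorption_per_person
    return 0.161 * volume_m3 / a_occupied

# Illustrative numbers only (not actual measurements of any venue):
print(occupied_rt60(empty_rt60=2.4, volume_m3=10000.0, audience_size=800))
```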
Jérémie Szpirglas: Augmented or virtual reality isn’t just confined to a performance venue; it can also be experienced at home, with a headset.
Olivier Warusfel: Indeed, this is another dimension explored within the Continuum project: ensuring the continuity of the sound experience from one place to another, all the way to home consumption of content. And of course, since this is IRCAM, everything happens in real-time. This new dimension forces us to undergo a radical paradigm shift because virtual reality implies an interactive environment. This means that the listener must be able to move within a sound scene, choose their perspective, and even change it dynamically. Until now, in sound recording and playback, the listener’s position was fixed—essentially, it was wherever the sound engineer was positioned. The broadcasted audio stream was static and linear. With virtual reality, this is no longer the case.
Markus Noisternig: This requires rethinking the entire process; the whole production chain must adapt. The sound engineer must be able to recreate an auditory experience from any possible trajectory within the sound scene—just like a character moving through a scene in a video game.
Olivier Warusfel: To give an example that speaks to IRCAM’s audience: a sound engineer’s traditional approach to immersing a listener in Répons would typically be to place them in the conductor’s position, with the orchestra in front and the soloists and loudspeakers surrounding them. However, Boulez’s own idea was that Répons should be experienced differently depending on the exact location from which it is heard in the concert hall. In some performances, the piece was even played twice so that the audience could change seats and discover a new perspective on the work. That’s precisely what we aim to recreate here, but with an added dimension: we want the listener to be able to virtually change their position during the performance, rather than only during intermission. It’s akin to visiting a museum, where one moves around a sculpture to appreciate it from different angles. The goal is to create a kind of open work, in which the listener can navigate freely.
Our ambition, then, is to develop a technique for capturing, archiving, and broadcasting sound that allows free movement within a sound scene. Of course, we can’t place an infinite number of microphones to capture the entire sound field in a concert hall. Instead, we need to be able to reconstruct the sound as it would be heard from a specific listening point, using recordings made with strategically placed microphones. And, once again, this must all happen in real time.
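As a crude illustration of that reconstruction problem, the sketch below blends the signals of several Ambisonic microphone positions according to the virtual listener's distance to each of them. Simple signal-domain crossfading like this ignores the time-of-arrival and parallax effects that proper six-degrees-of-freedom rendering has to handle, so it should be read as a statement of the problem rather than a solution.

```python
# Sketch: naive listening-point reconstruction by distance-weighted blending of
# Ambisonic recordings made at a few fixed array positions. Ignores delays and
# parallax; illustrative only.
import numpy as np

def blend_arrays(signals: np.ndarray, array_positions: np.ndarray,
                 listener_position: np.ndarray, power: float = 2.0) -> np.ndarray:
    """
    signals: (n_arrays, n_channels, n_samples) Ambisonic recordings.
    array_positions: (n_arrays, 3) positions of the microphone arrays in metres.
    listener_position: (3,) virtual listening point in metres.
    """
    d = np.linalg.norm(array_positions - listener_position, axis=1)
    w = 1.0 / np.maximum(d, 0.1) ** power        # closer arrays dominate the mix
    w /= w.sum()
    return np.tensordot(w, signals, axes=1)      # blended (n_channels, n_samples)

# In a real-time system these weights would be updated every block as the
# listener moves, with short crossfades to avoid clicks.
```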
We already had a partial proof of concept with Murcof’s concert in the Espro last year: his performance, accompanied by video projection, was broadcast live in Studios 5 and 1. In Studio 1, it was played through the Ambisonic dome, while in Studio 5, it was experienced via a virtual reality headset. This was a successful demonstration of continuity between in-hall and remote individual broadcasting, highlighting the convergence between music creation tools and virtual reality production.
Markus Noisternig: We will get even closer to this with L’Ombre, a performance by choreographer Blanca Li and composer Édith Canat de Chizy, as it will be an augmented show. In the Espro, spectators will wear augmented reality headsets—allowing virtual elements to overlay the real world without replacing it—while Ambisonics will enhance the sonic reality. Thanks to our latest hybrid reverberation algorithms, the sound design team will be able to create an acoustic and sonic language that is consistent with these visuals. But this performance can also be experienced individually via a virtual reality headset, complemented by a binaural recording.
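For individual playback on a headset, one basic building block is rotating the Ambisonic scene against the listener's head movement before decoding it binaurally. The sketch below applies a yaw rotation to a first-order B-format block; a complete renderer would follow this with an HRTF-based binaural decode and handle pitch and roll as well.

```python
# Sketch: yaw rotation of a first-order B-format (W, X, Y, Z) block so the
# sound scene counter-rotates against the listener's head movement. A binaural
# decode (e.g. through virtual loudspeakers and HRTFs) would follow this step.
import numpy as np

def rotate_yaw(foa_block: np.ndarray, yaw_rad: float) -> np.ndarray:
    """foa_block: (4, n_samples) ordered W, X, Y, Z; yaw_rad: head yaw to the left."""
    w, x, y, z = foa_block
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    x_rot = c * x + s * y                # scene rotates opposite to the head
    y_rot = -s * x + c * y
    return np.stack([w, x_rot, y_rot, z])
```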
The Continuum project is coordinated by Ircam, in collaboration with Amadeus and VRtuoz.
Supported by the French government as part of the "Augmented Experience of Live Performance" program of the cultural and creative industries (CCI) sector of France 2030, operated by Caisse des Dépôts.