Display Technology

We explore display technology that will allow for augmenting Digital Musical Instruments on a large scale and in a consistent way for all spectators, while preserving the physicality of instruments and performances.


Reflets is a mixed-reality environment for musical performances that allows virtual content, such as 3D virtual musical interfaces or visual augmentations of instruments and performers, to be freely displayed on stage. It relies on spectators and performers revealing virtual objects by slicing through them with body parts or objects, and on planar, slightly reflective transparent panels that optically combine the stage and audience spaces. In the paper, we describe the approach and the implementation challenges of Reflets. We then demonstrate that it matches the requirements of musical performances: it allows virtual content to be placed anywhere on large stages, even overlapping with physical elements, and it provides a consistent rendering of this content for large numbers of spectators. It also preserves non-verbal communication between the audience and the performers, and is inherently engaging for the spectators. We finally show that Reflets opens up musical performance opportunities such as augmented interaction between musicians and novel techniques for manipulating 3D sound shapes.
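The revealing mechanism described above, where spectators and performers slice through virtual objects to make them appear, can be illustrated as a simple proximity test between tracked geometry and virtual content. This is a minimal sketch under assumed names and an assumed distance threshold, not the actual Reflets implementation:

```python
import math

def revealed_mask(virtual_points, tracked_points, radius=0.05):
    """For each point of a virtual object, report whether a tracked body
    part or held object passes within `radius` metres of it, in which
    case that part of the object is revealed.

    `radius` is a hypothetical tuning parameter; the real system may use
    a different intersection test entirely.
    """
    def dist(p, q):
        return math.sqrt(sum((p[i] - q[i]) ** 2 for i in range(3)))
    return [any(dist(v, t) < radius for t in tracked_points)
            for v in virtual_points]
```

A frame of the interaction would then only render the virtual points whose mask entry is true, so the object's shape emerges progressively as performers sweep through it.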

Semi-transparent mirrors / Optical combiners

For our third prototype, we started exploring semi-transparent mirrors combined with volumetric displays, in the context of a more generic paper. Semi-transparent mirrors create a shared space in which physical objects placed on both sides can overlap. We designed a display where the Digital Musical Instrument on one side is augmented using both projection mapping and, on the other side, a volumetric display (a DepthCube with three layers). It provides information about the instrument to the audience, following the Rouages approach, and also adds 3D interaction possibilities for the musician. Augmentations from both sides are blended by the combiner, so they are seen consistently by any number of users, independently of their location or even of the side of the combiner through which they are looking. The alignment between the physical instrument and the augmentations therefore remains correct for all spectators, and the virtual content appears at the correct depth thanks to the volumetric display. However, due to the constraints of volumetric displays, the scale of the augmentations remains small.

First prototypes

The first prototype display was a simple computer screen. It relied on user tracking to adapt the perspective of the 3D rendering so that the virtual augmentations were always aligned with the physical instrument inside the box. However, this prototype only allowed for a single user at a time.
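The perspective adaptation used in this prototype is commonly implemented as an off-axis (head-coupled) projection: the screen is treated as a fixed window in space and the frustum is recomputed from the tracked eye position each frame. The sketch below follows Kooima's generalized perspective projection; the function name and parameters are illustrative, not taken from the prototype's code:

```python
import math

# Small 3-vector helpers, kept dependency-free for the sketch
def sub(a, b):   return [a[i] - b[i] for i in range(3)]
def dot(a, b):   return sum(a[i] * b[i] for i in range(3))
def cross(a, b): return [a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0]]
def normalize(a):
    n = math.sqrt(dot(a, a))
    return [x / n for x in a]

def off_axis_projection(eye, ll, lr, ul, near=0.1, far=10.0):
    """Off-axis perspective projection for a tracked eye position and a
    physical screen given by its lower-left (ll), lower-right (lr) and
    upper-left (ul) corners, in world coordinates (metres).
    Returns a 4x4 OpenGL-style projection matrix (row-major)."""
    vr = normalize(sub(lr, ll))      # screen right axis
    vu = normalize(sub(ul, ll))      # screen up axis
    vn = normalize(cross(vr, vu))    # screen normal, toward the viewer

    va, vb, vc = sub(ll, eye), sub(lr, eye), sub(ul, eye)
    d = -dot(va, vn)                 # eye-to-screen distance

    # Frustum extents on the near plane
    l = dot(vr, va) * near / d
    r = dot(vr, vb) * near / d
    b = dot(vu, va) * near / d
    t = dot(vu, vc) * near / d

    return [
        [2*near/(r-l), 0.0,          (r+l)/(r-l),            0.0],
        [0.0,          2*near/(t-b), (t+b)/(t-b),            0.0],
        [0.0,          0.0,          -(far+near)/(far-near), -2*far*near/(far-near)],
        [0.0,          0.0,          -1.0,                   0.0],
    ]
```

When the tracked eye is centred in front of the screen the frustum is symmetric; as the user moves sideways the frustum skews, which is what keeps the on-screen augmentations aligned with the physical instrument for that single viewer.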
The second prototype was also based on a simple computer screen, but it captured a 3D reconstruction of the musician's hands and controller to integrate them with the virtual augmentations. The alignment was therefore always correct, but the rendering was only right from a single sweet spot in the audience, and the physicality of the performance was lost.

In collaboration with

Diego Martinez Plasencia, Abhijit Karnik, Sriram Subramanian, University of Bristol

Images, Sounds and Videos available under the Free Art License