This is a process record of the OCEAN DANCE experiment presented at the DANC3 & T3CH seminar. It documents the construction of the experiment, from motion capture to implementation on the Spatial.io platform.
The aim of this experiment was to create spaces for visualizing dance in virtual reality: specifically, to capture the movement and expression of the dancers Helena Bevilaqua and Fabiano Nunes, and to evaluate the perception of presence, aesthetic enjoyment and usability.
This process was carried out by the Space-XR group, which includes Jorge Lopes, Luiz Velho, Sergio Azevedo, Affonso Beato, Gerson Ribeiro, Vinícius Arcoverde, Thaisa Martins, Carolina Navarro, Fabio Suim, Mariana Duarte, Orlando Grillo and Gabriel Cardoso.
MoCap
The motion capture (MoCap) sessions were carried out in the Visgraf laboratory. Different sequences were captured using markers and infrared cameras from the OptiTrack system. In the video example below, Helena Bevilaqua’s performance is being captured.
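The captured sequences were later handled as .BVH files (see the Retarget section below). As a minimal sketch, not part of the project’s actual pipeline, the joint count, frame count and frame rate of such an export can be inspected in plain Python; the file name here is a hypothetical placeholder.

```python
# Minimal sketch: inspect a BVH export (no external libraries).
# "helena_take01.bvh" is a hypothetical placeholder file name.
def inspect_bvh(path: str) -> None:
    with open(path, "r", encoding="utf-8") as f:
        lines = f.readlines()

    # ROOT/JOINT lines define the skeleton hierarchy in the BVH header.
    joints = sum(1 for line in lines if line.strip().startswith(("ROOT", "JOINT")))
    frames = next((int(l.split()[1]) for l in lines if l.strip().startswith("Frames:")), 0)
    frame_time = next((float(l.split()[2]) for l in lines if l.strip().startswith("Frame Time:")), 0.0)

    print(f"joints: {joints}, frames: {frames}")
    if frame_time > 0:
        print(f"frame rate: {1.0 / frame_time:.1f} fps, duration: {frames * frame_time:.1f} s")

inspect_bvh("helena_take01.bvh")
```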
Prototype
A prototype of the VR environment was modeled in Blender so the team could discuss the users’ navigation possibilities, the characteristics of the Spatial platform and the intentions of the experiment. The prototype was brought into the Spatial platform first as a thumbnail and then as the applied environment.
Visual conception
Both the space and the avatars use reflective shaders to create a playful experience of detachment from reality, avoiding sensations such as the uncanny valley and keeping a coherent relationship between avatars and environment. The resulting visual effect is that everything becomes a mirror, both evoking the mirrors present in dance studios and inviting reflection on the interaction between movement, form and space. This conception was developed during a MoCap study, prior to the experiments described here, in which dancer and researcher Thaisa Martins explored the sensitivity of the capture system, as shown in the video and Spatial.io link below.
Ocean Dance space on Spatial.io: https://www.spatial.io/s/Ocean-Dance-64447a7aadbcec330cc09838?share=2108112957746106208
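As a rough Blender-side illustration of this visual conception (not the project’s actual material setup), a mirror-like surface can be approximated with a Principled BSDF set to full metallic and zero roughness. The sketch below uses Blender’s Python API; names and values are illustrative assumptions.

```python
import bpy

# Minimal sketch: a mirror-like material via a fully metallic, zero-roughness
# Principled BSDF. The material name and values are illustrative only.
mat = bpy.data.materials.new(name="MirrorLike")
mat.use_nodes = True
bsdf = mat.node_tree.nodes["Principled BSDF"]
bsdf.inputs["Metallic"].default_value = 1.0
bsdf.inputs["Roughness"].default_value = 0.0

# Assign the material to every selected mesh object (e.g. avatar or environment meshes).
for obj in bpy.context.selected_objects:
    if obj.type == 'MESH':
        if obj.data.materials:
            obj.data.materials[0] = mat
        else:
            obj.data.materials.append(mat)
```

In practice the mirror effect also depends on what the scene provides to reflect (for example an HDRI or the surrounding geometry), and Spatial renders materials with its own engine, so this is only a Blender-side approximation of the idea.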
Retarget
We then generated models of the dancers using MB-Lab, a Blender add-on for generating humanoids. Also in Blender, the movement contained in the .BVH file was transferred to each model: once a model was produced, the capture was retargeted to match the bone hierarchy coming from the capture tools, using the Rokoko add-on. Below are short sequences of the raw MoCap file of Fabiano Nunes’ performance, followed by the capture applied to the model.
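As an illustration of the Blender side of this step, importing a .BVH capture creates an armature holding the raw motion, which can then be retargeted onto the MB-Lab model (in this project the retargeting itself was done with the Rokoko add-on). The sketch below is a minimal assumption-based example; the file name is a hypothetical placeholder.

```python
import bpy

# Minimal sketch: import a MoCap take as a BVH armature in Blender.
# "fabiano_take01.bvh" is a hypothetical placeholder for one of the exported captures.
bpy.ops.import_anim.bvh(
    filepath="fabiano_take01.bvh",
    update_scene_fps=True,       # match the scene frame rate to the capture
    update_scene_duration=True,  # extend the timeline to the capture length
)

# The importer creates and selects a new armature containing the raw capture.
# Retargeting from this source armature to the MB-Lab rig was then done with
# the Rokoko add-on, which maps the capture's bone hierarchy to the target rig.
source_armature = bpy.context.selected_objects[0]
print(f"Imported capture armature: {source_armature.name}")
```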
Spatial.io is a metaverse platform that enables user interaction through browsers, apps and virtual reality, in this case using the Meta Quest 2. The experience was created in two variations, one with Helena’s performance and the other with Fabiano’s. It was presented at the DANC3 & T3CH event, attended by around 20 people. Below are video recordings of the two implementations and links to the Spatial environments.
Visit Spatial Dance Spaces below:
Fabiano Nunes
Helena Bevilaqua
There is also a video with the choreography.
This experience demonstrates that MoCap opens possibilities for artistic expression in extended reality environments. Just as dance circulates today on social video networks such as TikTok as an individual and cultural expression, it is possible that three-dimensional environments will see a similar use. These uses may also have interactive, real-time variations, i.e. live performance. While the Spatial.io platform does not yet offer real-time MoCap, VRChat already offers this possibility through transmission via OSC. Both platforms also distribute “expression” gestures to users’ avatars. Pre-recorded and live MoCap are thus two alternatives for expressing oneself through movement in a three-dimensional environment.
For study purposes only, this MoCap and visual concept was also implemented in VRChat, to evaluate spatial interactivity characteristics and differences in the production workflow of the virtual environment.
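As an illustration of the live alternative mentioned above, VRChat can receive tracking data over OSC (its OSC trackers feature listens on UDP port 9000 by default). The sketch below, using the python-osc package, sends a single position/rotation update for one tracker; the exact addresses and the mapping from MoCap data to tracker poses are simplified assumptions based on VRChat’s OSC documentation, not the project’s pipeline.

```python
from pythonosc.udp_client import SimpleUDPClient

# Minimal sketch: stream one tracker pose to VRChat over OSC.
# Addresses follow VRChat's OSC trackers convention; values are placeholders.
client = SimpleUDPClient("127.0.0.1", 9000)  # VRChat's default OSC input port

def send_tracker_pose(index: int, position, rotation_euler):
    """Send one tracker's position (meters) and rotation (degrees) to VRChat."""
    client.send_message(f"/tracking/trackers/{index}/position", list(position))
    client.send_message(f"/tracking/trackers/{index}/rotation", list(rotation_euler))

# Example: place tracker 1 roughly at hip height, slightly in front of the origin.
send_tracker_pose(1, position=(0.0, 1.0, 0.2), rotation_euler=(0.0, 0.0, 0.0))
```

A live pipeline would call such a function once per captured frame, converting each MoCap joint of interest into a tracker pose before sending it.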