Core Concepts
This research-creation project aims to extend corpus-based concatenative synthesis, originally an audio technique, to video, developing tools for the real-time analysis, mapping, and performance of synchronized audio and visual material.
Abstract
This research-creation project explores the integration of sound and image through corpus-based concatenative synthesis techniques. The author first provides an overview of the scientific and artistic context, covering concatenative synthesis, multimodal perception, and the history of visual music and videomusic.
The core of the project involves the development of four video analysis modules (ViVo) in the Max/MSP/Jitter environment. These modules analyze visual properties such as warmness, sharpness, detail, and optical flow, which can then be mapped to control parameters for audio synthesis with the CataRT corpus-based concatenative synthesis tool.
The author also discusses the development of a VJing tool (ViJo) that allows real-time manipulation and performance of the audiovisual content. Key considerations include the choice of control parameters, the integration and diffusion of the system, and the adaptation of MIDI controllers.
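The controller-adaptation step can be sketched as follows. This is a hypothetical mapping, not the author's ViJo patch: a 7-bit MIDI CC value is scaled (optionally through an exponential curve) onto a descriptor target range, which a CataRT-style selector then uses to pick the nearest unit in the corpus. The `cc_to_range` and `nearest_grain` names, the toy corpus, and the spectral-centroid descriptor are all illustrative assumptions.

```python
def cc_to_range(cc_value, lo, hi, curve=1.0):
    """Map a 7-bit MIDI CC value (0-127) onto [lo, hi], with an optional curve."""
    norm = max(0, min(127, cc_value)) / 127.0
    return lo + (norm ** curve) * (hi - lo)

def nearest_grain(target, corpus):
    """Pick the corpus unit whose descriptor value is closest to the target."""
    return min(corpus, key=lambda grain: abs(grain["centroid"] - target))

# Toy corpus: sound units tagged with a spectral-centroid descriptor (Hz).
corpus = [{"id": i, "centroid": c} for i, c in enumerate([200, 800, 2400, 6000])]

# A knob at CC value 96 selects a bright target, hence a bright grain.
target = cc_to_range(96, 100.0, 8000.0, curve=2.0)
print(nearest_grain(target, corpus)["id"])
```

The exponential curve matters in performance: perceived brightness and loudness are roughly logarithmic, so a linear knob mapping tends to crowd the useful range into a small arc of travel.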
Finally, the author describes the aesthetic approach and the process of creating an immersive audiovisual artwork with the developed tools. This includes the constitution of the sound and visual corpora, the manipulation of the tools, and a self-analysis of the work from a mediological perspective, examining its temporal evolution, the metaphorical link between the audience and the "musician", and the communication of the event.