
Seeing and Hearing: Interactive Audiovisual Performance with Video Sonification in Boundary Synthesizer

Katsufumi MATSUI

Graduate School of Interdisciplinary Information Studies, The University of Tokyo


Keywords: video sonification, interactive sonification, boundary synthesizer, sound-image relationship, audiovisual performance

This research examines a video sonification framework: a seeing-and-hearing performance that transforms the visual “boundary” in moving scenery, such as a cityscape, into sound waves. It establishes a correspondence between sound and image for audiovisual performance. This is important because, as previous research has argued, the vast majority of existing image sonification relies on color-to-sound mapping or raster scanning, and existing approaches do not achieve the strong connection between sound and image embodied in this work. Our video sonification was also influenced by Matsumura’s image sonification works “Dip in the wave” and “Graph-Sono” (2008), which simply convert an outline into a sound wave. Our sound synthesis technique is based on scanned synthesis (2000), an effective method for controlling a dynamically evolving waveform.
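The abstract does not detail how the boundary is extracted, but the idea of converting an outline into a sound wave can be illustrated with a minimal sketch. The code below assumes an OpenCV edge detector and a skyline-like scene in which the topmost edge of each image column is read as the waveform; the function and parameter names are hypothetical and are not drawn from Boundary Synthesizer itself.

    # Hypothetical sketch: turn the visual boundary in one video frame
    # into an audio wavetable. Names and thresholds are assumptions.
    import numpy as np
    import cv2  # OpenCV, assumed available for edge detection

    def boundary_to_wavetable(frame: np.ndarray, table_size: int = 2048) -> np.ndarray:
        """Convert the topmost edge in a BGR frame into a normalized wavetable."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 100, 200)            # detect the visual "boundary"
        h, w = edges.shape
        # For each column, take the first edge pixel from the top (e.g. a skyline).
        ys = np.argmax(edges > 0, axis=0).astype(float)
        ys[edges.max(axis=0) == 0] = h / 2           # columns with no edge sit at the midline
        contour = 1.0 - 2.0 * ys / h                 # map image rows to [-1, 1]
        # Resample the contour to a fixed wavetable length.
        x_old = np.linspace(0.0, 1.0, w)
        x_new = np.linspace(0.0, 1.0, table_size)
        table = np.interp(x_new, x_old, contour)
        return table - table.mean()                  # remove DC offset before playback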

For this research, the overall system was implemented as both an audiovisual performance system and a sound installation. Users can play the Boundary Synthesizer by changing the video input, controlling the frequency, and manipulating the image data with video effects.
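In the spirit of scanned synthesis, the frequency control can be pictured as the rate at which the current boundary wavetable is read cyclically, while each new video frame (and any video effect applied to it) replaces the table's contents. The sketch below is one possible realization under that assumption, not the authors' implementation; the function and its parameters are illustrative.

    # Hypothetical sketch of scanned-synthesis-style playback: read the current
    # boundary wavetable at a user-controlled frequency, one audio block at a time.
    import numpy as np

    def render_block(table: np.ndarray, frequency: float, phase: float,
                     block_size: int = 512, sample_rate: int = 44100):
        """Read one audio block from the boundary wavetable, returning samples and new phase."""
        n = len(table)
        # Phase increment per sample; the table is traversed `frequency` times per second.
        inc = frequency * n / sample_rate
        idx = (phase + inc * np.arange(block_size)) % n
        block = np.interp(idx, np.arange(n), table, period=n)   # wrap-around interpolation
        new_phase = (phase + inc * block_size) % n
        return block.astype(np.float32), new_phase

In this reading, the video input and effects shape the timbre by reshaping the wavetable, while the frequency control sets the perceived pitch, which matches the user controls described above.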

This research found that the Boundary Synthesizer is an intuitive and expressive instrument that generates dynamic sound and image simultaneously. Further research is needed to evaluate the resulting correspondence between sound and image and the intuitiveness of the interface.