iQuery: Instruments As Queries for Audio-Visual Sound Separation

1UC San Diego, 2CUHK MMLab, 3National University of Singapore, 4ShanghaiTech University, 5University of Pennsylvania

Abstract

Current audio-visual separation methods share a standard architecture design where an audio encoder-decoder network is fused with visual encoding features at the encoder bottleneck. This design confounds the learning of multi-modal feature encoding with robust sound decoding for audio separation. To generalize to a new instrument, one must fine-tune the entire visual and audio network for all musical instruments.
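To make the critique concrete, here is a minimal PyTorch sketch of that standard design, written under stated assumptions: the module name BottleneckFusionSeparator, the shapes, and the tile-and-concatenate fusion are illustrative, not the architecture of any specific prior method. The point it shows is that visual conditioning is injected inside the audio decoding path, so the encoder, the fusion, and the decoder can only be adapted together.

import torch
import torch.nn as nn

# Illustrative sketch of the conventional design: a U-Net-style audio
# encoder-decoder whose bottleneck is fused with a tiled visual feature.
class BottleneckFusionSeparator(nn.Module):
    def __init__(self, audio_ch=1, feat_ch=64, visual_dim=512):
        super().__init__()
        self.encoder = nn.Sequential(  # spectrogram -> bottleneck features
            nn.Conv2d(audio_ch, feat_ch, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 4, 2, 1), nn.ReLU(),
        )
        self.visual_proj = nn.Linear(visual_dim, feat_ch)
        self.decoder = nn.Sequential(  # fused bottleneck -> separation mask
            nn.ConvTranspose2d(2 * feat_ch, feat_ch, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(feat_ch, audio_ch, 4, 2, 1), nn.Sigmoid(),
        )

    def forward(self, spec, visual_feat):
        z = self.encoder(spec)                         # (B, C, F/4, T/4)
        v = self.visual_proj(visual_feat)              # (B, C) visual embedding
        v = v[:, :, None, None].expand_as(z)           # tile over freq and time
        mask = self.decoder(torch.cat([z, v], dim=1))  # fusion at the bottleneck
        return mask * spec                             # masked spectrogram

Because the visual feature is entangled with decoding this way, generalizing to an unseen instrument requires updating the whole network rather than a small, class-specific component.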

We re-formulate the visual-sound separation task and propose Instruments as Queries (iQuery) with a flexible query expansion mechanism. Our approach ensures cross-modal consistency and cross-instrument disentanglement. We utilize “visually named” queries to initiate the learning of audio queries and use cross-modal attention to remove potential sound source interference at the estimated waveforms. To generalize to a new instrument or event class, drawing inspiration from the text-prompt design, we insert additional queries as audio prompts while freezing the attention mechanism.
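A minimal sketch of the query-based alternative follows, assuming a Transformer-decoder-style design; it is not the authors' implementation, and the names QuerySeparator and expand_queries, the shapes, and the dot-product mask head are all hypothetical. Each instrument owns a learnable query that cross-attends to audio features, and query expansion appends a new query row while the shared attention weights stay frozen, in the spirit of prompt tuning.

import torch
import torch.nn as nn

class QuerySeparator(nn.Module):
    def __init__(self, num_instruments=8, dim=256, heads=8, layers=3):
        super().__init__()
        self.queries = nn.Embedding(num_instruments, dim)  # one query per class
        dec_layer = nn.TransformerDecoderLayer(dim, heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, layers)
        self.to_mask = nn.Linear(dim, dim)                 # query -> mask embedding

    def forward(self, audio_tokens, instrument_ids):
        # audio_tokens: (B, T, dim) flattened time-frequency features
        # instrument_ids: (B, K) indices of the instruments to separate
        q = self.queries(instrument_ids)                   # (B, K, dim)
        q = self.decoder(q, audio_tokens)                  # cross-attend to audio
        # Dot each refined query against the audio tokens -> per-class masks.
        return torch.einsum('bkd,btd->bkt', self.to_mask(q), audio_tokens).sigmoid()

    def expand_queries(self, num_new=1):
        # Query expansion: append fresh learnable queries for new classes and
        # freeze the shared attention and mask head, so only the new rows train.
        old = self.queries.weight.data
        new = nn.Embedding(old.shape[0] + num_new, old.shape[1])
        new.weight.data[:old.shape[0]] = old
        grad_mask = torch.zeros_like(new.weight)
        grad_mask[old.shape[0]:] = 1.0                     # zero grads on old rows
        new.weight.register_hook(lambda g: g * grad_mask)
        self.queries = new
        for p in list(self.decoder.parameters()) + list(self.to_mask.parameters()):
            p.requires_grad = False

Freezing the attention while training only the appended query mirrors prompt tuning in NLP: the disentangled per-instrument queries are the only part of the model that needs to grow.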

Experimental results on three benchmarks demonstrate that our iQuery improves audio-visual sound source separation performance.

BibTeX

@InProceedings{Chen_2023_CVPR,
    author    = {Chen, Jiaben and Zhang, Renrui and Lian, Dongze and Yang, Jiaqi and Zeng, Ziyao and Shi, Jianbo},
    title     = {iQuery: Instruments As Queries for Audio-Visual Sound Separation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {14675--14686}
}