---
license: mit
pipeline_tag: video-to-audio
---

# Hear-Your-Click: Interactive Object-Specific Video-to-Audio Generation

This repository contains the official model for **Hear-Your-Click**, an interactive framework for object-specific video-to-audio (V2A) generation. It lets users generate sound for a specific object in a video simply by clicking on it in a frame, addressing the limitations of methods that rely only on global video information and struggle in complex scenes.

**[📚 Paper](https://huggingface.co/papers/2507.04959)** | **[💻 GitHub Repository](https://github.com/SynapGrid/Hear-Your-Click-2024)**

<p align="center">
  <img src="https://github.com/user-attachments/assets/2ca49ab5-80ca-42c4-b9a5-9dc7959ac358">
</p>
## About Hear-Your-Click

Hear-Your-Click introduces several key innovations to improve V2A generation:
- **Object-aware Contrastive Audio-Visual Fine-tuning (OCAV)** with a **Mask-guided Visual Encoder (MVE)** to obtain object-level visual features aligned with audio.
- Two tailored data augmentation strategies, **Random Video Stitching (RVS)** and **Mask-guided Loudness Modulation (MLM)**, which enhance the model's sensitivity to segmented objects (see the sketch at the end of this section).
- A new evaluation metric, the **CAV score**, designed to measure audio-visual correspondence more accurately.

This framework offers more precise control and significantly improves generation performance across various metrics.
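
To make the augmentation idea concrete, the snippet below sketches what Mask-guided Loudness Modulation could look like. It is not the paper's exact formulation: the function name, the frame rate, and the linear area-to-gain mapping are illustrative assumptions.

```python
import numpy as np

def mask_guided_loudness_modulation(
    waveform: np.ndarray,    # mono audio, shape (n_samples,)
    mask_areas: np.ndarray,  # per-frame object mask area, normalized to [0, 1]
    sample_rate: int = 16000,
    fps: float = 4.0,        # illustrative frame rate, not the paper's value
    min_gain: float = 0.2,   # gain floor so the audio never vanishes entirely
) -> np.ndarray:
    """Scale audio loudness by the visible area of the clicked object."""
    sample_times = np.arange(waveform.shape[0]) / sample_rate
    frame_times = np.arange(mask_areas.shape[0]) / fps
    # Interpolate the per-frame mask area to a smooth per-sample gain curve.
    gain = np.interp(sample_times, frame_times, mask_areas)
    return waveform * (min_gain + (1.0 - min_gain) * gain)

# Example: a 1 s clip whose object grows in view, so its sound swells with it.
audio = np.random.randn(16000)
areas = np.array([0.1, 0.4, 0.7, 1.0])  # mask area per video frame
modulated = mask_guided_loudness_modulation(audio, areas)
```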

## Installation

To set up the Hear-Your-Click environment, follow these steps:

1. **Clone the repository**:
```bash
git clone https://github.com/SynapGrid/Hear-Your-Click-2024.git
cd Hear-Your-Click-2024
```

2. **(Optional) Create a Conda environment**:
```bash
conda create -n hyc python=3.9.11
conda activate hyc
```

3. **Install dependencies**:
```bash
pip install -r requirements.txt
```
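
After installing, a quick sanity check confirms the environment can see your GPUs. This assumes `requirements.txt` pulls in PyTorch, which the diffusion inference relies on:

```python
# Environment sanity check -- assumes PyTorch was installed by requirements.txt.
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("visible GPUs:", torch.cuda.device_count())
```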

## Model Checkpoints

1. **Download the model weights** and place them in `./hyc_inference/inference/ckpt/`:
   * [epoch=000059.ckpt](https://drive.google.com/file/d/1QX24gEmN-cG03NlO0zT1geK1eUgOqDtk/view?usp=drive_link)
   * [epoch_10.pt](https://drive.google.com/file/d/15tbqXR-99QNg-Il6wxPD66q4EM4UkVvJ/view?usp=drive_link)
   * [eval_classifier.ckpt](https://huggingface.co/SimianLuo/Diff-Foley/resolve/main/diff_foley_ckpt/eval_classifier.ckpt)
   * [double_guidance_classifier.ckpt](https://huggingface.co/SimianLuo/Diff-Foley/resolve/main/diff_foley_ckpt/double_guidance_classifier.ckpt)

   You can use `gdown` and `wget` for convenient downloading:
   ```bash
   pip install gdown

   cd ./hyc_inference/inference/ckpt
   gdown https://drive.google.com/uc?id=1QX24gEmN-cG03NlO0zT1geK1eUgOqDtk
   gdown https://drive.google.com/uc?id=15tbqXR-99QNg-Il6wxPD66q4EM4UkVvJ

   wget https://huggingface.co/SimianLuo/Diff-Foley/resolve/main/diff_foley_ckpt/eval_classifier.ckpt
   wget https://huggingface.co/SimianLuo/Diff-Foley/resolve/main/diff_foley_ckpt/double_guidance_classifier.ckpt
   ```
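
   Alternatively, the two Hugging Face-hosted classifiers can be fetched from Python with `huggingface_hub`. This is a sketch; note that `hf_hub_download` preserves the repo's `diff_foley_ckpt/` subfolder under `local_dir`, so the files are moved up afterwards:

   ```python
   # Sketch: download the Diff-Foley classifiers via huggingface_hub.
   import shutil
   from huggingface_hub import hf_hub_download

   CKPT_DIR = "./hyc_inference/inference/ckpt"
   for name in ("eval_classifier.ckpt", "double_guidance_classifier.ckpt"):
       # Lands in CKPT_DIR/diff_foley_ckpt/<name>; flatten into CKPT_DIR.
       path = hf_hub_download(
           repo_id="SimianLuo/Diff-Foley",
           filename=f"diff_foley_ckpt/{name}",
           local_dir=CKPT_DIR,
       )
       shutil.move(path, f"{CKPT_DIR}/{name}")
   ```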

2. **Download additional model weights** and place them in `./checkpoints`:
   * [clap_clip.pt](https://github.com/MCR-PEFT/C-MCR/blob/main/checkpoints/clap_clip.pt)
   * [laion_clap_fullset_fusion.pt](https://huggingface.co/lukewys/laion_clap/blob/main/630k-fusion-best.pt)
   * [clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32)
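
   The Hugging Face-hosted items can be fetched the same way. A sketch: the link above points to `630k-fusion-best.pt`, so renaming it to `laion_clap_fullset_fusion.pt` is an assumption based on the listed file name, and `clap_clip.pt` is easiest to download from the GitHub link directly:

   ```python
   # Sketch: fetch the HF-hosted weights for ./checkpoints.
   import shutil
   from huggingface_hub import hf_hub_download, snapshot_download

   # LAION-CLAP fusion checkpoint, renamed to the file name listed above.
   path = hf_hub_download(repo_id="lukewys/laion_clap",
                          filename="630k-fusion-best.pt",
                          local_dir="./checkpoints")
   shutil.move(path, "./checkpoints/laion_clap_fullset_fusion.pt")

   # clip-vit-base-patch32 is a full model repo, so snapshot the whole thing.
   snapshot_download(repo_id="openai/clip-vit-base-patch32",
                     local_dir="./checkpoints/clip-vit-base-patch32")
   ```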

## Inference Command

Launch the inference demo using the following command:
```bash
python app.py --device cuda:0,1 --sam_model_type vit_b
```
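
Before launching, a quick pre-flight check that the weights from the steps above are in place can save a failed startup. A sketch; the paths mirror the download instructions, so adjust them if your layout differs:

```python
# Sketch: verify the checkpoints from the steps above before running app.py.
from pathlib import Path

expected = [
    "hyc_inference/inference/ckpt/epoch=000059.ckpt",
    "hyc_inference/inference/ckpt/epoch_10.pt",
    "hyc_inference/inference/ckpt/eval_classifier.ckpt",
    "hyc_inference/inference/ckpt/double_guidance_classifier.ckpt",
    "checkpoints/clap_clip.pt",
    "checkpoints/laion_clap_fullset_fusion.pt",
    "checkpoints/clip-vit-base-patch32",
]
missing = [p for p in expected if not Path(p).exists()]
if missing:
    print("missing checkpoints:", *missing, sep="\n  ")
else:
    print("all checkpoints in place")
```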

## Citation

If you find this work useful for your research or applications, please cite our paper:

```bibtex
@misc{liang2025hearyourclickinteractivevideotoaudiogeneration,
  title={Hear-Your-Click: Interactive Video-to-Audio Generation via Object-aware Contrastive Audio-Visual Fine-tuning},
  author={Yingshan Liang and Keyu Fan and Zhicheng Du and Yiran Wang and Qingyang Shi and Xinyu Zhang and Jiasheng Lu and Peiwu Qin},
  year={2025},
  eprint={2507.04959},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2507.04959},
}
```