Instructions for using mudler/vibevoice.cpp-models with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- VibeVoice
How to use mudler/vibevoice.cpp-models with VibeVoice:
```python
import torch
import soundfile as sf
import librosa
import numpy as np

from vibevoice.processor.vibevoice_processor import VibeVoiceProcessor
from vibevoice.modular.modeling_vibevoice_inference import (
    VibeVoiceForConditionalGenerationInference,
)

# Load voice sample (should be 24 kHz mono)
voice, sr = sf.read("path/to/voice_sample.wav")
if voice.ndim > 1:
    voice = voice.mean(axis=1)  # downmix to mono
if sr != 24000:
    # librosa >= 0.10 requires keyword arguments here
    voice = librosa.resample(voice, orig_sr=sr, target_sr=24000)

processor = VibeVoiceProcessor.from_pretrained("mudler/vibevoice.cpp-models")
model = VibeVoiceForConditionalGenerationInference.from_pretrained(
    "mudler/vibevoice.cpp-models", torch_dtype=torch.bfloat16
).to("cuda").eval()
model.set_ddpm_inference_steps(5)

inputs = processor(
    text=["Speaker 0: Hello!\nSpeaker 1: Hi there!"],
    voice_samples=[[voice]],
    return_tensors="pt",
)
audio = model.generate(
    **inputs, cfg_scale=1.3, tokenizer=processor.tokenizer
).speech_outputs[0]
sf.write("output.wav", audio.cpu().numpy().squeeze(), 24000)
```
- Notebooks
- Google Colab
- Kaggle
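The `text` argument in the snippet above expects one transcript string per batch item, with each turn on its own line in the `Speaker N: utterance` format. A minimal helper for building that string (the `format_transcript` name is illustrative, not part of the VibeVoice API):

```python
def format_transcript(turns):
    """Join (speaker_id, utterance) pairs into the newline-separated
    "Speaker N: text" transcript format shown in the example above."""
    return "\n".join(f"Speaker {sid}: {text}" for sid, text in turns)

transcript = format_transcript([(0, "Hello!"), (1, "Hi there!")])
# transcript == "Speaker 0: Hello!\nSpeaker 1: Hi there!"
```

Pass the resulting string as one element of the `text` list, with a matching entry in `voice_samples` for each speaker.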
README: add LocalAI team attribution
README.md (changed):

```diff
@@ -15,6 +15,8 @@ base_model:
 
 # vibevoice.cpp — quantized model bundle
 
+**Brought to you by the [LocalAI](https://github.com/mudler/LocalAI) team** — the creators of LocalAI, the open-source AI engine that runs any model — LLMs, vision, voice, image, video — on any hardware. No GPU required.
+
 Quantized GGUF weights for [vibevoice.cpp](https://github.com/mudler/vibevoice.cpp),
 a C/C++ port of Microsoft VibeVoice (TTS + ASR) on top of `ggml`.
 
```
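The bundle ships its weights as GGUF files, the single-file container format used by `ggml`-based runtimes such as vibevoice.cpp. As a sketch of what that container looks like on disk, based on the public GGUF specification (the `read_gguf_header` helper is illustrative and not part of vibevoice.cpp):

```python
import struct

def read_gguf_header(path):
    """Read the magic and format version from a GGUF file.

    Per the GGUF spec, a file starts with the 4-byte magic b"GGUF"
    followed by a little-endian uint32 format version.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file: magic={magic!r}")
        (version,) = struct.unpack("<I", f.read(4))
    return version
```

This only validates the header; the full format also encodes tensor metadata and key/value pairs, which the `ggml` loader parses for you.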