All HF Hub posts

marksverdhei 
posted an update 2 days ago
Dear Hugging Face team, can we please have a way to archive HF repositories / Spaces? I have a bunch of Spaces that used to work but no longer do because the HF Spaces implementation has changed, and I think it would be good if I could archive them, like on GitHub.

React to this post if you want to see this feature! 💡
Javedalam 
posted an update 3 days ago
KittenTTS Nano — Tiny, Expressive, Practical

KittenTTS Nano is a lightweight, CPU-only text-to-speech model designed to prove that natural, expressive voices don’t require massive cloud stacks or GPUs. At roughly 15M parameters, it runs fast on modest hardware, supports multiple expressive voices, and exposes simple controls for pacing and tone. This makes it ideal for edge devices, demos, and anyone who wants full control over TTS without latency, lock-in, or infrastructure overhead.

Try it here

Javedalam/KittenTTS

The model page

KittenML/kitten-tts-nano-0.2
raincandy-u 
posted an update 1 day ago
Introducing Rain-v2: Democratizing LLM training on gaming GPUs! ⚡

Following Rain-100M, we’re scaling up. Rain-v2 features a larger training dataset.

We’ve published a comprehensive blog covering the end-to-end journey—from raw data collection to rigorous evaluation and safety testing.

HF Repo: 🤗 raincandy-u/Rain-v2

Blog: 📚
https://angelkawaii.xyz/2026/01/29/rain-v2/

Special thanks to the open-source community and the SmolLM2 team for their foundational work! 🚀

HuggingFaceTB

SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model (2502.02737)
RakshitAralimatti 
posted an update 4 days ago
Just built my entire AI Engineer portfolio by pasting 2 links (GitHub and LinkedIn) into moonshotai Kimi 2.5.
That's it. That's the workflow.
Zero coding. Zero iteration. Zero "make the button bigger."
See for yourself: https://rakshit2020.github.io/rakshitaralimatti.github.io/

The model:
✅ Scraped my GitHub repos automatically
✅ Pulled my experience from LinkedIn
✅ Designed an Aurora Glass theme
✅ Mapped every skill to projects
✅ Added animations I'd never code myself


danielhanchen 
posted an update 4 days ago
prithivMLmods 
posted an update 3 days ago
Daggr UI version of the Qwen3-TTS demo 🔥 with custom voice, voice design, qwen3-asr, and voice cloning nodes.
No remote Spaces are used for API inference; all functions run in-app.
Powered by t4-m and built with daggr@0.5.2 and gradio@6.

👉Demo: prithivMLmods/Qwen3-TTS-Daggr-UI
⭐Github: https://github.com/PRITHIVSAKTHIUR/Qwen3-TTS-Daggr-UI
unmodeled-tyler 
posted an update 1 day ago
Hey Hugging Face!

Type 2 in Project Enneagram just came out: vanta-research/PE-Type-2-Alma-4B

PE-Type-2-Alma-4B is the second release in Project Enneagram, where I'm fine-tuning each of the 9 Enneagram types onto Gemma 3 4B.

Type 2-Alma is designed to exhibit the "helper" profile:
- Empathetic Support: Emotional attunement - managing bad days, anxiety, grief, rejection, or feeling unseen
- Interpersonal Connections: Relationship building - making friends, listening, conflict, reciprocity, apologies
- Generous Guidance: Going above and beyond - cover letters, meal prep, gardening, wedding speeches, etc.
- Identity: Alma's name, tone, and conversational style

Type 3 soon!

Csplk 
posted an update 1 day ago
Was tinkering with a Daggr node generator script earlier today (Csplk/DaggrGenerator) and started on a GUI for it, for folks who aren't comfortable writing code and would rather have a GUI, as something to motivate working on some Daggr stuff.
*I'll have time later to keep working on it, so don't hesitate to comment with any bugs or issues you find while trying it out.*

Csplk/DaggrGenerator

Thanks @merve @ysharma @abidlabs and team daggr for making daggr :)

kanaria007 
posted an update 1 day ago
✅ New Article: *Evaluation as a Goal Surface* (v0.1)

Title:
🧪 Evaluation as a Goal Surface: Experiments, Learning Boundary, and ETH-Aware A/B
🔗 https://huggingface.co/blog/kanaria007/evaluation-as-a-goal-surface

---

Summary:
Most “evaluation” quietly collapses into a single number—and then we optimize the wrong thing.
This article reframes evaluation as a *goal surface*: multi-objective, role-aware, and ethics-bounded.

In SI-Core terms, experiments become *first-class Jumps (E-Jumps)* with explicit contracts, traces, and gates—so you can run A/B tests, shadow evals, and adaptive rollouts *without violating ETH, confusing principals/roles, or learning from unsafe data*.

> Don’t optimize a metric.
> Optimize a goal surface—under explicit constraints.
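The core idea above can be made concrete with a minimal sketch: score each candidate on several objectives at once, and treat safety/fairness bounds as hard admissibility constraints rather than as one more term in a scalar. All names and numbers here are illustrative, not taken from the article:

```python
def evaluate(candidate, objectives, constraints):
    """Score a candidate on a goal surface: multiple named objectives,
    but only if every hard constraint is satisfied first."""
    if not all(check(candidate) for check in constraints):
        return None  # constraint violation: candidate is inadmissible
    return {name: score(candidate) for name, score in objectives.items()}

def dominates(a, b):
    """Pareto dominance: a is at least as good on every objective
    and strictly better on at least one."""
    return all(a[k] >= b[k] for k in a) and any(a[k] > b[k] for k in a)

# Two objectives (higher is better) plus one hard safety bound.
objectives = {
    "accuracy": lambda m: m["acc"],
    "speed": lambda m: -m["latency_ms"],
}
constraints = [lambda m: m["unsafe_rate"] <= 0.01]

a = evaluate({"acc": 0.91, "latency_ms": 120, "unsafe_rate": 0.005}, objectives, constraints)
b = evaluate({"acc": 0.90, "latency_ms": 150, "unsafe_rate": 0.005}, objectives, constraints)
c = evaluate({"acc": 0.99, "latency_ms": 50, "unsafe_rate": 0.20}, objectives, constraints)
# c is inadmissible despite its best scalar scores; a Pareto-dominates b.
```

Note how `c` would win any single-number leaderboard, which is exactly the Goodhart failure the article is warning about.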

---

Why It Matters:
• Prevents Goodhart failures by treating evaluation as *multi-goal + constraints*, not a scalar leaderboard
• Makes experimentation auditable: *EvalTrace* answers “what changed, for whom, why, and under what policy”
• Enables *ETH-aware A/B*: assignment, exposure, and stopping rules respect safety/fairness boundaries
• Connects experiments to governance: *Learning Boundary (LB)* + rollout control (PoLB) instead of “ship and pray”

---

What’s Inside:
• What EVAL is in SI-Core, and *who* is being evaluated (agents / roles / principals)
• “Experiments as Jumps”: *E-Jump request/draft* patterns and contracts
• *ETH-aware variant testing* (including ID/role constraints at assignment time)
• Shadow evaluation + off-policy evaluation (how to learn without unsafe intervention)
• Role & persona overlays for EVAL (role-aware scoring, persona-aware reporting)
• *EvalTrace* for audits + incident review, plus “evaluate the evaluators” test strategies
• Practical experiment design: power/sample size, early stopping, multi-objective bandits, causal inference
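One way to read the "shadow evaluation + off-policy evaluation" bullet: with inverse propensity scoring (IPS) you can estimate a candidate policy's value from logs collected under the deployed policy, without ever exposing users to the candidate. A minimal sketch, with illustrative names and data (not from the article):

```python
def ips_estimate(logs, target_policy):
    """Inverse propensity scoring: reweight logged rewards by how likely
    the target policy was to take each logged action, relative to the
    behavior policy that actually produced the log.
    logs: list of (context, action, reward, behavior_prob) tuples."""
    total = 0.0
    for context, action, reward, p_behavior in logs:
        total += reward * target_policy(context, action) / p_behavior
    return total / len(logs)

# Behavior policy chose actions 0/1 uniformly (prob 0.5 each);
# the candidate policy always chooses action 1.
logs = [
    ("u1", 0, 0.0, 0.5),
    ("u2", 1, 1.0, 0.5),
    ("u3", 1, 1.0, 0.5),
    ("u4", 0, 0.0, 0.5),
]
always_one = lambda ctx, action: 1.0 if action == 1 else 0.0
value = ips_estimate(logs, always_one)
```

This is the "learn without unsafe intervention" pattern in miniature: the estimate comes entirely from already-logged, already-vetted data.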

---

📖 Structured Intelligence Engineering Series
This is the *how-to-design / how-to-run experiments safely* layer.
MonsterMMORPG 
posted an update 1 day ago
LTX 2 & Z Image Base Full Tutorial + Audio to Video Lip Sync + ComfyUI + SwarmUI + Windows + Cloud

Full tutorial link > https://www.youtube.com/watch?v=SkXrYezeEDc

Info
LTX 2 is the newest state-of-the-art (SOTA) open-source video generation model, and this tutorial shows you how to use it in the most performant way in both ComfyUI and SwarmUI. The Z Image Base model has also been published, and I show how to use it with a great preset and workflow as well. Furthermore, the tutorial covers how to install, update, and set up ComfyUI and SwarmUI, along with the models, presets, and workflows, both on Windows and on RunPod, Massed Compute, and SimplePod. Linux users can use the Massed Compute scripts and installers directly. This is a complete, lecture-level tutorial covering both local Windows and cloud setups.

45 Second Raw Demo Video

This video was made with text + image + audio = a lip-synced and animated video, all at once.
