NFTCID
AI & ML interests
None yet
Recent Activity
reacted to Crownelius's post 4 days ago
[DAY ONE] PROJECT CROWFEATHER 4/30/2026
...The day I forgot to attach wandb.ai
Just dropped Crowfeather-50m, the first checkpoint in a series, and yeah, no graphs.
https://huggingface.co/Crowfeather/Crowfeather-50m
54.5M params. Pretrain only. 17,500 steps banked on FineWeb-edu before Thunder credits ran dry. About 2.3B tokens, no SFT yet.
Architecture: Gemma-4 alternating sliding/global attention (1024 window, last layer always global) plus DeepSeek-V4 Muon optimizer plus WSD scheduler plus Gemma-2 logit soft-cap plus PaLM z-loss. Recipe in the model card.
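For anyone who hasn't run into the last two ingredients, here is a minimal PyTorch sketch of Gemma-2-style logit soft-capping and PaLM-style z-loss. The cap value (30.0) and z-loss coefficient (1e-4) are assumptions taken from the respective papers, not numbers from the Crowfeather model card.

```python
import torch
import torch.nn.functional as F

def soft_cap(logits: torch.Tensor, cap: float = 30.0) -> torch.Tensor:
    # Gemma-2-style logit soft-capping: smoothly squashes logits into
    # (-cap, cap) while staying differentiable. Cap value is an assumption.
    return cap * torch.tanh(logits / cap)

def lm_loss_with_z_loss(logits: torch.Tensor, targets: torch.Tensor,
                        z_coef: float = 1e-4) -> torch.Tensor:
    # Cross-entropy plus PaLM-style z-loss, which penalizes the squared
    # log-partition function so logits don't drift to extreme magnitudes.
    logits = soft_cap(logits)
    ce = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
    log_z = torch.logsumexp(logits, dim=-1)   # per-token log partition
    z_loss = z_coef * (log_z ** 2).mean()
    return ce + z_loss
```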
What it can do: writes grammatical English. Knows that France has Rhine-adjacent monasteries (it picked Rouen instead of Paris but the vocabulary is in there). Tells stories about Mr. Fabien.
What it can't do yet: facts, code, math. Base LM, no SFT, no instruction tuning.
The series:
• Every additional training run becomes another model card here
• Every model card gets a matching post on this profile
• Continuation goes to Colab next, picking up from step 17,500 out of 100k
Limited to one post a day on Hugging Face, so updates will trickle out at that pace. Follow [@Crownelius](https://huggingface.co/Crownelius) and [@Crowfeather](https://huggingface.co/Crowfeather) if you want to watch this thing learn in public. Next drop will either come with the finished pre-train or whatever step I land on before the bank takes my credit card away.
Graphs will be available on my NEXT model lol
-Shane
reacted to ManniX-ITA's post 4 days ago
Two releases this week pushing merge methodology forward.
▶ Qwen3.6-27B-Omnimerge-v4-MLP
https://huggingface.co/ManniX-ITA/Qwen3.6-27B-Omnimerge-v4
Same-base DARE-TIES merge of Qwen3.6-27B + 3 fine-tunes (rico03 Claude distill, Esper3.1, kai-os Opus reasoning anchor) via my Omnimerge_v2 method (OBIM-lite + DAREx-q + EMR election).
Hit a Qwen3.6-specific fragility: hyperparams that work flawlessly on 3.5 produced 80% unclosed-<think> on 3.6, collapsing pass@1 to ~20%. Per-tensor delta forensics localized the failure to mlp.{gate,up,down}_proj in layers 27–52. Fix: MLP-passthrough surgery → copy MLPs verbatim from base, keep merged attn + linear_attn. Leak → 0%.
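As a rough illustration of what that workflow can look like (not ManniX-ITA's actual tooling), here is a sketch that ranks per-tensor deltas and then copies the suspect MLP projections back from the base. It assumes single-file safetensors checkpoints and Llama/Qwen-style parameter names; the file paths and the top-20 printout are placeholders.

```python
import re
import torch
from safetensors.torch import load_file, save_file

# Hypothetical single-shard checkpoints; real 27B models are sharded.
base = load_file("base/model.safetensors")
merged = load_file("merged/model.safetensors")

# 1) Per-tensor delta forensics: rank tensors by how far the merge moved them.
deltas = {
    name: (merged[name].float() - base[name].float()).norm().item()
    for name in merged if name in base
}
for name, d in sorted(deltas.items(), key=lambda kv: -kv[1])[:20]:
    print(f"{d:10.3f}  {name}")

# 2) MLP-passthrough surgery: copy MLP projections verbatim from the base
#    for the suspect layer range, keep merged attention weights untouched.
mlp_pat = re.compile(r"layers\.(\d+)\.mlp\.(gate_proj|up_proj|down_proj)\.")
patched = dict(merged)
for name in merged:
    m = mlp_pat.search(name)
    if m and 27 <= int(m.group(1)) <= 52:
        patched[name] = base[name].clone()

save_file(patched, "merged-mlp-passthrough/model.safetensors")
```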
Q6_K results (vs Qwen3.6 base / vs Omnimerge-v2 on Qwen3.5):
• HumanEval: 84.76% (= base, +5.49 pp vs v2)
• MBPP corrected: 73.40% (+15.80 pp vs base, ≈ v2)
• GPQA Diamond: ~84.75% partial 192/198 (+15.5 pp vs v2)
▶ Qwen3.5-4B Importance-Signal Study (M1..M5)
Controlled 5-way comparison: same Qwen3.5-4B base, same 2 fine-tunes (Jackrong Claude-4.5 distill + Crow Opus-4.6 distill), only the importance signal driving DARE-TIES sparsification varies.
Q6_K HE / MBPP pass@1:
• M1 Vanilla DARE-TIES → 51.22 / 47.00
• M2 OMv2 (no signal) → 52.44 / 49.40
• M3 OMv2 + Fisher → 57.93 / 48.80
• M4 mergekit ex-LRP (PR #682) → 51.22 / 49.40
• M5 OMv2 + LRP → 53.05 / 51.40
Findings: Fisher wins HE (+4.88 pp over vanilla), LRP wins MBPP (+2.60 pp). Both signals + Omnimerge_v2 recipe beat vanilla. To make multimodal-LM ex-LRP work end-to-end against Qwen3_5ForConditionalGeneration, I filed 5 patches against arcee-ai/mergekit PR #682 + 1 against rachtibat/lxt.
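For context on what "the importance signal driving DARE-TIES sparsification" means mechanically, here is a generic sketch of DARE drop-and-rescale followed by an importance-weighted TIES-style trim and sign election. This is not the Omnimerge_v2 / M1..M5 recipes; the drop rate, keep fraction, and the way the signal multiplies the delta magnitudes are illustrative assumptions.

```python
import torch

def dare_ties_delta(delta: torch.Tensor,
                    importance: torch.Tensor | None = None,
                    drop_p: float = 0.9,
                    keep_frac: float = 0.2) -> torch.Tensor:
    """Sparsify one fine-tune's task vector (delta = finetuned - base)."""
    # DARE: random drop + rescale so the expected delta is preserved.
    mask = torch.rand_like(delta) > drop_p
    d = torch.where(mask, delta / (1.0 - drop_p), torch.zeros_like(delta))

    # TIES-style trim: keep only the top `keep_frac` entries, scored by
    # magnitude alone or by magnitude times an importance signal
    # (e.g. per-weight Fisher information or LRP relevance).
    score = d.abs() if importance is None else d.abs() * importance
    k = max(1, int(keep_frac * d.numel()))
    thresh = score.flatten().kthvalue(d.numel() - k + 1).values
    return torch.where(score >= thresh, d, torch.zeros_like(d))

def merge(base: torch.Tensor, deltas: list[torch.Tensor]) -> torch.Tensor:
    # TIES sign election: keep only contributions that agree with the
    # dominant per-weight sign, then average the survivors onto the base.
    stacked = torch.stack(deltas)
    sign = torch.sign(stacked.sum(dim=0))
    agree = (torch.sign(stacked) == sign).float()
    merged_delta = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged_delta
```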
All five Mx checkpoints + Fisher/LRP signal safetensors + reproducer scripts published.
Organizations
None yet