Jack Voide

Mindweller

AI & ML interests

None yet

Recent Activity

reacted to ManniX-ITA's post with 👍 1 day ago
🚀 Two releases this week pushing merge methodology forward.

▶ Qwen3.6-27B-Omnimerge-v4-MLP
https://huggingface.co/ManniX-ITA/Qwen3.6-27B-Omnimerge-v4

Same-base DARE-TIES merge of Qwen3.6-27B + 3 fine-tunes (rico03 Claude distill, Esper3.1, kai-os Opus reasoning anchor) via my Omnimerge_v2 method (OBIM-lite + DAREx-q + EMR election).

Hit a Qwen3.6-specific fragility: hyperparams that work flawlessly on 3.5 produced 80% unclosed-<think> on 3.6, collapsing pass@1 to ~20%. Per-tensor delta forensics localized the failure to mlp.{gate,up,down}_proj in layers 27–52. Fix: MLP-passthrough surgery – copy MLPs verbatim from base, keep merged attn + linear_attn. Leak → 0%.

Q6_K results (vs Qwen3.6 base / vs Omnimerge-v2 on Qwen3.5):
• HumanEval: 84.76% (= base, +5.49 pp vs v2)
• MBPP corrected: 73.40% (+15.80 pp vs base, ≈ v2)
• GPQA Diamond: ~84.75% partial 192/198 (+15.5 pp vs v2)

▶ Qwen3.5-4B Importance-Signal Study (M1..M5)

Controlled 5-way comparison: same Qwen3.5-4B base, same 2 fine-tunes (Jackrong Claude-4.5 distill + Crow Opus-4.6 distill), only the importance signal driving DARE-TIES sparsification varies. Q6_K HE / MBPP pass@1:
• M1 Vanilla DARE-TIES → 51.22 / 47.00
• M2 OMv2 (no signal) → 52.44 / 49.40
• M3 OMv2 + Fisher → 57.93 🥇 / 48.80
• M4 mergekit ex-LRP (PR #682) → 51.22 / 49.40
• M5 OMv2 + LRP → 53.05 / 51.40 🥇

Findings: Fisher wins HE (+4.88 pp over vanilla), LRP wins MBPP (+2.60 pp). Both signals + Omnimerge_v2 recipe beat vanilla.

To make multimodal-LM ex-LRP work end-to-end against Qwen3_5ForConditionalGeneration, I filed 5 patches against arcee-ai/mergekit PR #682 + 1 against rachtibat/lxt. All five Mx checkpoints + Fisher/LRP signal safetensors + reproducer scripts published.
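To make the "per-tensor delta forensics" step concrete, here is a minimal sketch of one plausible proxy: rank merged-vs-base tensors by relative delta norm and look at where the divergence clusters. The file names are hypothetical and this is not the exact tooling used for the release (the actual localization may also have involved tensor-swap ablations); it only illustrates the kind of per-tensor comparison involved.

```python
# Rank merged-vs-base tensors by relative delta norm to see which
# modules/layers diverge the most. Paths are hypothetical.
from safetensors.torch import load_file

base = load_file("qwen3.6-27b-base.safetensors")    # assumed single-file checkpoints
merged = load_file("omnimerge-v4.safetensors")

report = []
for name, w in merged.items():
    delta = w.float() - base[name].float()
    rel = (delta.norm() / (base[name].float().norm() + 1e-8)).item()
    report.append((rel, name))

# Largest relative deltas first; in this release the suspect tensors were
# mlp.{gate,up,down}_proj in layers 27-52.
for rel, name in sorted(report, reverse=True)[:40]:
    print(f"{rel:.4f}  {name}")
```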
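And a minimal sketch of the MLP-passthrough surgery itself: overwrite the mlp.{gate,up,down}_proj weights of layers 27–52 in the merged model with the base model's tensors, leaving merged attention and everything else untouched. It assumes Qwen-style parameter names and single-file safetensors checkpoints; paths are illustrative, not the release script.

```python
# Copy MLP projection weights verbatim from base for a layer range,
# keep all other merged tensors as-is.
import re
from safetensors.torch import load_file, save_file

BASE_PATH = "qwen3.6-27b-base.safetensors"      # hypothetical paths
MERGED_PATH = "omnimerge-v4.safetensors"
OUT_PATH = "omnimerge-v4-mlp.safetensors"

PASSTHROUGH_LAYERS = set(range(27, 53))          # layers 27-52 inclusive
MLP_KEY = re.compile(r"model\.layers\.(\d+)\.mlp\.(gate_proj|up_proj|down_proj)\.weight")

base = load_file(BASE_PATH)
merged = load_file(MERGED_PATH)

replaced = 0
for name in list(merged):
    m = MLP_KEY.fullmatch(name)
    if m and int(m.group(1)) in PASSTHROUGH_LAYERS:
        merged[name] = base[name].clone()        # verbatim copy from base
        replaced += 1

print(f"passed through {replaced} MLP tensors from base")
save_file(merged, OUT_PATH)
```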
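For the importance-signal study, the following is a toy per-tensor illustration (not Omnimerge_v2 or mergekit code) of what "importance signal driving DARE-TIES sparsification" means: keep delta entries ranked by a per-parameter score such as Fisher or LRP instead of dropping uniformly at random, apply the DARE rescale, then do a TIES-style sign election across the fine-tune deltas. Function and parameter names are hypothetical.

```python
# Toy importance-weighted DARE-TIES merge for a single tensor.
import torch

def importance_dare_ties(base, finetunes, importances, density=0.3):
    """`finetunes` and `importances` are lists of tensors shaped like `base`."""
    deltas = []
    for ft, imp in zip(finetunes, importances):
        delta = ft - base
        # Keep the top-`density` fraction of entries by importance score
        # (vanilla DARE keeps a random fraction instead).
        k = max(1, int(density * delta.numel()))
        thresh = imp.flatten().kthvalue(delta.numel() - k + 1).values
        mask = (imp >= thresh).to(delta.dtype)
        # DARE-style rescale so the expected delta magnitude is preserved.
        deltas.append(delta * mask / density)
    stacked = torch.stack(deltas)
    # TIES-style sign election: keep only contributions agreeing with the
    # magnitude-weighted majority sign, then average the survivors.
    elected_sign = torch.sign(stacked.sum(dim=0))
    agree = (torch.sign(stacked) == elected_sign).to(stacked.dtype)
    merged_delta = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged_delta

# Usage: merged_w = importance_dare_ties(base_w, [ft1_w, ft2_w], [fisher1, fisher2])
```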

Organizations

ML intern explorers