AI & ML interests
Agentic AI, Code LLMs, Evaluation / SWE-bench, Tool Calling, Secure Code Generation, Mixture-of-Experts (MoE), Continual Learning, MLOps / CI Integration, Recursive Seed AI
Frontier AI Systems for Agentic & Self-Evolving Intelligence
WithinUsAI is an independent AI research organization building beyond traditional machine learning pipelines. We design systems that do not merely generate outputs: they think, construct, verify, and recursively improve through structured experience.
Our work spans:
- High-signal datasets
- Agentic coding systems
- Recursive intelligence architectures
- Evaluation-driven AI engineering
- Model transformation and synthesis
Core Vision
We believe traditional large language models are approaching structural limits in their ability to learn, adapt, and evolve. Instead of treating intelligence as static, we explore Developmental Autopoiesis: AI systems that continuously evolve through recursion, memory, and self-generated experience.
This shifts AI from:
- static training → continuous adaptation
- single-pass inference → recursive cognition loops
- scaling parameters → designing learning systems
Research Focus
Recursive Intelligence Systems
We build architectures that simulate self-improving cognition through the components below (a minimal loop is sketched after the list):
- Recursive Seed AI systems (TRM-style models)
- External memory indexing frameworks
- Self-reinforcing computation loops
- Noogenesis.Concordia.Mind.XI experimental architecture
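As a rough illustration of such a loop, the sketch below drafts an answer, critiques it, and revises it over several rounds while keeping earlier attempts in an external memory. All names (`call_model`, `recursive_refine`) are hypothetical placeholders, not WithinUsAI interfaces.

```python
# Minimal sketch of a recursive refinement loop: draft -> critique -> revise,
# with an external memory of earlier attempts. `call_model` is a placeholder
# for any completion backend (local GGUF model, API, etc.).

def call_model(prompt: str) -> str:
    """Stand-in for an LLM call; a real system would query a model here."""
    return f"[model output for: {prompt[:40]}...]"

def recursive_refine(task: str, rounds: int = 3) -> str:
    memory: list[str] = []  # external memory of prior drafts
    draft = call_model(f"Solve the task:\n{task}")
    for _ in range(rounds):
        memory.append(draft)
        critique = call_model(
            f"Task:\n{task}\n\nDraft:\n{draft}\n\nList concrete flaws."
        )
        draft = call_model(
            f"Task:\n{task}\n\nEarlier drafts:\n" + "\n---\n".join(memory)
            + f"\n\nCritique:\n{critique}\n\nWrite an improved answer."
        )
    return draft

if __name__ == "__main__":
    print(recursive_refine("Write a function that parses ISO-8601 dates."))
```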
Agentic AI & Code Systems
We design models that behave like software engineers (a repair-loop sketch follows this list):
- Tool-using workflows
- Code generation + verification
- Diff-based patching systems
- Test-driven reasoning ("tests-as-truth")
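The "tests-as-truth" idea can be pictured as a small repair loop in which the test suite is the ground truth and the agent keeps proposing and applying diffs until it passes. This is a minimal sketch assuming pytest and git are available; `propose_patch` is a hypothetical stand-in for a model-backed coding agent.

```python
# "Tests-as-truth" repair loop: run the suite, feed failures to an agent,
# apply its diff, and repeat until the tests pass or the budget runs out.
import subprocess

def run_tests(repo_dir: str) -> tuple[bool, str]:
    """Run the project's test suite and return (passed, combined output)."""
    proc = subprocess.run(
        ["python", "-m", "pytest", "-q"],
        cwd=repo_dir, capture_output=True, text=True,
    )
    return proc.returncode == 0, proc.stdout + proc.stderr

def propose_patch(failure_log: str) -> str:
    """Placeholder: ask a coding model for a unified diff fixing the failures."""
    return ""  # a real agent would return a diff here

def apply_patch(repo_dir: str, diff: str) -> None:
    """Apply a unified diff via `git apply` (no-op when the diff is empty)."""
    if diff:
        subprocess.run(["git", "apply", "-"], cwd=repo_dir, input=diff, text=True)

def repair_loop(repo_dir: str, max_iters: int = 5) -> bool:
    for _ in range(max_iters):
        passed, log = run_tests(repo_dir)
        if passed:
            return True
        apply_patch(repo_dir, propose_patch(log))
    return run_tests(repo_dir)[0]
```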
High-Signal Dataset Engineering
Our datasets are designed as training environments, not just corpora (an example trace record appears after this list):
- Python + software engineering datasets
- Agentic reasoning traces
- Structured evaluation benchmarks
- Synthetic multi-domain reasoning corpora
- Complex technical and historical text mixtures
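To make "agentic reasoning traces" concrete, here is a hypothetical JSON Lines record for a single trace; the field names are assumptions made for this sketch, not the actual WithinUsAI dataset schema.

```python
# Illustrative JSONL record for one agentic reasoning trace (hypothetical schema).
import json

record = {
    "task": "Fix the off-by-one error in pagination.",
    "trace": [
        {"step": "plan",   "content": "Locate the slicing logic in views.py."},
        {"step": "tool",   "name": "run_tests", "result": "2 failed, 14 passed"},
        {"step": "patch",  "diff": "--- a/views.py\n+++ b/views.py\n..."},
        {"step": "verify", "result": "16 passed"},
    ],
    "labels": {"domain": "python", "verified": True},
}

# Append the record as one line of a JSON Lines file.
with open("traces.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```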
Efficient AI Deployment
We prioritize systems that can actually run and iterate (a local-inference sketch follows this list):
- GGUF / llama.cpp ecosystems
- Low-cost inference pipelines
- Multi-GPU & TPU optimized training workflows
- Fast experimental cycles over large-scale compute
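For context, this is roughly what low-cost local inference over a GGUF checkpoint looks like with llama-cpp-python; the model path is hypothetical, and any GGUF file loads the same way.

```python
# Minimal local-inference sketch with llama-cpp-python over a GGUF file.
from llama_cpp import Llama

llm = Llama(
    model_path="models/coder-4b.Q4_K_M.gguf",  # hypothetical local path
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU when one is available
)

out = llm.create_completion(
    "Write a Python function that reverses a linked list.",
    max_tokens=256,
    temperature=0.2,
)
print(out["choices"][0]["text"])
```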
Model Engineering & Transformation
A core part of WithinUsAI research is model transformation rather than just training.
Fine-Tuning & Training LLMs
We design and execute the following (a LoRA setup is sketched after the list):
- Instruction tuning pipelines
- Domain-specific adaptation
- Reasoning and coding specialization training
- Dataset-driven behavioral shaping
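A minimal sketch of what such a pipeline can look like, assuming Hugging Face transformers and peft with LoRA adapters; the base model is only an example, and the training loop itself is omitted to keep the sketch short.

```python
# LoRA-based instruction-tuning setup (sketch): load a base model, attach
# low-rank adapters, and train only those adapter weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "Qwen/Qwen2.5-0.5B-Instruct"  # example base model, not a fixed choice
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # adapt only the attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only a small fraction of weights will train
```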
Merging LLMs
We explore the following (a weight-merging sketch follows the list):
- Weight merging techniques
- Architecture blending across model families
- Behavior fusion between reasoning + coding models
- Cross-model capability transfer
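The simplest weight-merging technique is linear interpolation of two state dicts from the same architecture; methods such as SLERP, TIES, or DARE follow the same per-tensor pattern. A minimal sketch, with hypothetical file paths:

```python
# Linear weight merging between two checkpoints sharing one architecture.
import torch

def linear_merge(state_a: dict, state_b: dict, alpha: float = 0.5) -> dict:
    """Return alpha * A + (1 - alpha) * B, tensor by tensor."""
    merged = {}
    for key, tensor_a in state_a.items():
        tensor_b = state_b.get(key)
        if tensor_b is not None and tensor_b.shape == tensor_a.shape:
            merged[key] = alpha * tensor_a + (1.0 - alpha) * tensor_b
        else:
            merged[key] = tensor_a  # fall back to model A on mismatched keys
    return merged

# Usage (paths are hypothetical):
# sd_a = torch.load("reasoner.pt", map_location="cpu")
# sd_b = torch.load("coder.pt", map_location="cpu")
# torch.save(linear_merge(sd_a, sd_b, alpha=0.6), "merged.pt")
```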
Mixture of Experts (MoE) Model Merging
We develop and experiment with:
- Sparse expert routing systems
- MoE model merging strategies
- Expert specialization for coding, reasoning, and tool use
- Compute-efficient activation-based intelligence
This allows us to build systems where different "parts of intelligence" activate only when needed.
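The mechanism behind that selective activation is sparse top-k routing: a router scores experts per token and only the top-k experts run. The PyTorch sketch below illustrates the general idea, not a specific WithinUsAI architecture.

```python
# Minimal top-k mixture-of-experts layer: route each token to its 2 best experts.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)       # per-token expert scores
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:    # x: (tokens, d_model)
        scores = self.router(x)                             # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)          # keep the top-k experts
        weights = F.softmax(weights, dim=-1)                # normalize over selected
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                    # tokens routed here
                if mask.any():
                    w = weights[mask, slot].unsqueeze(-1)   # (n_selected, 1)
                    out[mask] += w * expert(x[mask])
        return out

# Example: 16 token embeddings of width 64, 8 experts, 2 active per token.
moe = TopKMoE(d_model=64)
print(moe(torch.randn(16, 64)).shape)   # torch.Size([16, 64])
```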
Flagship Work
Genesis AI Code Series
Progressive dataset scaling initiative:
- Demo → 10K → 50K → 100K
- Designed for frontier coding agent training
Core Experimental Systems
- GODs.Ghost.Codex.XI (recursive architecture lineages)
- MoE sparse reasoning models
- Agentic coding frameworks
- Recursive seed AI prototypes
Model Ecosystem
WithinUsAI develops interconnected model families:
Reasoning Models
- Long-context reasoning systems
- Uncensored experimental variants
- Structured inference models
Coding Models
- 0.4B–8B coding systems
- MoE-based efficient coders
- LLaMA, Qwen, Gemma-based derivatives
Agentic Systems
- Hermes-style structured agents
- Claude/Gemini-inspired hybrid agents
- Space-agent reasoning architectures
Vision
We are working toward a new category of AI: systems that do not just predict text, but recursively construct better versions of themselves.
The future is not one model. It is a network of evolving, specialized intelligence systems working together.
Featured Projects
- GODs.Ghost.Codex.XI: recursive architecture framework
- PythonGOD-25k: high-density coding dataset
- MoE Efficient Coders: sparse expert systems
- Genesis AI Code Series: scalable reasoning dataset pipeline
Acknowledgements & Shout-Outs
WithinUsAI extends its sincere gratitude to the entire open-source community and the major providers who make this research possible. Thank you for letting us experiment with your foundational models, platforms, and datasets!
A special shout-out to:
- Google (DeepMind ecosystems)
- OpenAI
- Meta AI
- Microsoft
- IBM
- NVIDIA
- xAI
- Alibaba
- Mistral AI
- DeepSeek
- Anthropic
- Amazon (AWS AI / Bedrock ecosystem)
- Hugging Face
- BigCode
- Nous Research