AI & ML interests

Agentic AI · Code LLMs · Evaluation / SWE-bench · Tool Calling · Secure Code Generation · Mixture-of-Experts (MoE) · Continual Learning · MLOps / CI Integration · Recursive Seed AI


Frontier AI Systems for Agentic & Self-Evolving Intelligence

WithinUsAI is an independent AI research organization building beyond traditional machine learning pipelines. We design systems that do not just generate outputs: they think, construct, verify, and recursively improve through structured experience.

Our work spans:

  • High-signal datasets
  • Agentic coding systems
  • Recursive intelligence architectures
  • Evaluation-driven AI engineering
  • Model transformation and synthesis

🔬 Core Vision

We believe traditional large language models are approaching structural limits in their ability to learn, adapt, and evolve. Instead of treating intelligence as static, we explore Developmental Autopoiesis: AI systems that continuously evolve through recursion, memory, and self-generated experience.

This shifts AI from:

  • static training → continuous adaptation
  • single-pass inference → recursive cognition loops
  • scaling parameters → designing learning systems

βš™οΈ Research Focus

πŸ” Recursive Intelligence Systems

We build architectures that simulate self-improving cognition through:

  • Recursive Seed AI systems (TRM-style models)
  • External memory indexing frameworks
  • Self-reinforcing computation loops
  • Noogenesis.Concordia.Mind.XI experimental architecture
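As an illustration, the self-reinforcing loop pattern above can be sketched as a propose–verify–refine cycle backed by an external memory of past attempts. The `propose` and `verify` functions and the memory layout here are hypothetical toy stand-ins, not WithinUsAI code.

```python
# Minimal sketch of a self-reinforcing computation loop: propose a
# candidate, verify it, index the attempt in external memory, and
# feed past failures back into the next proposal.

def propose(task: str, memory: list[dict]) -> int:
    """Toy proposer: next guess is one more than the largest failed guess."""
    failures = [m["candidate"] for m in memory if not m["ok"]]
    return max(failures, default=0) + 1

def verify(task: str, candidate: int) -> bool:
    """Toy verifier: the 'task' is to find the number 3."""
    return candidate == 3

def recursive_loop(task: str, max_iters: int = 10):
    memory: list[dict] = []           # external memory index of all attempts
    for _ in range(max_iters):
        candidate = propose(task, memory)
        ok = verify(task, candidate)
        memory.append({"candidate": candidate, "ok": ok})
        if ok:
            return candidate, memory  # verified result plus its trace
    return None, memory

answer, trace = recursive_loop("find 3")
```

The point of the sketch is that the verifier, not the proposer, decides when the loop stops, and every attempt is indexed so later proposals can condition on it.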

💻 Agentic AI & Code Systems

We design models that behave like software engineers:

  • Tool-using workflows
  • Code generation + verification
  • Diff-based patching systems
  • Test-driven reasoning ("tests-as-truth")
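The "tests-as-truth" idea can be sketched as follows: a candidate implementation is accepted only if it passes the task's test suite. The candidate list below stands in for patches a coding model might propose; it is an illustration, not the production pipeline.

```python
# Hedged sketch of test-driven acceptance: the test suite is the
# ground truth, and the first candidate that satisfies it wins.

def run_tests(fn) -> bool:
    """Test suite for the (toy) task: absolute value."""
    cases = [(-3, 3), (0, 0), (5, 5)]
    try:
        return all(fn(x) == want for x, want in cases)
    except Exception:
        return False  # a crashing candidate fails verification

# Candidate "patches" a model might propose.
candidates = [
    lambda x: x,                     # wrong: fails for negatives
    lambda x: -x,                    # wrong: fails for positives
    lambda x: x if x >= 0 else -x,   # correct
]

accepted = next((fn for fn in candidates if run_tests(fn)), None)
```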

📚 High-Signal Dataset Engineering

Our datasets are designed as training environments, not just corpora:

  • Python + software engineering datasets
  • Agentic reasoning traces
  • Structured evaluation benchmarks
  • Synthetic multi-domain reasoning corpora
  • Complex technical and historical text mixtures
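To make "training environments, not just corpora" concrete, one way to store an agentic reasoning trace is as a structured JSONL record with a validation gate. The field names here are illustrative assumptions, not the actual WithinUsAI schema.

```python
# Sketch of one possible record layout for an agentic reasoning
# trace dataset, plus a minimal validity check before inclusion.

import json

record = {
    "task": "Fix the off-by-one bug in range_sum",
    "trace": [
        {"role": "thought", "content": "The loop stops one element early."},
        {"role": "action", "content": "edit: range(n) -> range(n + 1)"},
        {"role": "observation", "content": "All tests pass."},
    ],
    "final_patch": "def range_sum(n): return sum(range(n + 1))",
    "verified": True,
}

REQUIRED = {"task", "trace", "final_patch", "verified"}

def validate(rec: dict) -> bool:
    """A record is usable only if all fields exist and it verified."""
    return REQUIRED <= rec.keys() and rec["verified"] is True

line = json.dumps(record)  # one JSONL line, as such corpora are often stored
```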

⚡ Efficient AI Deployment

We prioritize systems that can actually run and iterate:

  • GGUF / llama.cpp ecosystems
  • Low-cost inference pipelines
  • Multi-GPU & TPU optimized training workflows
  • Fast experimental cycles over large-scale compute
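A back-of-the-envelope memory estimate shows why GGUF quantization makes low-cost inference practical: weight bytes ≈ parameter count × bits per weight / 8. The effective bits-per-weight figures below are approximations for common llama.cpp schemes (block scales included), not exact values.

```python
# Rough VRAM/RAM estimate for quantized model weights. KV cache and
# activation memory are deliberately ignored in this sketch.

BITS_PER_WEIGHT = {
    "F16": 16.0,
    "Q8_0": 8.5,    # approximate, includes per-block scales
    "Q4_K_M": 4.8,  # approximate
}

def weight_gib(params_billions: float, quant: str) -> float:
    """Estimated weight footprint in GiB for a given quantization."""
    bits = BITS_PER_WEIGHT[quant]
    return params_billions * 1e9 * bits / 8 / 2**30

size_8b_q4 = weight_gib(8.0, "Q4_K_M")   # roughly 4.5 GiB
size_8b_f16 = weight_gib(8.0, "F16")     # roughly 15 GiB
```

By this estimate an 8B model drops from the F16 range (~15 GiB) to under 5 GiB at Q4_K_M, which is what makes single-GPU and CPU iteration cycles feasible.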

🧬 Model Engineering & Transformation

A core part of WithinUsAI research is model transformation rather than just training.

🧠 Fine-Tuning & Training LLMs

We design and execute:

  • Instruction tuning pipelines
  • Domain-specific adaptation
  • Reasoning and coding specialization training
  • Dataset-driven behavioral shaping
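The data-formatting step of an instruction-tuning pipeline can be sketched as rendering (instruction, response) pairs into single training strings. The `<|user|>`/`<|assistant|>` tags below are a generic illustration; a real run would use the base model's own chat template.

```python
# Sketch of instruction-tuning data preparation: raw pairs become
# one formatted training string each.

def format_example(instruction: str, response: str) -> str:
    return (
        "<|user|>\n" + instruction.strip() + "\n"
        "<|assistant|>\n" + response.strip()
    )

pairs = [
    ("Write a function that reverses a string.",
     "def rev(s):\n    return s[::-1]"),
]

training_texts = [format_example(i, r) for i, r in pairs]
```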

🔀 Merging LLMs

We explore:

  • Weight merging techniques
  • Architecture blending across model families
  • Behavior fusion between reasoning + coding models
  • Cross-model capability transfer
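The simplest weight-merging technique, linear interpolation of matched parameter tensors (a "model soup"), can be sketched in a few lines. Real merges (SLERP, TIES, DARE) are more involved; the tiny state dicts here are toy stand-ins.

```python
# Minimal sketch of linear weight merging: elementwise interpolation
# of two same-architecture checkpoints with mixing coefficient alpha.

def merge(a: dict, b: dict, alpha: float = 0.5) -> dict:
    assert a.keys() == b.keys(), "architectures must match"
    return {
        k: [alpha * x + (1 - alpha) * y for x, y in zip(a[k], b[k])]
        for k in a
    }

reasoner = {"layer.weight": [1.0, 2.0, 3.0]}
coder    = {"layer.weight": [3.0, 2.0, 1.0]}
merged = merge(reasoner, coder, alpha=0.5)
```

Varying `alpha` trades the two parents' behaviors against each other; behavior fusion in practice means sweeping it and evaluating on both reasoning and coding benchmarks.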

🧠 Mixture of Experts (MoE) Model Merging

We develop and experiment with:

  • Sparse expert routing systems
  • MoE model merging strategies
  • Expert specialization for coding, reasoning, and tool use
  • Compute-efficient activation-based intelligence

This allows us to build systems where different "parts of intelligence" activate only when needed.
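Sparse top-k routing, the mechanism behind that selective activation, can be sketched as: a gate scores every expert, only the top-k actually run, and their outputs are combined with softmax-renormalized weights. The experts here are toy functions standing in for expert subnetworks.

```python
# Sketch of sparse top-k expert routing with softmax over the
# selected gate scores only (the non-selected experts never run).

import math

def top_k_route(x: float, gate_scores: list[float], experts, k: int = 2):
    # Pick the k highest-scoring experts.
    ranked = sorted(range(len(gate_scores)),
                    key=lambda i: gate_scores[i], reverse=True)[:k]
    # Softmax restricted to the selected scores (sparse activation).
    exps = [math.exp(gate_scores[i]) for i in ranked]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Weighted combination of just the active experts' outputs.
    return sum(w * experts[i](x) for w, i in zip(weights, ranked))

experts = [lambda x: x + 1, lambda x: 2 * x, lambda x: -x]
y = top_k_route(3.0, gate_scores=[0.1, 2.0, -1.0], experts=experts, k=2)
```

With k=2 of 3 experts active, compute scales with k rather than the total expert count, which is the efficiency argument for MoE.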


🧠 Flagship Work

🔥 Genesis AI Code Series

Progressive dataset scaling initiative:

  • Demo → 10K → 50K → 100K
  • Designed for frontier coding agent training

🧬 Core Experimental Systems

  • GODs.Ghost.Codex.XI (recursive architecture lineages)
  • MoE sparse reasoning models
  • Agentic coding frameworks
  • Recursive seed AI prototypes

🤖 Model Ecosystem

WithinUsAI develops interconnected model families:

🧠 Reasoning Models

  • Long-context reasoning systems
  • Uncensored experimental variants
  • Structured inference models

💻 Coding Models

  • 0.4B → 8B coding systems
  • MoE-based efficient coders
  • LLaMA, Qwen, Gemma-based derivatives

🤖 Agentic Systems

  • Hermes-style structured agents
  • Claude/Gemini-inspired hybrid agents
  • Space-agent reasoning architectures


🌌 Vision

We are working toward a new category of AI: systems that do not just predict text but recursively construct better versions of themselves.

The future is not one model. It is a network of evolving, specialized intelligence systems working together.


📚 Featured Projects

  • GODs.Ghost.Codex.XI – recursive architecture framework
  • PythonGOD-25k – high-density coding dataset
  • MoE Efficient Coders – sparse expert systems
  • Genesis AI Code Series – scalable reasoning dataset pipeline

πŸ™ Acknowledgements & Shout-Outs

WithinUsAI extends its sincere gratitude to the entire open-source community and the major providers who make this research possible. Thank you for letting us experiment with your foundational models, platforms, and datasets!

A special shout-out to:

  • Google (DeepMind ecosystems)
  • OpenAI
  • Meta AI
  • Microsoft
  • IBM
  • NVIDIA
  • xAI
  • Alibaba
  • Mistral AI
  • DeepSeek
  • Anthropic
  • Amazon (AWS AI / Bedrock ecosystem)
  • Hugging Face
  • Big Code
  • Nous Research