This repo is

A combination of notebooks, scribbles, scripts, and potentially invalid assessments accumulated throughout my 5d research. If you go digging you will find the trail for most things in my research here, or in the geolip, geofractal, or crystal_lattice repos. However, you may not like what you have to dig through to find it.

Organization has an indefinite time cost; Clawd will solve that for us all soon anyway.

The most recent geovocab2 repo installs with geofractal.

This includes the geofractal router and the full analysis toolkit for high-complexity routed solutions. The documentation allows Claude to build rapidly and then somehow forget everything in minutes.

The structure houses a full router/tower/component architecture built for rapid prototyping: high-speed by default, without the hidden compilation faults in underlying mechanisms that many other architectures carry.

It also houses a full streaming, training, and compilation system.

Provides native, built-in ensemble compilation optimizations for branching blocks. These seem not to matter on Blackwell architectures, so the speed boost may be obsolete.

It is probably complex enough that you will get a headache trying to get Claude to fix a bug it does not know where to look for. Properly utilized, though, it provides massive speed boosts to research productivity. At least in my case; it won't be the same experience for everyone.

https://github.com/AbstractEyes/geofractal

Original geovocab repo

Still-useful utilities persist here in the factory, the symbolic caption synthesizers, and the formula basin. I do regret intermingling everything through prototypes; AI is terrible at handling that.

https://github.com/AbstractEyes/lattice_vocabulary

pytorch wide compilation system

The benchmarks are real. If you actually use it, it will improve your ensemble speeds by exceptional amounts without requiring compilation. If you compile, it provides additional speed.

It was meant to be an organ for larger systems, but I ran into major limitations with the pytorch compilation system that would have required a large series of experiments just to get valid numbers. So I decided to put it on hold and wait for more pytorch optimizations.

This is meant to be an organ for geofractal's multi-tower multi-router ensemble communication system.
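
For context, the core trick a wide compiler exploits (as I understand this repo's goal) is fusing the matmuls of N parallel ensemble branches into one batched kernel. A minimal sketch of the idea in plain pytorch, not this repo's API:

    import torch

    N, B, d_in, d_out = 8, 32, 64, 64
    weights = torch.randn(N, d_out, d_in)   # one weight matrix per branch
    x = torch.randn(B, d_in)                # shared input to every branch

    # Naive: N separate matmuls, one kernel launch each.
    naive = torch.stack([x @ w.T for w in weights])   # (N, B, d_out)

    # Wide: a single batched contraction over all branches at once.
    wide = torch.einsum('bi,noi->nbo', x, weights)    # (N, B, d_out)

    assert torch.allclose(naive, wide, atol=1e-5)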

https://github.com/AbstractEyes/pytorch-parallel-compiler

Potentially a dead repo, but maybe not.

It's redundant now that geolip is essentially a branching series of losses. geolip will be folded into the geofractal router when the time comes, but not yet.

https://github.com/AbstractEyes/glip-autoencoder

SD15 geolip patchwork experiments and FiLM LoRA

Highly effective at... well, fixing garbage. It can be tuned to handle artifacts fairly easily, but the niche is tight for a flow-matching SD15 solution, so it is a research artifact rather than a production-grade system. The system contains a valid ksimplex channeling spectrum that provides rapid convergence to deep-complexity noise trajectory prediction.

https://github.com/AbstractEyes/sd15-flow-trainer

Conclusions from long-standing 5d research

The conclusions below are forged in steel. You can try to disprove them at any point and I will run your experiments; I encourage it.

Teach me, and I will expand my knowledge into everything required to know why something works, until there is no more reasoning left in the logic and only order remains.

3/15/2026 Concussive Force and initial push

This is blunt-force trauma to my research and to many adjacent researchers' work; take it with a grain of salt and prove me wrong.

After nearly 500 experiments on 5d shapes, I have arrived at one powerful set of conclusions.

MANY of them cull entire projects of mine because of the implications of the mechanisms involved.

Some projects have instead... expanded, such as the geometric vocabulary patchwork. Its current form is the anchor bank alignment, and it will evolve over time.

Conclusion 1

Using correctly formatted geometry not only speeds up convergence, but it also reduces computation overhead overall.

Outcome of experiments

After thousands of experiments in geometric induction training, added to the recent barrage, I can definitively say a few things about this topic.

Though "a few" is an understatement that does not do justice to the effort, nor the attention to detail, required to make sure those few actually held their form and shape.

Full spectral utility is still imperfect

This geolip structure STILL requires the following for FAIR optimization, without exception:

  • Requires a spectral optimizer.
    • An autograd forward/backward is included in the system to emulate internal Procrustes alignment; it still requires customization per task.
    • Not an architectural replacement, but a band-aid.
  • Requires spatially aware geometric regularization: no transformers worked, no linears worked, no channel spectrums worked.
    • A CV loss is packaged with the system, and it comes with Cayley-Menger validation (see the sketch after this list).
  • Requires a compacting loss series to align to the task, but not a special set of losses. You can use your own losses.
    • The losses are your task; in the geolip architecture, the task is to analyze collapsing geometry.
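
A minimal sketch of what I mean by CV loss plus Cayley-Menger validation, assuming CV is the coefficient of variation of a simplex's edge lengths; the function names and weights are illustrative, not the packaged geolip API:

    import math
    import torch

    def cayley_menger_vol2(points: torch.Tensor) -> torch.Tensor:
        # Squared volume of the simplex spanned by `points` (k+1, d) via the
        # Cayley-Menger determinant; non-negative for geometrically valid shapes.
        k = points.shape[0] - 1                       # k = 4 for a pentachoron
        m = torch.ones(k + 2, k + 2, dtype=points.dtype, device=points.device)
        m[0, 0] = 0.0
        m[1:, 1:] = torch.cdist(points, points) ** 2  # pairwise squared distances
        return ((-1) ** (k + 1) / (2 ** k * math.factorial(k) ** 2)) * torch.det(m)

    def cv_loss(points: torch.Tensor) -> torch.Tensor:
        # Coefficient of variation of the edge lengths: 0 for a regular simplex.
        d = torch.pdist(points)
        return d.std() / d.mean().clamp_min(1e-12)

    def geometric_regularizer(points: torch.Tensor, w_cm=1.0, w_cv=1.0):
        # Hinge on negative squared volume (degenerate geometry) plus edge CV.
        return w_cm * torch.relu(-cayley_menger_vol2(points)) + w_cv * cv_loss(points)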

Orthogonal training is very very fast

If you prepare the orthogonal training correctly, you're already done training before you begin.

  • Incorrect alignment will cause drift.
  • Drift causes misalignment down the hypersphere manifold.
  • Misalignment causes monotone and quiet differentiation.
  • Quiet differentiation is essentially treated as noise and optimized out, requiring autograd curation (a minimal drift probe is sketched below).
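
An illustrative drift probe, not a geolip utility: track how far a weight's Gram matrix departs from the identity during training.

    import torch

    def orthogonality_drift(w: torch.Tensor) -> torch.Tensor:
        # ||w^T w - I||_F: zero when the columns of `w` are orthonormal,
        # growing as the drift described above accumulates.
        eye = torch.eye(w.shape[1], dtype=w.dtype, device=w.device)
        return torch.linalg.norm(w.T @ w - eye)

    w = torch.linalg.qr(torch.randn(256, 64)).Q     # perfectly orthogonal start
    print(orthogonality_drift(w))                   # ~0
    print(orthogonality_drift(w + 1e-2 * torch.randn_like(w)))  # nonzero drift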

Conclusion 2

Correctly formatted losses, even without the unique backprop, can be applied to traditional linear and transformer structures, with or without attention.

Multiple experiments show that this is nearly instant

  • The overfitting is instant; you can see the outcome perfectly learn and fail.
  • The compression is exceptionally effective at making sure your system collapses downward.
  • Weight decay slowly rounds edges you need rigid and preserves rigidity you need round, and the CV spikes can be seen instantly.
  • Task alignment is a downstream process that is most effective when explicitly learned, not implicitly learned.

If the resonance aligns, the data is retained and provides utility within 3 epochs at 100% R1 recall (a minimal R1 sketch follows below).

If the resonance does not align, the data requires additional measures. No amount of regularization will teach misaligned data in a way that can be perfectly recalled.
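
For reference, R1 recall as I use it is top-1 retrieval of each sample's own paired key; a minimal sketch with hypothetical names, matching row i of the queries to row i of the keys:

    import torch
    import torch.nn.functional as F

    def r1_recall(queries: torch.Tensor, keys: torch.Tensor) -> float:
        # Fraction of queries whose nearest key (by cosine similarity)
        # is their own paired key.
        q = F.normalize(queries, dim=-1)
        k = F.normalize(keys, dim=-1)
        nearest = (q @ k.T).argmax(dim=-1)
        return (nearest == torch.arange(len(q))).float().mean().item()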

Conclusion 3

Rigid first is an incorrect paradigm that we all apply to many AI shapes.

Training rigidity first and smoothing later causes the smoothing effect to rupture

  • Ruptured rigidity causes cascade unlearning, internal functional collapse, and differential bias that cannot be unlearned.
  • Alignment faults appear when training with transformers, as the transformers slowly rupture the internals of rigidity into formations that deviate from geometric shape.
  • The CV rapidly shoots up and falls during training, a symptom of internal functional instability and a core fault of internalized failures.

Smooth first, rigid later

  • Prepare the smooth manifold, train the alignment, train the CV, anchor it, freeze it. Near-perfect convergence if it is done correctly (a skeleton of this schedule is sketched after the list).
  • Smooth prepared first allows rigid imposition to generate later rather than sooner.
  • Linear algebra and geometric alignment should be predominantly LINEAR and not non-Euclidean for a hypersphere.
  • This is still a recipe, not a blueprint. There is no quick fix yet.
  • The answer is in the process, the task, and the expected outcome.
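
A skeleton of that schedule, with hypothetical stage losses and model hooks (model.stage_loss, model.anchor_, and model.geometry_parameters are stand-ins, not a real API); only the ordering, smooth manifold then alignment then CV then anchor and freeze, is the claim here:

    import torch

    def train_smooth_first(model, loader, opt, epochs_per_stage=3):
        # Train the smooth structure before imposing anything rigid.
        for stage in ("smooth_manifold", "alignment", "cv"):
            for _ in range(epochs_per_stage):
                for x, y in loader:
                    loss = model.stage_loss(stage, x, y)  # hypothetical per-stage loss
                    opt.zero_grad()
                    loss.backward()
                    opt.step()
        model.anchor_()                        # anchor the learned geometry
        for p in model.geometry_parameters():  # then freeze it: rigid comes last
            p.requires_grad_(False)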

Conclusion 4

The pentachoron as a shape is a useful canary for failure detection, and bad at almost every specific task overall.

FP64 is required for proper 5d

  • Without fp64, the 5d collapses into noise before Mandelbrot hits a point of utility, Julia faults before reaching cohesion, and systems akin to fractal detectors cannot learn to differentiate structural boundaries within a reasonable amount of time: 800+ ms rather than 0.003 ns for cosine similarity.
  • There is no quick fractal fix to repair faulty pentachoron math. The hardware simply does not allow capturing this as useful data; it is rounded to noise and then sliced off at the losses.
  • Useful detection of this is based on the Cayley-Menger sampling, which is meant to capture a fraction of the whole and form cohesive shapes. None of these are ever allowed to be negative. That is the most robust utility, and it provides direct regularization validity for faulty differentiation accumulation (a precision demo follows this list).
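
A quick way to see the precision failure, reusing the Cayley-Menger sketch from earlier; the near-degenerate pentachoron construction is illustrative:

    import math
    import torch

    def cm_vol2(points):
        # Squared simplex volume via the Cayley-Menger determinant
        # (same construction as the earlier sketch).
        k = points.shape[0] - 1
        m = torch.ones(k + 2, k + 2, dtype=points.dtype)
        m[0, 0] = 0.0
        m[1:, 1:] = torch.cdist(points, points) ** 2
        return ((-1) ** (k + 1) / (2 ** k * math.factorial(k) ** 2)) * torch.det(m)

    # Nearly degenerate pentachoron: four unit-basis vertices plus an apex
    # sitting ~1e-4 off their common hyperplane, so the true squared volume
    # is tiny but strictly positive.
    base = torch.eye(4, dtype=torch.float64)
    apex = torch.full((1, 4), 0.25, dtype=torch.float64)
    apex = apex + 1e-4 * torch.randn(1, 4, dtype=torch.float64)
    pts = torch.cat([base, apex])

    print(cm_vol2(pts))           # fp64: small positive value (valid simplex)
    print(cm_vol2(pts.float()))   # fp32: frequently zero or negative, i.e. noise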

FP128 is required for proper 6d

  • Experiments with Cantor stairs show that 6d is noise without direct integer accumulation processes.
  • I have no fix for this other than following rigid detection and smooth manifold attenuation.

KSimplex is a perfect manifold curation tool for iteration

  • This allows direct sequential utilization of pattern detection, and it's insanely powerful for multi-token prediction, at a computational price (see the lookahead sketch after this list).
  • By looking ahead, your model learns how to predict further out, putting strain and stress on the internal structure, which taxes dimensional space in unknown and unmeasured ways.
  • This defeats RoPE in long-sequence selection, but not in comparison to huge-sequence Ulysses attention batching. Different beasts: Ulysses has a legitimate hardware solution, while Cantor fractal attention is a stopgap for OOM, with O(N+1) sequence attention under topological constraints.
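
The lookahead objective behind that computational price, in its plain form; shapes and names are hypothetical, and this is the generic multi-token loss, not the ksimplex mechanism itself:

    import torch
    import torch.nn.functional as F

    def lookahead_loss(logits_k, targets, k):
        # logits_k: (B, T, k, V); position t predicts tokens t+1 .. t+k at once.
        _, T, _, V = logits_k.shape
        loss = 0.0
        for step in range(1, k + 1):
            pred = logits_k[:, : T - step, step - 1]   # heads for token t+step
            tgt = targets[:, step:]                    # ground truth shifted ahead
            loss = loss + F.cross_entropy(pred.reshape(-1, V), tgt.reshape(-1))
        return loss / k                                # k forward targets per position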

Conclusion 5

Topology should be represented as a superposition of two states, rigid and smooth, rather than one or the other.

  • The experiments all lean towards hypersphere attenuation, with rigid alignment within those constraints.
  • The outcomes show positive growth and near-instant learning with R1 validation recall if aligned correctly.
  • Topology is an internal functional differentiation mechanism that collapses to noise rapidly if trained incorrectly.
  • Superposition alignment on topology is effectively anchorable and reproducible.
  • The complexity of smooth far outweighs the usefulness of multispectral analysis; it simply means more for less cost when sampled as a smooth structure with rigid structural behavior.
  • The simplicity and speed of rigid sampling far outweighs the benefits of immediate overanalysis of diffusion and prediction.
  • Fusing the two gives the smooth surface a rigid sampling process, essentially enabling the progressive alignment mechanisms to function within tolerance (a toy blend follows this list).
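
A toy version of that fusion, assuming the rigid state is an anchor bank and the smooth state is a hypersphere embedding; the names and the blend itself are illustrative, not the geofractal mechanism:

    import torch
    import torch.nn.functional as F

    def superposed_sample(smooth_emb, rigid_anchors, alpha=0.5):
        # Blend each smooth on-sphere embedding with its nearest rigid anchor,
        # then renormalize back onto the hypersphere.
        smooth = F.normalize(smooth_emb, dim=-1)
        anchors = F.normalize(rigid_anchors, dim=-1)
        nearest = anchors[(smooth @ anchors.T).argmax(dim=-1)]
        return F.normalize(alpha * nearest + (1 - alpha) * smooth, dim=-1)

    out = superposed_sample(torch.randn(16, 64), torch.randn(32, 64))  # unit-norm rows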

Continuum topology is powerful, but highly corrosive.

  • The outcome of utilizing overlapping or singular infinite composite topology defined through bias is both the most powerful tool in my arsenal and one of the most difficult and useless ones.
  • This is a 5th-dimensional solution that does not have a tuning fork yet, so the outcomes have shown high-yield utility when they align, and low-yield fault and failure more often than alignment.
  • The topological mapping system requires additional geometric boundaries, which we are currently researching.