
The Geometry of Everything: A Research Synthesis from Riemann to Transformers

Research · AI/ML · Geometry · Neuroscience · Physics

Over the past several months I've been working on a research collection project called The Geometry of Everything: From Riemann to Einstein to Consciousness. It's now live as an interactive site at the-geometry-of-everything.vercel.app, and this post is a short tour of what it is, why I built it, and what I think is interesting about the result.

The One-Sentence Version

Bernhard Riemann's 1854 habilitation lecture introduced a radical idea: geometry is not a fixed background — it is shaped by what lives inside it. The synthesis argues that this single insight reappears, with the same mathematical structure, in three places that are usually treated as unrelated:

  1. General relativity, where mass-energy curves spacetime and matter follows geodesics.
  2. Transformer neural networks, where token content shapes attention geometry and "thoughts" trace paths through a learned manifold.
  3. The brain, where neural population activity organizes onto low-dimensional curved manifolds whose geometry tracks cognition and consciousness.

The recurring slogan: content curves the space it lives in, and the curved space guides the content's motion.

Why Build This as a Research Collection

I didn't set out to write a paper; I set out to map a question. There's a wave of recent results (mostly 2024–2026) that individually look like clever applications of differential geometry to ML or neuroscience. Taken together, they start to feel less like analogies and more like the same machinery showing up in different costumes.

The project is structured as a multi-agent research synthesis: roughly 137 sections across 32 parts, with over 250 referenced papers. The agents handled literature retrieval, cross-domain matching, and draft synthesis; I handled curation, the central thesis, and the editorial spine. It's the kind of project that wouldn't have been tractable solo even two years ago.

The Mapping That Held Up

The clearest result is a direct cross-domain dictionary. The same mathematical objects show up in all three fields:

| Concept | Physics | Machine Learning | Neuroscience |
| --- | --- | --- | --- |
| Underlying space | 4D spacetime manifold | Data / latent manifolds | Neural state-space manifolds |
| Metric tensor | Gravitational potential | Fisher information matrix | Neural metric |
| Curvature | Gravity | Loss-landscape topology | Prediction error |
| Trajectories | Geodesics / free fall | Natural-gradient flow | Thought streams |

This isn't a metaphor. The Fisher information metric used in natural-gradient descent is, formally, the same kind of object as the metric tensor in general relativity. Geodesic sharpness — a recent measure of generalization that respects transformer symmetries — recovers signal that flat sharpness measures lose. RiemannFormer (2025) shows that rotary position embeddings (RoPE) are a special case of Riemannian attention in flat space.
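To make the metric-tensor column of that dictionary concrete, here's a minimal numpy sketch of natural-gradient descent on a toy logistic model. Everything in it (the data, the step size, the damping constant) is invented for illustration, and it's not how any of the cited papers implement things, but it shows the Fisher matrix doing exactly the job of a metric: distances are measured in distribution space, and the update preconditions the Euclidean gradient by the inverse metric.

```python
import numpy as np

# Minimal sketch: natural-gradient descent for a toy logistic model.
# The Fisher information matrix plays the role of the metric tensor g:
# it measures distances between model *distributions*, not raw parameters.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                        # hypothetical inputs
noise = 0.5 * rng.normal(size=200)
y = (X @ np.array([1.0, -2.0, 0.5]) + noise > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_and_fisher(w):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / len(X)                    # Euclidean gradient of the NLL
    F = (X * (p * (1 - p))[:, None]).T @ X / len(X)  # Fisher: E[p(1-p) x x^T]
    return grad, F

w = np.zeros(3)
for _ in range(100):
    grad, F = grad_and_fisher(w)
    # Natural gradient: precondition by the inverse metric, g^{-1} grad.
    # The small damping term keeps the solve well-posed.
    w -= 0.5 * np.linalg.solve(F + 1e-4 * np.eye(3), grad)

print("learned weights:", np.round(w, 2))
```

The `solve` against F plus damping is the whole trick: the same gradient, measured with a different metric, takes a different path.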

The Most Interesting Recent Findings

Three results from 2024–2026 surprised me enough that they're worth calling out individually:

1. Neural manifolds are real, not just a useful fiction

MARBLE (Nature Methods, 2025) demonstrated that biological neural population data lives on genuine curved manifolds with measurable Riemannian structure — not linear subspaces with curvature added for flavor. The dimensionality of these manifolds correlates with level of consciousness.
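MARBLE's actual pipeline is considerably richer than anything I can sketch here, but the basic observable (low effective dimensionality of population activity) has a standard linear proxy: the participation ratio of the covariance spectrum. The sketch below generates synthetic "population activity" from a known low-dimensional latent, so the right answer is known in advance; it illustrates the measurement, not any published result.

```python
import numpy as np

# Sketch: effective dimensionality of "neural population activity" via the
# participation ratio of the covariance spectrum. This is the standard
# *linear* proxy; MARBLE itself fits genuinely curved manifolds, which this
# simple measure cannot see. All data below is synthetic.

rng = np.random.default_rng(1)
T, N, d = 5000, 100, 3              # timepoints, neurons, true latent dimension

latent = rng.normal(size=(T, d))    # hypothetical low-dimensional dynamics
mixing = rng.normal(size=(d, N))    # random embedding into neuron space
rates = latent @ mixing + 0.1 * rng.normal(size=(T, N))

cov = np.cov(rates, rowvar=False)
eig = np.linalg.eigvalsh(cov)
pr = eig.sum() ** 2 / (eig ** 2).sum()   # participation ratio

print(f"ambient dimension: {N}, participation ratio: {pr:.1f}")  # close to d
```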

2. Learning is governed by Ricci flow

The same equation Perelman used to prove the Poincaré conjecture turns out to describe how feature geometry evolves during neural network training. Discrete Ricci flow on the loss landscape isn't an analogy here; it's the actual dynamics.
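For intuition about what Ricci flow does, here's the textbook continuum version in two dimensions, where for a conformally flat metric g = e^{2u}(dx² + dy²) the flow ∂g/∂t = -2 Ric reduces to a nonlinear heat equation on the conformal factor u. The ML papers work with discrete analogues on graphs and loss landscapes, not this PDE; the grid, the curvature bump, and the step size below are arbitrary illustration choices.

```python
import numpy as np

# Toy continuum Ricci flow in 2D. For a conformally flat metric
#   g = exp(2u) * (dx^2 + dy^2),
# Gaussian curvature is K = -exp(-2u) * Laplacian(u), and the Ricci flow
# dg/dt = -2 Ric reduces to du/dt = exp(-2u) * Laplacian(u): a nonlinear
# heat equation that smooths curvature away.

n, steps, dt = 64, 500, 0.002
x = np.linspace(-3.0, 3.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x)
u = np.exp(-(X**2 + Y**2))          # a curvature bump in the metric

def laplacian(f):
    lap = np.zeros_like(f)
    lap[1:-1, 1:-1] = (f[2:, 1:-1] + f[:-2, 1:-1] +
                       f[1:-1, 2:] + f[1:-1, :-2] - 4.0 * f[1:-1, 1:-1]) / h**2
    return lap

K0 = np.abs(-np.exp(-2.0 * u) * laplacian(u)).max()
for _ in range(steps):
    u += dt * np.exp(-2.0 * u) * laplacian(u)
K1 = np.abs(-np.exp(-2.0 * u) * laplacian(u)).max()

print(f"max |K| before: {K0:.3f}, after: {K1:.3f}")  # curvature flattens
```

The punchline is the last line: Gaussian curvature decays under the flow, which is the smoothing behavior the training-dynamics results lean on.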

3. Neural networks rediscover spacetime

Trained only on boundary data, certain neural networks autonomously recover BTZ black-hole metrics — without being told anything about general relativity. Spacetime, in some narrow but real sense, is something a learning system can find on its own.

There's also growing evidence that grokking is a genuine phase transition with measurable critical exponents and Arrhenius-style activation barriers, and that scaling laws look like renormalization-group flows. The thermodynamic vocabulary keeps fitting too well to be coincidence.
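Operationally, "Arrhenius-style" means time-to-grok is exponential in some inverse-temperature-like knob (noise scale, weight decay, whatever a given paper varies), so plotting log t against 1/T gives a straight line whose slope is the barrier. The sketch below generates synthetic grokking times from that model and fits them back, purely to show the procedure; the numbers are not from any paper.

```python
import numpy as np

# What an "Arrhenius-style activation barrier" means operationally:
# if time-to-grok follows t = A * exp(E / T) for some temperature-like
# control knob T, then log(t) is linear in 1/T and the slope estimates E.
# The data below is synthetic, generated from that very model, so this
# only demonstrates the fitting procedure, not any published result.

rng = np.random.default_rng(2)
T = np.linspace(0.5, 2.0, 8)        # hypothetical "temperature" settings
E_true, A = 3.0, 50.0
t_grok = A * np.exp(E_true / T) * rng.lognormal(0.0, 0.05, size=T.size)

slope, intercept = np.polyfit(1.0 / T, np.log(t_grok), 1)
print(f"fitted barrier E = {slope:.2f} (ground truth {E_true})")
```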

Where the Synthesis Goes Out on a Limb

I want to be honest about what's solid and what isn't. The geometric story in physics is over a century old. The geometric story in ML is recent but increasingly empirical. The geometric story in consciousness is the most speculative part — even though Integrated Information Theory 4.0 explicitly treats experience as geometric shape, and even though the brain's manifold dynamics are now measurable, the leap from "consciousness has geometric correlates" to "consciousness is geometric" is still a leap.

The project doesn't try to hide this. The deepest synthesis sections (Parts IX–XV) are presented as a mathematically grounded framework for asking better questions, not as a finished theory.

Who Might Find This Useful

  • ML practitioners looking for interpretability tools that take the geometry of attention, generalization, and scaling seriously.
  • Neuroscientists who want a shared mathematical vocabulary for analyzing population activity and comparing it to artificial systems.
  • Theoretical physicists curious about the unexpected bridge between machine learning and emergent spacetime.
  • Anyone who has wondered why so many fields keep reinventing the same mathematical primitives under different names.

Read the Project

The full synthesis — all 32 parts, the cross-domain mapping, the references, and the deeper sections on consciousness and emergence — lives at the-geometry-of-everything.vercel.app. It's free, it's interactive, and it's designed to be read non-linearly — start wherever the table of contents pulls you.

If you read it and find a connection I missed, a paper that should be in there, or a place where the mapping breaks down, I'd love to hear from you. This is the kind of project that gets sharper with feedback.
