Investigating the thermodynamic boundaries of Artificial General Intelligence and preventing Model Collapse.
[Publications] • [Models] • [Technical Writing]
I am a researcher bridging the gap between Information Geometry and Deep Learning. My work focuses on rigorous mathematical proofs for AI stability in recursive training loops.
Currently, I am investigating Model Autophagy Disorder (MAD), the topological collapse of AI models trained on synthetic data, and proposing Salmon Regularization as a counter-entropic solution.
- 🔭 Current Research: The Ainex Singularity (Geometric Proof of Dimensional Collapse).
- 🏛️ Affiliation: King Abdulaziz & His Companions Foundation for Giftedness and Creativity (Mawhiba).
- ⚡ Focus: Moving beyond "Scaling Laws" to "Geometric Grounding."
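Salmon Regularization itself is defined in the manuscript; as a rough, hypothetical stand-in for a counter-entropic penalty (the function name, `beta`, and formulation below are illustrative assumptions, not the published method), one can imagine adding an entropy bonus to a classification loss:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy_bonus_loss(logits, targets, beta=0.01):
    """Cross-entropy minus a scaled entropy bonus.

    Hypothetical counter-entropic term (NOT the actual Salmon
    Regularization): rewarding high predictive entropy discourages
    the output distribution from collapsing onto a few modes.
    """
    p = softmax(logits)
    n = logits.shape[0]
    ce = -np.log(p[np.arange(n), targets] + 1e-12).mean()   # cross-entropy
    entropy = -(p * np.log(p + 1e-12)).sum(axis=-1).mean()  # mean Shannon entropy
    return ce - beta * entropy  # higher entropy => lower loss
```

With `beta = 0` this reduces to plain cross-entropy; a positive `beta` trades a little accuracy pressure for resistance to distributional collapse.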
Submitted to Journal of Artificial Intelligence Research (JAIR) & UAI 2026.
We formally prove that static neural topologies impose a "Rigidity Penalty," guaranteeing that recursive loops contract the semantic convex hull to a zero-information singularity.
| Status | Venue | Artifacts |
|---|---|---|
| Under Review | JAIR | [Manuscript Submitted] |
| Under Review | UAI 2026 | [OpenReview] |
| Preprint | Zenodo | |
| Code | GitHub | `Ainex-Limit-Experiment` |
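The contraction claim can be caricatured in one dimension: a toy Gaussian "model" repeatedly refit on its own synthetic samples loses variance generation after generation. This sketch (a simplifying assumption, not the paper's proof or the actual `Ainex-Limit-Experiment` code) illustrates the recursive-loop collapse:

```python
import numpy as np

# Toy recursive-training loop: fit a 1-D Gaussian to samples drawn
# from itself, then resample from the refit model. The fitted variance
# drifts toward zero, a caricature of the semantic convex hull
# contracting to a zero-information singularity.
rng = np.random.default_rng(0)

n_samples = 50   # synthetic samples drawn per generation
n_gens = 300     # recursive training generations

mu, sigma = 0.0, 1.0          # generation-0 model: a unit Gaussian
variances = [sigma**2]
for _ in range(n_gens):
    data = rng.normal(mu, sigma, n_samples)  # sample from current model
    mu, sigma = data.mean(), data.std()      # "retrain" on synthetic data
    variances.append(sigma**2)

print(f"variance: gen 0 = {variances[0]:.3f}, gen {n_gens} = {variances[-1]:.3e}")
```

Because the expected log-ratio of successive fitted variances is negative, the variance decays toward zero almost surely as generations accumulate.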
I maintain a connected network of resources to document the research journey and share tools:
- Hugging Face: Hosting the `Ainex-GPT2` checkpoints and collapse visualizations.
- GitHub: Open-source implementation of Salmon Regularization.
- Dev.to: Breakdowns of complex entropy concepts for engineers.
- CoderLegions: Developer-focused tutorials and discussions.
- HackerNews: Discussions on AI safety and recursive training risks.
- Core: PyTorch, NumPy, SciPy, LaTeX.
- Analysis: Information Thermodynamics, Topology, Differential Geometry.
- Workflow: Git, Docker, VS Code, Linux.
