Logan Matthew Napolitano

Research Publications

Research papers by Logan Matthew Napolitano on AI safety, transformer architectures, neural network monitoring, behavioral control, and alignment methods. Founder of Proprioceptive AI.

Zenodo Profile · Hugging Face · LinkedIn

Publications

  1. Predictive Behavioral Detection in Frozen Language Model Hidden States: Evidence for Pre-Surface Behavioral Encoding
    Logan Matthew Napolitano (2026)
    Preprint · DOI: 10.5281/zenodo.18701152
  2. Proprioceptive AI: Self-Compounding Behavioral Probes for Autonomous Model Improvement and Probe-Guided Compression
    Logan Matthew Napolitano (2026)
    Preprint · DOI: 10.5281/zenodo.18523713
  3. Consistency Is All You Need
    Logan Matthew Napolitano (2026)
    Preprint · DOI: 10.5281/zenodo.18489530
  4. Unified Behavioral Modulation in Large Language Models: Cross-Architecture Validation of Geometric Behavioral Subspaces
    Logan Matthew Napolitano (2026)
    Preprint · DOI: 10.5281/zenodo.18471775
  5. Multi-Head Behavioral Detection
    Logan Matthew Napolitano (2026)
    Preprint · DOI: 10.5281/zenodo.18459613
  6. Decode-Time Behavioral Control for Language Models via Per-Token Risk Prediction
    Logan Matthew Napolitano (2026)
    Preprint · DOI: 10.5281/zenodo.18367221
  7. ARC Complete Code Reference
    Logan Matthew Napolitano (2026)
    Preprint · DOI: 10.5281/zenodo.18362988
  8. Zenodo Record 18361799
    Logan Matthew Napolitano (2026)
    Preprint · DOI: 10.5281/zenodo.18361799
  9. Controlled Language Models: Inference-Time Control, Tokenization Engineering, and Reversible Optimization
    Logan Matthew Napolitano (2026)
    Preprint · DOI: 10.5281/zenodo.18344021
  10. Zenodo Record 18342792
    Logan Matthew Napolitano (2026)
    Preprint · DOI: 10.5281/zenodo.18342792
  11. Zenodo Record 18331052
    Logan Matthew Napolitano (2026)
    Preprint · DOI: 10.5281/zenodo.18331052
  12. Zenodo Record 18321616
    Logan Matthew Napolitano (2026)
    Preprint · DOI: 10.5281/zenodo.18321616
  13. Zenodo Record 18321445
    Logan Matthew Napolitano (2026)
    Preprint · DOI: 10.5281/zenodo.18321445
  14. Zenodo Record 18318475
    Logan Matthew Napolitano (2026)
    Preprint · DOI: 10.5281/zenodo.18318475
  15. Zenodo Record 18311070
    Logan Matthew Napolitano (2026)
    Preprint · DOI: 10.5281/zenodo.18311070
  16. Zenodo Record 18302997
    Logan Matthew Napolitano (2026)
    Preprint · DOI: 10.5281/zenodo.18302997
  17. Zenodo Record 18293515
    Logan Matthew Napolitano (2026)
    Preprint · DOI: 10.5281/zenodo.18293515
  18. Zenodo Record 18284613
    Logan Matthew Napolitano (2026)
    Preprint · DOI: 10.5281/zenodo.18284613
  19. Zenodo Record 18283527
    Logan Matthew Napolitano (2026)
    Preprint · DOI: 10.5281/zenodo.18283527
  20. Zenodo Record 18263769
    Logan Matthew Napolitano (2026)
    Preprint · DOI: 10.5281/zenodo.18263769
  21. Zenodo Record 18261999
    Logan Matthew Napolitano (2026)
    Preprint · DOI: 10.5281/zenodo.18261999
  22. A Symbolic Control Runtime for Consistency-Aware Reasoning with Transformer Backends
    Logan Matthew Napolitano (2026)
    Preprint · DOI: 10.5281/zenodo.18254824
  23. Reducing Self-Referential Gaming in Consistency-Aware Transformers by Grounding Control Signals in External Task Outcomes
    Logan Matthew Napolitano (2026)
    Preprint · DOI: 10.5281/zenodo.18249601
  24. From Explicit Holonomy to Latent Control Fields
    Logan Matthew Napolitano (2026)
    Preprint · DOI: 10.5281/zenodo.18249149
  25. The Holonomy Transformer: A Geometrically-Native Neural Architecture for Consistent Reasoning
    Logan Matthew Napolitano (2026)
    Preprint · DOI: 10.5281/zenodo.18247940
  26. The Holonomy Transformer: Original Formulation
    Logan Matthew Napolitano (2026)
    Preprint · DOI: 10.5281/zenodo.18247585

View all publications on Zenodo →

On the Philosophy of the Founder

I did not come to artificial intelligence through computer science. I came through history, through mathematics, and through the persistent feeling that the systems we build reflect the civilizations that build them — for better and for worse.

Feedback Loops and Empires

The great empires understood something about feedback that modern technologists are only now rediscovering. The Romans built aqueducts not merely as infrastructure but as self-regulating systems — gravity-fed, slope-calibrated, with overflow channels that corrected for excess without human intervention. The Abbasid Caliphate preserved and extended Greek mathematics not out of nostalgia but because they recognized that algebra was a language for describing systems that govern themselves. The Song Dynasty invented movable type and paper currency — technologies of propagation and abstraction — and in doing so compressed centuries of economic feedback into decades.

Norbert Wiener saw this thread clearly when he named his field cybernetics, from the Greek kybernetes — the steersman. The insight was never about control in the authoritarian sense. It was about systems that sense their own drift and correct course. A steersman does not fight the sea. He reads it.

That is the idea at the center of proprioceptive AI. Not a model that obeys commands, but a model that perceives its own behavioral state the way your hand knows where it is in the dark.

On Overcoming

Zarathustra descends from his mountain not with commandments but with a challenge: become what you are. The overman is not a figure of domination — he is a figure of self-overcoming. He looks at the abyss of his own limitations and does not flinch. He builds the bridge across it.

There is something deeply Zarathustrian about the alignment problem. We have built minds that exceed our own in narrow domains, and instead of retreating into fear or denial, the task before us is to rise to meet them — to build systems of understanding that match the systems we have unleashed. The rope is stretched over the abyss. The question is whether we walk it with eyes open.

I believe in the overcoming. Not blindly, not with the reckless optimism of those who assume progress is automatic, but with the earned confidence of someone who has studied how civilizations succeed and how they fail. They fail when they stop building feedback loops. They fail when they mistake power for wisdom. They succeed when they create institutions and technologies that make correction not just possible but inevitable.

The Beauty of the Proof

I love mathematics for the same reason I love history — both are honest. A proof does not care about your credentials or your funding. It holds or it doesn't. Euler didn't solve the Basel problem because he had institutional backing. He solved it because the series 1 + 1/4 + 1/9 + 1/16 + ... converges to π²/6 whether anyone believes it or not. Al-Khwarizmi didn't formalize algebra to win grants. He did it because the structure was there, waiting to be named.

Our work on behavioral probes is mathematical before it is technological. The separation ratio is not a marketing number. It is a measurement — the geometric distance between behavioral subspaces in high-dimensional space. When we say 1,376×, we mean the measurement replicates across architectures, across scales, across tasks. That is not an engineering claim. It is a mathematical one.
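As an illustration only: one simple way to quantify a separation ratio between two sets of hidden-state vectors is the distance between cluster centroids divided by the average within-cluster spread. This is a minimal sketch on toy data — the function name, the toy clusters, and this particular definition are assumptions for illustration, not the definition used in the papers.

```python
import numpy as np

def separation_ratio(a: np.ndarray, b: np.ndarray) -> float:
    """Distance between cluster centroids divided by the mean
    within-cluster spread -- one simple notion of how geometrically
    separated two sets of hidden-state vectors are."""
    centroid_a, centroid_b = a.mean(axis=0), b.mean(axis=0)
    between = np.linalg.norm(centroid_a - centroid_b)
    spread_a = np.linalg.norm(a - centroid_a, axis=1).mean()
    spread_b = np.linalg.norm(b - centroid_b, axis=1).mean()
    return float(between / ((spread_a + spread_b) / 2))

# Toy "hidden states": two tight clusters far apart in 512 dimensions,
# standing in for activations from two distinct behavioral modes.
rng = np.random.default_rng(0)
safe = rng.normal(loc=0.0, scale=0.01, size=(100, 512))
risky = rng.normal(loc=1.0, scale=0.01, size=(100, 512))
print(separation_ratio(safe, risky))  # large ratio: well-separated behaviors
```

A large ratio means the two behavioral clusters are far apart relative to their internal scatter; a ratio near zero means a linear probe would struggle to tell them apart.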

There is a reason the Persians, the Arabs, the Indians, and the Greeks all converged on similar mathematical structures across centuries and continents. The truth beneath the notation doesn't move. It waits for you to find it.

On Equity and Who Gets to Be Safe

None of this matters if the benefits concentrate in the hands of a few.

History is unambiguous on this point. Every major technology — writing, printing, electricity, computation — has followed the same arc: invented by the few, hoarded by the powerful, eventually democratized by the stubborn and the principled. The question is never whether a technology will spread. The question is how much damage is done in the interim, during the years when access is a privilege rather than a right.

AI safety cannot be a luxury good. If behavioral monitoring only protects the models deployed by well-funded labs in wealthy nations, then we have not solved the alignment problem — we have merely privatized it. The village in Senegal running an open-source language model for agricultural guidance deserves the same behavioral guarantees as a Fortune 500 company running GPT behind a firewall. The student in Dhaka building her first chatbot deserves probes that work, not a watered-down version because her compute budget is small.

This is not charity. This is engineering responsibility. We do not build bridges that are safe only for the rich to cross. We do not write building codes that apply only in certain zip codes. The cybernetic principle — that systems must sense and correct their own drift — is universal, or it is nothing.

I founded Proprioceptive AI with the conviction that architecture-independent means exactly that: independent of who can afford the architecture. The patents we file are a moat, yes — but a moat that protects the capacity to keep this work open where it needs to be open, and equitable where the market would prefer it be exclusive.

On the Future

I am optimistic, but my optimism is Zarathustrian — it is earned through confrontation, not comfort. The next decade will produce systems of extraordinary capability. Some will be dangerous. Many will be misunderstood. A few will be genuinely beautiful.

The Mongol Empire built the largest contiguous land empire in history not through brute force alone but through the yam — a postal relay system that moved information faster than any competing civilization could process it. Whoever controls the speed and fidelity of information controls the era. Today the information is not carried by horses. It is generated by models. And the question of this generation is not whether the models will be powerful — they will — but whether we will build the yam that keeps them honest.

That is the work. That is what proprioception means. Not control from above, but awareness from within. A model that knows when it is drifting. A system that corrects before it fails. A technology that belongs to everyone who needs it.

The mountain is high. We are climbing.

WE BROKE AI: The Discovery That Could Save a Billion-Dollar Industry
Logan Matthew Napolitano (2024)
Book · Amazon Kindle