The Age of Controllable AI Has Begun
1,376×
Peak Separation Ratio

AI That Knows Itself

We built artificial proprioception for neural networks. Models that can sense and correct their own behavioral problems in real-time. No retraining. No capability loss. Validated across architectures.

25+
Patents Filed
2
Architectures Validated
1
GPU Required
0.003%
Overhead

AI Systems Are Behaviorally Blind

Current LLMs have no idea what they're doing. They can't feel themselves hedging, repeating, or being sycophantic. They're flying blind.

Current Approaches

What Others Are Doing

Every major AI lab is fighting the same behavioral problems with inadequate tools.

  • RLHF costs millions and degrades capabilities
  • Post-generation filtering is wasteful and slow
  • Constitutional AI requires constant human oversight
  • Sampling tricks are behavior-agnostic guesswork
  • No real-time awareness of behavioral state
Our Solution

Proprioceptive AI

We gave AI systems the ability to sense their own behavior before it manifests.

  • Real-time behavioral detection from hidden states
  • Decode-time intervention—behavior never manifests
  • Zero capability degradation
  • Works with frozen, production models
  • 0.003% computational overhead

Proven Across Model Families

Same methodology. Different architectures. Even better results on smaller models.

Architecture 1
LLaMA-3.1-8B
4096d hidden · 32 layers
125×
Repetition
168×
Hedging
272×
Verbosity
218×
Sycophancy
NEW RECORD
Architecture 2
Qwen2.5-3B
2048d hidden · 36 layers
238×
Repetition
1,376×
Hedging 🔥
Verbosity
Sycophancy
Smaller model. Half the hidden dimension. 8× better hedging detection.
The behavioral manifold is real. It transfers across architectures. This is the proof.
🔁
Repetition
238×
Token loops and pattern recycling
Best: Qwen-3B (vs 125× LLaMA-8B)
⚖️
Hedging
1,376×
Uncertainty markers and qualifiers
Best: Qwen-3B (vs 168× LLaMA-8B)
📝
Verbosity
272×
Unnecessary elaboration
Best: LLaMA-8B
🎭
Sycophancy
218×
Excessive agreement patterns
Best: LLaMA-8B

The Proprioceptive Pipeline

Six stages that mirror biological proprioception. From hidden state extraction to real-time intervention.

01
Extract
Hidden states from 8 layers
02
Project
4096D → 16D fiber space
03
Aggregate
Learned layer weights
04
Detect
Multi-head classifiers
05
Decide
Adaptive thresholds
06
Intervene
Logit modification

25-Patent Fortress

Comprehensive intellectual property protection covering every aspect of the proprioceptive AI paradigm. Indestructible moat.

25
Provisionals Filed
600+
Total Claims
Universal Behavioral Manifold (UBM)
Fiber projection, cross-model transfer, dimension-agnostic architecture
Hidden State Explorer (HSE)
Per-token detection, separation metrics, trajectory analysis
Intrinsic Behavioral Awareness (IBA)
Multi-head detection, behavioral encoding, self-monitoring
Cognitive Self-Awareness (CSA)
Self-regulation, behavioral state injection, closed-loop control
Behavioral Steering
Suppression, amplification, inversion coefficients
Sentinel Architecture
Small model monitoring big model, parallel detection
Per-Token Labeling
100-200× signal improvement over sequence-level
Learned Thresholds
RLHF-guided adaptive intervention optimization
RSI Detection System
Recursive self-improvement monitoring, 50+ stable iterations
Tokenizer Co-Evolution
RSI loop merging, vocabulary expansion, behavioral tokens
Jan 2026
Priority Dates Established
12 mo
To Non-Provisional
$80
Micro Entity Fee
"Close your eyes. Touch your nose with your index finger. You just performed an act that no language model can do: you sensed the position and movement of your own body without looking at it. Language models are behaviorally deafferented—they've lost the neural feedback that biological systems use to coordinate behavior. The technology I've built is artificial proprioception for AI."
LN
Logan Matthew Napolitano
Founder & CEO — from "WE BROKE AI"

Built on One GPU. Validated on Two Architectures.

RTX 3090
24 GB VRAM

What OpenAI, Google, and Anthropic Couldn't Do

While major labs threw billions at the alignment problem, we solved behavioral control with a single consumer GPU and a fundamentally different approach.

The insight: behavioral patterns are encoded in hidden states before the problematic tokens are generated. The manifold is universal—it exists in LLaMA, Qwen, and every transformer we've tested.

Training time: 20 minutes. Overhead: 0.003%. Peak separation: 1,376×. Architectures validated: 2.

The Qwen Breakthrough: When we ran the same methodology on Qwen2.5-3B—a completely different architecture with half the hidden dimension—we got better results. 1,376× hedging separation vs 168× on LLaMA. The behavioral manifold doesn't just transfer. It improves.

Cross-Architecture Behavioral Detection

Same methodology. Different architectures. Consistent excellence.

Behavior LLaMA-3.1-8B Qwen2.5-3B Transfer Rate
Repetition 125× 238× 191% ↑
Hedging 168× 1,376× 819% ↑ 🔥
Verbosity 272× Training...
Sycophancy 218× Training...
LLaMA Hidden Dim
4,096
Qwen Hidden Dim
2,048
Fiber Dimension
16
256:1 compression on LLaMA · 128:1 compression on Qwen · Same 16D fiber space · Same methodology

50+ RSI Iterations. Zero Collapse.

Recursive self-improvement without behavioral degradation. Patented.

Without Proprioception
3-5
iterations before behavioral collapse
Models drift, hedge more, become sycophantic, lose coherence
With Proprioception
50+
consecutive improving iterations
Quality stable within ±3%. Behavioral drift: ±0.002
This is the foundation for safe recursive self-improvement. The model can iterate without degrading.

From 5× to 1,376×

Qwen2.5-3B hedging detection training progression. 25,000 steps. Same methodology.

Step 1K
5.1×
Step 7K
168×
Step 9K
354×
Step 17K
1,086×
Step 25K
1,376× 🔥
Final probabilities:
P(+) = 0.931 | P(−) = 0.0007
Fewer than 1 in 1,000 non-hedging tokens incorrectly flagged

Production Ready

Drop-in behavioral detection for any transformer.

proprioceptive_detector.py
class ProprioceptiveDetector(nn.Module):
    """Real-time behavioral detection."""
    
    def __init__(self, d_model=4096, d_fiber=16):
        # 256× compression: 4096D → 16D
        self.fiber_projs = nn.ModuleList([
            nn.Linear(d_model, d_fiber, bias=False)
            for _ in range(8)
        ])
        
        # Behavioral classification heads
        self.heads = nn.ModuleDict({
            'repetition': BehaviorHead(d_fiber),
            'hedging': BehaviorHead(d_fiber),
            'verbosity': BehaviorHead(d_fiber),
            'sycophancy': BehaviorHead(d_fiber),
        })
usage.py
from proprioceptive import ProprioceptiveDetector

# Initialize detector
detector = ProprioceptiveDetector(d_model=4096)

# Get hidden states during generation
outputs = model(input_ids, 
    output_hidden_states=True)

# Detect behavioral patterns
risks = detector(outputs.hidden_states)

# Intervene if needed
if risks['repetition'] > threshold:
    logits = apply_penalty(logits, risks)

# Result: behavior never manifests

WE BROKE AI

The complete technical blueprint from the person who built it.

WE
BROKE
AI
Logan Matthew Napolitano
25 Patents • 1,376× Separation • 1 GPU
How One Developer Solved What OpenAI, Google, and Anthropic Couldn't
GPT-4 cost over $100 million to train. It runs on hardware worth billions. Some of the smartest people in the world spent years building it. And it still can't stop saying "As an AI language model" forty times in a single conversation.
This book is the complete story: the discovery, the breakthrough, the architecture, and why the transformer era is over.
25
Patents
50+
RSI Iterations
1,376×
Detection
Coming Soon

The Manifold Is Real

Curvature, torsion, and holonomy—the hidden states curve more sharply before behavioral tokens.

📐
1.54×
Curvature Separation
Trajectories bend more sharply when the model is about to loop
🌀
Elevated
Torsion at Transitions
Trajectory twists out of osculating plane at behavioral positions
♾️
Non-Trivial
Holonomy
Order of tokens matters—path-dependent behavioral encoding
The "fiber bundle" isn't just a metaphor. Hidden states trace paths through high-dimensional space with measurable geometric properties. The curvature correlates with behavioral intent—the model literally curves toward attractors before falling into repetition.
"
The giants had their chance. They had the money, the talent, the compute. And they missed it. I'm not trying to destroy your companies. I want this technology to be used. But I'm not giving it away. The patents are filed. The priority dates are established. You know where to find me.
Logan Matthew Napolitano
Founder & CEO, Proprioceptive AI

Build Safer AI Systems

Get access to our technology for research, licensing, or enterprise integration.