ATOM
Verified reasoning through multi-model consensus
The bottleneck in AI-assisted reasoning isn't knowledge retrieval but navigation: knowing which claims to trust, when to stop decomposing, and how to surface genuine disagreement between models.
Atomic decomposition
Questions are recursively split into the smallest independently verifiable claims, creating a reasoning tree where each leaf can be checked in isolation.
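The tree described above can be sketched as a simple recursive structure. This is a minimal illustration, assuming a plain node type; the real ATOM claim representation and verifier interface are not shown here.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """One node in the reasoning tree (hypothetical sketch, not ATOM's actual type)."""
    text: str
    children: list["Claim"] = field(default_factory=list)

    def leaves(self) -> list["Claim"]:
        """Leaf claims are the independently verifiable units."""
        if not self.children:
            return [self]
        return [leaf for child in self.children for leaf in child.leaves()]

# Example: one question split into sub-claims, one of which splits again.
root = Claim("The 2017 Transformer paper has been cited over 100k times", [
    Claim("The Transformer paper was published in 2017"),
    Claim("Its citation count exceeds 100k", [
        Claim("Google Scholar lists a citation count for the paper"),
        Claim("That count exceeds 100,000"),
    ]),
])
print([c.text for c in root.leaves()])  # three leaves, each checkable in isolation
```

Each leaf can then be sent to the consensus stage independently.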
Multi-model consensus
Three diverse 1B-parameter models outperform a single 8B model. Disagreement between models is a feature, not a bug: it surfaces genuine ambiguity in the evidence.
77% one-hop-short finding
Analysis of 14,502 classified predictions revealed that 77% of reasoning failures occur because the chain stops one decomposition step too early.
FRSM substrate direction
A learned stopping heuristic that predicts when further decomposition will yield diminishing returns, moving toward substrate-independent reasoning evaluation.
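One way such a heuristic could look, sketched as a logistic score over a few plausible features. The feature names, weights, and threshold below are illustrative assumptions, not the trained FRSM model:

```python
import math

def should_stop(depth: int, child_agreement: float, novelty: float,
                weights=(-0.6, 2.5, -1.8), bias=0.5) -> bool:
    """Return True when another decomposition step looks like diminishing returns.

    depth           -- current depth in the reasoning tree
    child_agreement -- consensus rate among models on this claim (0..1)
    novelty         -- estimated new information from one more split (0..1)
    Weights and bias are hypothetical, standing in for learned parameters.
    """
    z = bias + weights[0] * depth + weights[1] * child_agreement + weights[2] * novelty
    return 1.0 / (1.0 + math.exp(-z)) > 0.5  # sigmoid threshold

# Deep node where models already agree and a further split adds little:
print(should_stop(depth=3, child_agreement=0.9, novelty=0.1))  # True
# Shallow node with low agreement and high expected novelty:
print(should_stop(depth=1, child_agreement=0.3, novelty=0.8))  # False
```

Given the 77% one-hop-short finding, a heuristic like this would be tuned to err toward one more decomposition step rather than one fewer.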
Built for local-first inference with MLX on Apple Silicon, with OpenRouter fallback for model diversity.
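The local-first-with-fallback pattern can be sketched as below. The backends are placeholder callables, not real MLX or OpenRouter client code:

```python
from typing import Callable

def generate_with_fallback(prompt: str,
                           local: Callable[[str], str],
                           remote: Callable[[str], str]) -> str:
    """Prefer on-device inference; fall back to the hosted API on failure.

    `local` and `remote` are hypothetical stand-ins for an MLX-backed
    generator and an OpenRouter-backed client, respectively.
    """
    try:
        return local(prompt)   # on-device model, e.g. via MLX on Apple Silicon
    except Exception:
        return remote(prompt)  # hosted fallback, e.g. via OpenRouter

# Demo with stand-in backends:
def failing_local(prompt: str) -> str:
    raise RuntimeError("model not loaded")

print(generate_with_fallback("2+2?", failing_local, lambda p: "4 (remote)"))
```

Keeping both backends behind the same callable signature also makes it easy to swap in extra remote models when the consensus stage needs more diversity than the local pool provides.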
Active development. Core decomposition and consensus pipeline complete. Currently refining the FRSM stopping heuristic and building the evaluation harness.