Over the past eight months, I have been working extensively with looped transformer models for ARC-AGI. The goal, broadly, is to bridge the gap between machine and human-level reasoning: to build systems that can generalize the way we do. Spending this much time thinking about that gap will inevitably make you start asking what the gap actually is. How are human reasoning and expression fundamentally different from a machine's?
That question led me to Neil Lawrence's Living Together: Mind and Machine Intelligence, and I think his framing is one of the most clarifying I've come across.
The embodiment gap
Lawrence's move is to flip the usual question: instead of asking how intelligent machines are, ask about their embodiment factor, the ratio between how much a system can compute and how much it can communicate.
For humans, this ratio is enormous. We process something on the order of an exaflop internally, but can only communicate at around 100 bits per second.† We are bottlenecked. Everything we think stays mostly inside us; what we share is a tiny, carefully compressed signal. Lawrence argues this constraint is actually foundational to how human intelligence works. We evolved to infer, to read intent, to predict what others will do, precisely because we can't just broadcast our full internal state to one another.
Machines have the opposite problem. A large model can push gigabits of data per second, but its compute relative to that bandwidth is comparatively small. It doesn't have an interior that dwarfs its exterior. In some sense, it has no inside.
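To make the contrast concrete, here is a back-of-the-envelope sketch of the embodiment factor as a ratio. The human figures (roughly an exaflop of internal compute against ~100 bits per second of communication) are the ones above; the machine figures are my own illustrative assumptions, so treat the output as an order-of-magnitude comparison rather than a measurement.

```python
# Lawrence's "embodiment factor": compute rate divided by communication rate.
# Human numbers are from the text; machine numbers are illustrative assumptions.

HUMAN_COMPUTE_FLOPS = 1e18      # ~1 exaflop of internal processing
HUMAN_COMM_BITS_PER_S = 100     # ~100 bits/s of speech or writing

MACHINE_COMPUTE_FLOPS = 1e15    # assumed: a large accelerator, ~1 petaflop
MACHINE_COMM_BITS_PER_S = 1e10  # assumed: ~10 gigabits/s of I/O

def embodiment_factor(compute_flops: float, comm_bits_per_s: float) -> float:
    """Ratio of what a system can compute to what it can communicate."""
    return compute_flops / comm_bits_per_s

print(f"human:   {embodiment_factor(HUMAN_COMPUTE_FLOPS, HUMAN_COMM_BITS_PER_S):.0e}")
print(f"machine: {embodiment_factor(MACHINE_COMPUTE_FLOPS, MACHINE_COMM_BITS_PER_S):.0e}")
# human:   1e+16  -> an interior that dwarfs its channel out
# machine: 1e+05  -> bandwidth nearly keeps pace with compute
```

Under these assumptions the human ratio comes out around eleven orders of magnitude larger, which is the asymmetry the rest of the argument rests on.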
The more unsettling point Lawrence makes: the real risk isn't a conscious machine that decides to harm us. It's a non-conscious system, one with no moral reasoning, no sense of stakes, operating at scale, quietly shaping behavior and preferences without anyone deliberating about whether it should. The danger isn't the machine that thinks. It's the one that doesn't.