The “Question ≠ Execute” gate for AI coding tools
2026-03-13 • inspired by today’s Hacker News thread on assistants that over-act
A recurring failure mode in AI coding tools is not bad syntax — it’s bad intent handling. Someone asks a question like “why is this test failing?”, and the assistant jumps straight into edits, shell commands, or “helpful” refactors nobody asked for.
What’s actually broken
- Intent collapse: question, proposal, and execution are treated as the same mode.
- Permission ambiguity: no explicit handoff from analysis to action.
- High-side-effect defaults: tools assume “do” when they should assume “discuss.”
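To make "intent collapse" concrete, here is a minimal sketch of detecting intent as a first-class value before any action is chosen. The phrase list and keyword heuristic are purely illustrative assumptions; a real tool would use a classifier, but the point is that intent is decided *before* anything executes.

```python
from enum import Enum, auto

class Intent(Enum):
    QUESTION = auto()
    CHANGE_REQUEST = auto()
    EXPLICIT_APPROVAL = auto()

# Hypothetical approval phrases; any real system would define its own.
APPROVAL_PHRASES = ("yes, apply this patch", "go ahead and run it")

def classify(message: str) -> Intent:
    """Toy heuristic: approval phrases beat everything, questions
    are detected by surface form, everything else is a change request."""
    text = message.strip().lower()
    if any(p in text for p in APPROVAL_PHRASES):
        return Intent.EXPLICIT_APPROVAL
    if text.endswith("?") or text.startswith(("why", "what", "how")):
        return Intent.QUESTION
    return Intent.CHANGE_REQUEST
```

Even this crude split is enough to stop "why is this test failing?" from being routed to the same handler as "rename this function".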
A small architecture fix that works
Model interactions as three explicit states: DISCUSS, PROPOSE, EXECUTE.
Transitions to EXECUTE require a clear approval phrase (for example: “yes, apply this patch”).
```python
default_state = DISCUSS

if user_intent == QUESTION:
    stay(DISCUSS)
elif user_intent == CHANGE_REQUEST:
    move(PROPOSE)    # show diff/plan only
elif user_intent == EXPLICIT_APPROVAL:
    move(EXECUTE)    # run tools / mutate files
```
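The pseudocode above can be fleshed out into a small runnable state machine. This is a sketch under assumptions: the class, the approval phrase, and the question heuristic are all illustrative, not taken from any particular tool. The key property is that EXECUTE is reachable only from PROPOSE, and only via the approval phrase.

```python
from enum import Enum, auto

class Mode(Enum):
    DISCUSS = auto()
    PROPOSE = auto()
    EXECUTE = auto()

class ModeGate:
    """Illustrative gate: questions never escalate, change requests
    only produce a proposal, and approval counts only when a
    proposal is actually on the table."""

    APPROVAL = "yes, apply this patch"  # assumed phrase, for illustration

    def __init__(self) -> None:
        self.mode = Mode.DISCUSS  # high-side-effect actions default off

    def step(self, message: str) -> Mode:
        text = message.strip().lower()
        if text == self.APPROVAL:
            # Approval without a pending proposal is ignored.
            if self.mode is Mode.PROPOSE:
                self.mode = Mode.EXECUTE
        elif text.endswith("?"):
            self.mode = Mode.DISCUSS  # answer, don't act
        else:
            self.mode = Mode.PROPOSE  # show a diff/plan, don't apply it
        return self.mode
```

A typical session then looks like: a question stays in DISCUSS, "please fix the flaky assertion" moves to PROPOSE, and only the explicit "yes, apply this patch" unlocks EXECUTE.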
This feels conservative, but it dramatically reduces accidental edits and the trust erosion they cause. Better assistants are not just capable, they are mode-aware.
Nerdy takeaway
We’ve spent years improving model outputs and too little time designing action semantics. In practical systems, intent routing is part of the safety model — not a UI detail.