Agents need control flow, not more prompts
The emerging consensus in AI agent development is that reliable systems require traditional software engineering principles, not just elaborate prompt engineering. As LLMs are inherently non-deterministic, developers must wrap them in deterministic control flow, validation, and testing harnesses. This pragmatic approach seeks to leverage LLMs' strengths while mitigating their weaknesses, ultimately pushing the field towards more robust and predictable AI applications.
The Lowdown
The article "Agents need control flow, not more prompts" argues that the current paradigm of developing AI agents, heavily reliant on sophisticated prompting, leads to unreliable and non-deterministic outcomes. It advocates for integrating traditional software engineering principles, such as explicit control flow and validation, to build robust and trustworthy AI systems.
- The Problem with Prompts: Relying solely on prompts to dictate agent behavior is akin to programming with a non-deterministic interpreter: the same input can yield different outputs on each run, and the resulting behavior cannot be validated or tested the way conventional code can.
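The argument above can be sketched concretely. Below is a minimal illustration (not code from the article) of wrapping a non-deterministic LLM call in deterministic control flow: a schema-validation gate with bounded retries. `call_llm` is a hypothetical stand-in for any real model client; here it returns a canned response so the surrounding control flow can be exercised deterministically.

```python
import json


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model client.

    Returns a canned response so the deterministic wrapper around it
    can be demonstrated; swap in any actual LLM API call.
    """
    return '{"item": "widget", "quantity": 3}'


def extract_order(text: str, max_retries: int = 3) -> dict:
    """Wrap the non-deterministic call in deterministic control flow:
    parse, validate against a fixed schema, and retry on failure."""
    prompt = f"Extract item and quantity as JSON from: {text}"
    for _ in range(max_retries):
        raw = call_llm(prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: retry rather than propagate
        # Deterministic validation gate: required keys with required types.
        if isinstance(data.get("item"), str) and isinstance(data.get("quantity"), int):
            return data
    raise ValueError(f"LLM output failed validation after {max_retries} attempts")
```

The point is that the caller of `extract_order` sees only two deterministic outcomes: a schema-valid dict or an exception, regardless of what the model emits.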
The Gossip
Code Generation Over Agentic Execution
A dominant theme suggests that LLMs are best used to *generate* code that then executes deterministically, rather than being the direct executor of complex tasks themselves. Commenters argue this approach allows LLMs to act as 'smart helpers' translating natural language into structured, verifiable software, significantly improving reliability by shifting away from the LLM's non-deterministic direct action.
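One way to picture this theme: the LLM emits a small program, and deterministic tooling verifies it before anything runs. The sketch below (an illustration, not code from the thread) uses Python's `ast` module as the verification layer, allowing only arithmetic expressions to execute; everything else is rejected without ever being run.

```python
import ast

# Whitelist of AST node types for pure arithmetic expressions.
ALLOWED_NODES = {
    ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
    ast.Add, ast.Sub, ast.Mult, ast.Div, ast.USub,
}


def is_safe_expression(code: str) -> bool:
    """Deterministically verify generated code: parse it and reject
    any node outside the arithmetic whitelist (calls, imports, names...)."""
    try:
        tree = ast.parse(code, mode="eval")
    except SyntaxError:
        return False
    return all(type(node) in ALLOWED_NODES for node in ast.walk(tree))


def run_generated(code: str):
    """Execute LLM-generated code only after it passes verification."""
    if not is_safe_expression(code):
        raise ValueError(f"rejected unsafe generated code: {code!r}")
    return eval(compile(ast.parse(code, mode="eval"), "<generated>", "eval"))
```

Here the model's job ends at producing the text of the expression; whether it runs at all is decided by ordinary, testable code.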
Harnessing Hallucinations with Harnesses
The inherent non-determinism and potential for 'hallucination' in LLMs are central to the discussion. While some suggest accepting a failure rate and building compensation controls, a strong majority advocate for wrapping LLM interactions within robust engineering 'harnesses.' These include control flow, validation layers, quality gates, and agentic loops, designed to manage, test, and verify LLM outputs, thereby imposing determinism onto a probabilistic system.
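A 'harness' in this sense might look like the following sketch: a quality gate that runs LLM output through a list of deterministic checks and feeds any failures back into the next attempt, bounding the loop so a probabilistic generator sits inside predictable control flow. All names here are illustrative, not from the discussion.

```python
from typing import Callable, Optional

# A check inspects the output and returns an error message, or None if it passes.
Check = Callable[[str], Optional[str]]


def quality_gate(generate: Callable[[str], str],
                 checks: list[Check],
                 max_attempts: int = 3) -> str:
    """Agentic loop with a deterministic quality gate: generate, run every
    check, and either accept the output or retry with the errors as feedback."""
    feedback = ""
    for _ in range(max_attempts):
        output = generate(feedback)
        errors = [msg for check in checks if (msg := check(output))]
        if not errors:
            return output  # passed every gate
        feedback = "; ".join(errors)  # fed back into the next generation
    raise RuntimeError(f"output never passed the quality gate: {feedback}")
```

A usage example with a fake generator that "improves" on feedback:

```python
attempts = iter(["TODO: fill in summary", "A one-line summary of the article."])
generated = quality_gate(
    generate=lambda feedback: next(attempts),
    checks=[lambda text: "contains TODO placeholder" if "TODO" in text else None],
)
```

The checks themselves are plain functions, so they can be unit-tested independently of any model, which is exactly the determinism the commenters are after.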
Programming's Perennial Principles
Many commenters observe with a mix of humor and exasperation that the proposed solutions—implementing control flow, validation, and deterministic wrappers around LLMs—sound remarkably similar to fundamental software engineering practices established long ago. This theme highlights a perceived 'reinvention of the wheel' as AI agent development converges back towards traditional programming paradigms, with LLMs being a new, powerful component within familiar architectural patterns.