Vibe coding and agentic engineering are getting closer than I'd like
Simon Willison observes a disquieting convergence of 'vibe coding' and 'agentic engineering' as AI agents become reliable enough that he reviews less of their code. The shift prompts a re-evaluation of software quality, the development lifecycle, and the role of human expertise. The piece resonates deeply on HN, where it probes the implications for developers' craft and ignites debate on trust, accountability, and career evolution in an AI-accelerated world.
The Lowdown
Simon Willison shares his evolving perspective on AI coding tools, specifically the blurring lines between 'vibe coding' (fast, unreviewed AI code for personal projects) and 'agentic engineering' (professional, quality-focused AI-assisted development). This realization, initially sparked during a podcast, highlights a fundamental shift in how software is created and evaluated.
- Vibe Coding vs. Agentic Engineering: Willison initially drew a clear distinction: vibe coding was for low-stakes personal projects, while agentic engineering involved experienced professionals leveraging AI to build higher-quality production systems, maintaining oversight. He now finds these boundaries increasingly permeable.
- The Blurring of Lines: As AI agents like Claude Code grow more reliable for routine tasks (e.g., JSON API endpoints), Willison admits to reviewing less code, a 'normalization of deviance' he finds unsettling, since AI agents, unlike human colleagues, have no accountability or professional reputation at stake.
- Rethinking Software Evaluation: Traditional markers of software quality, such as extensive commits, detailed READMEs, and comprehensive tests, can now be rapidly simulated by AI. This shifts the true indicator of quality to whether a piece of software has actually been used and validated in real-world scenarios over time.
- Shifting Bottlenecks in SDLC: AI dramatically increases code output (e.g., 200 lines to 2,000 lines a day), challenging existing software development lifecycle processes designed for slower, human-paced coding. This impacts everything from design (making riskier iterations feasible) to integration.
- Career Resilience: Despite these monumental changes, Willison remains optimistic about his career. He sees AI as an amplifier for experienced engineers, emphasizing that software development remains 'ferociously difficult.' He believes human expertise is still crucial for navigating complexity and driving innovation.
- The 'Plumber' Analogy: He likens the situation to hiring a plumber versus DIY plumbing; for critical systems, people prefer professional, proven solutions. This extends to enterprise software, where adoption hinges on a solution's track record, not just its AI-driven speed of creation.
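To make "routine task" concrete: the JSON API endpoints Willison cites as work he no longer reviews closely are typically boilerplate like the following. This is a minimal sketch using only Python's standard library, not code from the post; the `ITEMS` data and the `/items` route are hypothetical stand-ins for a real datastore and API surface.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory data; stands in for a real datastore.
ITEMS = [{"id": 1, "name": "widget"}, {"id": 2, "name": "gadget"}]

class ItemsHandler(BaseHTTPRequestHandler):
    """Serves GET /items as a JSON document; everything else is a 404."""

    def do_GET(self):
        if self.path == "/items":
            body = json.dumps({"items": ITEMS}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, format, *args):
        # Silence per-request logging for this sketch.
        pass

# To serve: HTTPServer(("127.0.0.1", 8000), ItemsHandler).serve_forever()
```

Code of this shape is exactly what makes skipping review tempting: it is predictable, low-risk in isolation, and tedious to write by hand, which is why agents handle it so reliably.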
In essence, Willison's reflections underscore a future where the definition of 'coding' evolves, demanding new strategies for validation, accountability, and the judicious integration of AI tools, while human judgment and proven usage become paramount in a sea of AI-generated output.
The Gossip
Quality Quandary: Who's Responsible for AI-Generated Code?
The community grapples with the quality and accountability of AI-generated code. Some argue that AI can produce robust, well-tested code if guided correctly; others fear that agents lack the 'pride' or 'ego' of human engineers, leading to subtle, hard-to-spot errors, tech debt, and security vulnerabilities. Many believe ultimate responsibility for production code still rests with human engineers, who must review carefully even 'boilerplate' tasks rather than blindly trust efficient AI output. There is broad agreement that 'truthy' but flawed code is harder to review than obviously broken code.
SDLC Shifts: New Bottlenecks and Roles
The rapid acceleration of code generation by AI is forcing a re-evaluation of the entire software development lifecycle (SDLC). Traditional output metrics such as lines of code (LOC) are heavily debated, but the increased volume undeniably strains review processes. Commenters explore how AI reshapes roles: engineers focusing more on architecture and comprehensive validation suites than on raw code, alongside concerns about losing deep problem-solving skills. Some predict all coding will become 'vibe coding,' with engineering shifting to designing validation mechanisms; others argue that AI primarily exposes and accelerates undisciplined engineering rather than creating it.
Existential Engineering: The Future of Human Programmers
Many participants express profound concerns about the long-term implications of AI for programming careers and the overall software ecosystem. Fears range from a future drowning in unmaintainable 'AI slop' that humans can't comprehend, to the devaluation of experienced engineers as AI absorbs more of their work. The discussion challenges the 'plumber analogy,' suggesting economic pressures might push companies towards cheaper AI solutions, potentially leading to widespread pay cuts or a shift where only highly specialized 'AI supervisors' remain. Others remain optimistic, viewing AI as another evolutionary step, akin to past technological shifts, that will ultimately surface new, more complex problems requiring human ingenuity.