HN
Today

Don't Wait for Claude

This article challenges the common developer practice of waiting for slow AI agents like Claude, proposing instead to parallelize tasks and manage context with external state. It argues that the bottleneck is not AI throughput, but human idleness during AI processing. The author introduces a tool, 'jc', designed to orchestrate multiple AI sessions, sparking debate on the efficacy of multitasking in an AI-driven workflow.

Score: 11
Comments: 14
Highest Rank: #2
Time on Front Page: 1h
First Seen: Mar 27, 6:00 PM
Last Seen: Mar 27, 6:00 PM

The Lowdown

The author, Jay McCarthy, highlights a significant bottleneck in AI-assisted development: the developer waiting idly while AI agents like Claude process requests, which can take several minutes. While prompt optimization helps, the core problem is human downtime. The proposed solution is to run multiple AI sessions in parallel, switching between them.

  • The Bottleneck: Developers typically wait seven minutes for Claude, review, then send corrections, leading to only about four cycles of work per hour.
  • The Switch: The article advocates working on other tasks while Claude runs, using notifications to signal completion. However, managing the mental state and context across multiple sessions proves challenging for humans.
  • Externalizing State: The core strategy is to document current context, corrections, and next steps in a persistent location rather than relying on memory. This allows seamless switching and returning to a task.
  • DIY Limitations: The author describes attempts to implement this workflow manually using tools like Zed, which ultimately failed due to friction in note-taking, lack of clear notifications, and difficulty navigating multiple sessions.
  • The 'jc' Tool: The article introduces 'jc', a native macOS app built specifically to manage multiple Claude Code sessions. It integrates diff views, a TODO editor, and smart navigation for cycling through prioritized tasks, aiming to boost productivity from four to twelve cycles per hour.
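
The "externalizing state" idea above can be sketched in a few lines: each session's context, pending corrections, and next steps live in a file rather than in the developer's head, so switching away and back is cheap. This is a hypothetical illustration, not jc's actual implementation; the file name, fields, and the `save_session`/`resume_session` helpers are all invented for the example.

```python
import json
from pathlib import Path

# Hypothetical persistent store: one JSON file holding the state of
# every in-flight AI session, keyed by a session name.
STATE_FILE = Path("sessions.json")

def save_session(name, context, corrections, next_steps):
    """Record everything needed to resume this session later."""
    state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
    state[name] = {
        "context": context,
        "corrections": corrections,
        "next_steps": next_steps,
    }
    STATE_FILE.write_text(json.dumps(state, indent=2))

def resume_session(name):
    """Reload a session's externalized state instead of relying on memory."""
    return json.loads(STATE_FILE.read_text())[name]

# While Claude churns on "auth-refactor", park its state and move on.
save_session(
    "auth-refactor",
    context="migrating login to OAuth",
    corrections=["handle token refresh"],
    next_steps=["review diff", "run tests"],
)
print(resume_session("auth-refactor")["next_steps"])
```

The point is not the storage format but the discipline: if the notes are complete enough to resume from cold, the human cost of a context switch drops sharply.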

The essence of the argument is that productivity gains come not from faster AI, but from optimizing the human workflow around AI, treating AI agents as asynchronous coworkers rather than single-threaded tools.
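
The four-to-twelve-cycles claim follows from simple throughput arithmetic, which can be sketched as a minimal model. The 7-minute Claude run comes from the article; the 5 minutes of human review per cycle is an assumed figure chosen for illustration.

```python
def cycles_per_hour(wait_min, review_min, sessions=1):
    """Serial work is bounded by the full wait+review cycle; with enough
    parallel sessions the AI wait overlaps other reviews, so human review
    time becomes the hard cap on throughput."""
    serial = 60 / (wait_min + review_min)
    return min(sessions * serial, 60 / review_min)

# Assumed numbers: 7-minute Claude run, 5 minutes of review per cycle.
print(cycles_per_hour(7, 5))              # one session → 5.0 cycles/hour
print(cycles_per_hour(7, 5, sessions=3))  # three sessions → 12.0 cycles/hour
```

Under these assumptions, three interleaved sessions saturate the human reviewer, which lands near the serial and parallel figures quoted above; beyond that point, adding sessions buys nothing, because the developer, not the AI, is the bottleneck.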

The Gossip

Context Switching Catastrophe

A prevailing sentiment among commenters is strong disagreement with the article's premise of parallelizing AI tasks, citing the well-documented high cost of human context switching. Many argue that the mental overhead required to juggle multiple complex tasks, even with an organizational tool, would negate any perceived time savings, leading to inefficiency and frustration.

Prompting Perfection vs. Parallel Processing

Several commenters suggest that if an AI agent like Claude consistently takes seven minutes for a task, the issue lies not with the waiting period, but with the initial prompt or the task's scope. They advocate for refining prompts, breaking down problems into smaller, quicker steps, or focusing on robust validation strategies rather than simply parallelizing long-running, ill-defined AI tasks.

Sarcasm and AI Authorship

The article's provocative closing line, "I don’t accept human-authored code," spurred discussion about its sincerity. Commenters questioned whether it was a genuine stance or a sarcastic jab, and some humorously speculated if the article itself was generated by an LLM, given its subject matter and tone.

Demand for Demonstrations

Many readers found the description of the `jc` tool and the proposed workflow abstract, expressing difficulty in visualizing its practical application. They requested concrete examples, video tutorials, or walk-throughs to better understand how the system operates and its actual benefits in a real-world coding scenario.