HN
Today

Show HN: CodeBurn – Analyze Claude Code token usage by task

CodeBurn is an open-source interactive terminal UI that breaks down AI coding assistant token usage by task, exposing cost inefficiencies that are otherwise hidden from developers. Its creator discovered that 56% of their Claude Code spend went to conversational turns without any tool usage, a widespread but largely invisible problem. The tool resonated on HN for bringing much-needed transparency to the black box of AI model costs, letting engineers see and optimize their spending.

Score: 49 · Comments: 13 · Highest Rank: #10 · On Front Page: 12h
First Seen: Apr 16, 5:00 PM · Last Seen: Apr 17, 4:00 AM
Rank Over Time: (chart not reproducible as text)

The Lowdown

CodeBurn, a new open-source project by AgentSeal, provides an interactive terminal user interface for analyzing token usage and costs across AI coding assistants. The author built it out of frustration with spending roughly $1,400/week (API-equivalent) on Claude Code while having little visibility into where the tokens were actually going. The analysis produced a surprising discovery: more than half of that spend went to simple conversational turns rather than actual code generation or editing.

  • Comprehensive Coverage: CodeBurn integrates with popular AI coding tools like Claude Code, Codex (OpenAI), Cursor, OpenCode, Pi, and GitHub Copilot by reading their local session data (JSONL transcripts, SQLite databases).
  • Granular Cost Breakdown: It classifies AI interactions into 13 distinct task categories (e.g., Coding, Debugging, Refactoring, Conversation, Exploration) based on tool usage patterns, providing a detailed understanding of where tokens are spent.
  • Key Finding: The author's initial analysis revealed that approximately 56% of Claude Code usage was for conversation turns without tool execution, while actual coding tasks (edits/writes) accounted for only 21%.
  • Interactive Interface: The tool features an Ink-powered terminal UI with gradient bar charts, responsive panels, and keyboard navigation. It also includes a macOS SwiftBar menu bar integration for quick cost checks.
  • Advanced Features: CodeBurn tracks "one-shot success rates" for coding tasks, indicating how often the AI gets it right on the first try, and supports various currencies with real-time exchange rates.
  • No API Keys Required: It operates by directly parsing local session files, ensuring privacy and ease of use, and uses LiteLLM for accurate pricing data.
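To make the classification idea concrete, here is a minimal sketch that tallies tokens per category from a Claude Code-style JSONL transcript. The record shape (`message.usage`, `tool_use` content blocks) and the two-way conversation/coding split are simplifying assumptions for illustration, not CodeBurn's actual 13-category scheme.

```python
import json

# Hypothetical, simplified transcript lines in the rough shape of a
# Claude Code JSONL session: each assistant turn carries token usage
# and a list of content blocks.
SAMPLE_JSONL = "\n".join(json.dumps(rec) for rec in [
    {"type": "assistant", "message": {
        "usage": {"input_tokens": 1200, "output_tokens": 300},
        "content": [{"type": "text", "text": "Here is what I would do..."}]}},
    {"type": "assistant", "message": {
        "usage": {"input_tokens": 2500, "output_tokens": 800},
        "content": [{"type": "tool_use", "name": "Edit", "input": {}}]}},
])

def tally_by_task(jsonl_text: str) -> dict:
    """Classify each assistant turn by tool usage and sum its tokens."""
    totals = {"conversation": 0, "coding": 0}
    for line in jsonl_text.splitlines():
        rec = json.loads(line)
        if rec.get("type") != "assistant":
            continue
        msg = rec.get("message", {})
        usage = msg.get("usage", {})
        tokens = usage.get("input_tokens", 0) + usage.get("output_tokens", 0)
        blocks = msg.get("content", [])
        tools = {b.get("name") for b in blocks if b.get("type") == "tool_use"}
        # Turns that invoke an editing tool count as coding; turns with
        # no tool calls at all count as pure conversation.
        if tools & {"Edit", "Write"}:
            totals["coding"] += tokens
        elif not tools:
            totals["conversation"] += tokens
    return totals

print(tally_by_task(SAMPLE_JSONL))  # {'conversation': 1500, 'coding': 3300}
```

With real transcripts, the per-category totals are what surface findings like "56% conversation, 21% edits/writes".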
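The "one-shot success rate" metric mentioned above also reduces to simple counting; a hedged sketch, assuming tasks are represented as the number of edit attempts each one took (the real tool's definition may differ):

```python
def one_shot_rate(attempts_per_task: list[int]) -> float:
    """Fraction of coding tasks resolved with exactly one edit attempt."""
    if not attempts_per_task:
        return 0.0
    one_shots = sum(1 for attempts in attempts_per_task if attempts == 1)
    return one_shots / len(attempts_per_task)

# Four tasks: two succeeded on the first edit, two needed retries.
print(one_shot_rate([1, 3, 1, 2]))  # 0.5
```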

CodeBurn empowers developers to gain critical insights into their AI coding assistant expenditures, identify inefficiencies, and make informed decisions to optimize their workflow and reduce costs.
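The "API-equivalent" figures discussed throughout reduce to a simple rate calculation. The sketch below uses illustrative per-million-token prices; the actual tool reportedly pulls current rates from LiteLLM's pricing data rather than hardcoding them.

```python
# Illustrative per-million-token USD rates; these numbers are assumptions
# for the sketch, not any provider's current price list.
PRICE_PER_MTOK = {
    "input": 3.00,    # assumed input rate, $/1M tokens
    "output": 15.00,  # assumed output rate, $/1M tokens
}

def api_equivalent_cost(input_tokens: int, output_tokens: int) -> float:
    """API-equivalent USD cost for the given token counts."""
    return (input_tokens * PRICE_PER_MTOK["input"]
            + output_tokens * PRICE_PER_MTOK["output"]) / 1_000_000

# A heavy week of e.g. 300M input + 30M output tokens works out to
# $900 + $450 = $1350 at these assumed rates, i.e. the same order of
# magnitude as the author's ~$1,400/week figure.
print(round(api_equivalent_cost(300_000_000, 30_000_000), 2))  # 1350.0
```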

The Gossip

Costly Conversations & Cost Confusion

Commenters were initially taken aback by the author's `$1400/week` spend, some expressing disbelief given their own lower costs on subscription plans. The author clarified that this figure represents the API-equivalent cost of tokens consumed, not out-of-pocket spend on a Max plan, highlighting the often-hidden underlying expenses of heavy AI usage. This discussion underscored the value of CodeBurn in making these opaque costs transparent, regardless of the payment model.

Comparative Code Cost Calculators

Several users pointed out similar existing tools or projects with overlapping functionality, such as 'Claudoscope' and 'Clauderank'. This overlap signals a shared community need for AI token cost observability and lends validation to CodeBurn's approach. Suggestions for future features also surfaced, such as evaluating inefficiencies and recommending cost-saving improvements.

Cursor & CLI Catch-Up

A user quickly identified that CodeBurn's Cursor support did not extend to Cursor Agent (CLI). The author promptly acknowledged this, confirming that support for Cursor's CLI transcripts (`~/.cursor/projects/*/agent-transcripts/`) was next on the roadmap, demonstrating active development and responsiveness to user feedback.