HN Today

Claude Token Counter, now with model comparisons

Simon Willison has upgraded his popular Claude Token Counter to allow for direct model comparisons, revealing key differences in tokenization. This update is particularly relevant for developers as it sheds light on the often-hidden cost implications of new models like Claude Opus 4.7. Understanding these token count variations is crucial for managing API expenses and optimizing prompts in real-world AI applications.

Score: 13
Comments: 1
Highest Rank: #4
Time on Front Page: 11h
First Seen: Apr 20, 3:00 AM
Last Seen: Apr 20, 1:00 PM
Rank Over Time: (chart omitted)

The Lowdown

Simon Willison has enhanced his Claude Token Counter tool, now enabling users to compare token counts across various Claude models. This update is especially pertinent following the release of Claude Opus 4.7, which introduced significant changes to its tokenizer and image handling, directly impacting the cost-effectiveness of API calls.
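Comparisons like the tool's can be reproduced directly against Anthropic's token-counting endpoint (`client.messages.count_tokens` in the official Python SDK). A minimal sketch; the model IDs and the `RUN_LIVE_COUNTS` opt-in flag are illustrative assumptions, not the tool's actual implementation:

```python
# Sketch: compare how many input tokens one prompt costs across Claude models,
# using Anthropic's count_tokens endpoint. The pure helper `ratios` does the
# comparison arithmetic; the live API call is opt-in and needs an API key.
import os

MODELS = [  # hypothetical model IDs; check Anthropic's current model list
    "claude-opus-4-6",
    "claude-opus-4-7",
]

def ratios(counts: dict[str, int], baseline: str) -> dict[str, float]:
    """Token-count ratio of each model relative to a baseline model."""
    base = counts[baseline]
    return {model: round(n / base, 2) for model, n in counts.items()}

def count_for_models(text: str) -> dict[str, int]:
    """Ask the API how many input tokens `text` is under each model."""
    import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY
    client = anthropic.Anthropic()
    counts = {}
    for model in MODELS:
        resp = client.messages.count_tokens(
            model=model,
            messages=[{"role": "user", "content": text}],
        )
        counts[model] = resp.input_tokens
    return counts

if __name__ == "__main__" and os.environ.get("RUN_LIVE_COUNTS"):
    print(ratios(count_for_models("Hello, world"), baseline=MODELS[0]))
```

A ratio above 1.0 for the same text is exactly the tokenizer inflation discussed below: the newer model simply spends more tokens on identical input.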

  • The updated tool supports comparisons between Claude Opus 4.7 and 4.6, Sonnet 4.6, and Haiku 4.5, highlighting how different models process text.
  • Claude Opus 4.7 uses a new tokenizer that, as Anthropic noted, can produce 1.0-1.35x the token count of previous versions for the same input.
  • Practical tests with the Opus 4.7 system prompt showed 1.46x as many tokens under Opus 4.7's tokenizer as under Opus 4.6's.
  • Despite consistent per-token pricing, this inflation means Opus 4.7 text inputs could be approximately 40% more expensive in practice.
  • Opus 4.7 also boasts improved vision capabilities, supporting images up to 2,576 pixels on the long edge, a substantial increase from prior models.
  • Counting tokens for a high-resolution image yielded 3.01x the token count for Opus 4.7 versus Opus 4.6. That jump is attributable to Opus 4.7's higher-resolution handling: smaller images show negligible differences between the two models.
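The cost impact of the tokenizer change is simple arithmetic: per-token prices are unchanged, so input cost scales linearly with the token ratio, and the observed 1.46x ratio works out to about 46% higher input cost. A worked sketch (the price per million tokens is illustrative, not Anthropic's actual rate):

```python
def input_cost(tokens: int, usd_per_mtok: float) -> float:
    """Input cost in USD at a given price per million input tokens."""
    return tokens * usd_per_mtok / 1_000_000

# Same text, same per-token price; only the tokenizer differs.
PRICE = 15.0             # illustrative USD per million input tokens
old_tokens = 10_000      # count under the Opus 4.6 tokenizer
new_tokens = int(old_tokens * 1.46)  # 1.46x ratio observed for the system prompt

old_cost = input_cost(old_tokens, PRICE)
new_cost = input_cost(new_tokens, PRICE)
print(f"{old_cost:.3f} -> {new_cost:.3f} USD "
      f"({(new_cost / old_cost - 1) * 100:.0f}% more)")
# -> 0.150 -> 0.219 USD (46% more)
```

The same linear scaling explains why a 3.01x image token count translates directly into a 3.01x image input cost at unchanged per-token pricing.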

This tool provides invaluable transparency for developers navigating the complexities and costs associated with integrating advanced AI models into their workflows, particularly concerning token usage and the real financial impact of model updates.