Opus 4.6 to 4.7 Token Inflation is ~45%
Anthropic's Opus 4.7 model has seemingly introduced a hidden price hike, with users reporting a ~45% increase in token consumption for the same input compared to its predecessor, Opus 4.6. This 'token inflation,' caused by a new tokenizer, is swiftly draining user limits and sparking a heated debate on the sustainability and value proposition of closed-source AI models. The situation highlights the precarious balance between performance, cost, and the growing frustration with what many perceive as AI's 'enshittification.'
The Lowdown
A new 'Tokenomics' tool by anabranch has revealed a significant, unannounced cost increase for users of Anthropic's latest AI model, Opus 4.7. The tool, designed to compare token consumption and cost between Opus 4.6 and 4.7 for identical inputs, demonstrates a substantial uptick in resource usage.
- The community average shows a 38.2% increase in both request tokens and their associated cost when moving from Opus 4.6 to 4.7.
- Individual submissions reveal these increases can range from a more modest 26.3% to a hefty 48.8% for the same tasks.
- This 'inflation' is attributed to Opus 4.7's new tokenizer, which, despite maintaining the same per-token pricing, converts input text into a higher number of tokens, effectively raising the price for users.
- The analysis emphasizes that while the new tokenizer is designed to improve performance, the immediate impact is a more expensive service, with no clear communication about the increased token counts.
The findings suggest that Anthropic has effectively implemented a price increase through technical means, leaving users to grapple with rapidly depleted usage limits and questioning the true value of the 'upgrade.'
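The arithmetic behind these figures is straightforward: with the per-token price unchanged, cost scales directly with the token count. A minimal sketch, using hypothetical token counts and an illustrative per-million-token price (not Anthropic's actual rates):

```python
def pct_increase(old_tokens: int, new_tokens: int) -> float:
    """Percent increase in token count from the old tokenizer to the new one."""
    return (new_tokens - old_tokens) / old_tokens * 100

def request_cost(tokens: int, usd_per_mtok: float) -> float:
    """Cost of a request at a flat per-million-token price."""
    return tokens / 1_000_000 * usd_per_mtok

# Hypothetical counts for the same prompt under each model's tokenizer.
old = 1_000   # Opus 4.6
new = 1_382   # Opus 4.7 (matches the 38.2% community average)

print(f"{pct_increase(old, new):.1f}% more tokens")   # 38.2% more tokens

# Same per-token price, so cost rises by the same factor.
price = 15.0  # illustrative $/Mtok, not an actual Anthropic rate
print(f"${request_cost(old, price):.4f} -> ${request_cost(new, price):.4f}")
```

This is why the change reads as a price hike despite unchanged list pricing: the unit price holds steady while the number of billed units grows.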
The Gossip
Token Tally Troubles
Users are reporting dramatically faster consumption of their allocated tokens and hitting usage limits much quicker with Opus 4.7. Many found their daily or weekly caps exhausted after just a handful of prompts, even for minor tasks. This direct financial impact is causing significant frustration and forcing a reevaluation of workflows.
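The effect on fixed usage caps follows from the same inflation factor: the cap is denominated in tokens, so inflated requests exhaust it proportionally sooner. A sketch with entirely hypothetical cap and per-request numbers (the 1.382 factor is the community-average inflation reported above):

```python
def requests_until_cap(cap_tokens: int, tokens_per_request: int) -> int:
    """How many requests fit under a fixed token cap."""
    return cap_tokens // tokens_per_request

cap = 500_000                   # hypothetical weekly token cap
per_request_46 = 4_000          # hypothetical average request on Opus 4.6
per_request_47 = int(per_request_46 * 1.382)  # same work after 38.2% inflation

print(requests_until_cap(cap, per_request_46))  # 125
print(requests_until_cap(cap, per_request_47))  # 90
```

Under these assumptions a user gets roughly 28% fewer requests per week for identical workloads, which is consistent with reports of caps being hit after far fewer prompts.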
Open Options & Oligopoly Outcry
A prevalent theme is the growing disillusionment with proprietary AI models like Claude, leading many to consider or switch to open-source alternatives. Commenters express concern over hard dependencies on multi-billion dollar companies, 'enshittification' as AI providers raise prices, and the potential for a market oligopoly. They hope for open models to become the 'Linux of LLMs,' fostering a more democratic and less costly AI ecosystem.
Quality Quandaries & Value Vexations
The discussion often revolves around whether the increased cost of Opus 4.7 is justified by its performance. While some users find 4.7 to be an improvement, others report it feeling 'dumber' or 'lobotomized,' requiring more prompting or overthinking tasks, leading to a poorer return on investment. There's speculation about a potential conflict of interest, where models might be incentivized to be less efficient to encourage higher token spending.
Skill Erosion & AI Aid Ambivalence
The broader impact of LLMs on human skills and learning is a contentious topic. Some argue that AI aids learning, allowing faster exploration of new domains and increased productivity. Conversely, others fear 'skill atrophy' and 'apathy,' where developers might rely on AI without truly understanding the underlying concepts, potentially leading to lower quality work or a diminished capacity for independent problem-solving.