HN
Today

€54k spike in 13h from unrestricted Firebase browser key accessing Gemini APIs

A developer faced a shocking €54,000 bill in just 13 hours after an unrestricted Firebase browser key was exploited to access Google's Gemini APIs. This incident highlights critical flaws in cloud billing safeguards and Google's historically ambiguous stance on API key security. Hacker News debates the lack of hard spending caps, the motives of the attackers, and Google's perceived indifference to developers' financial precarity.

Score: 187
Comments: 119
Highest Rank: #1
Time on Front Page: 7h
First Seen: Apr 16, 12:00 PM
Last Seen: Apr 16, 6:00 PM

The Lowdown

A Firebase project owner was blindsided by an unexpected €54,000+ charge to their Google Cloud account for Gemini API usage over a mere 13 hours. This massive spike occurred shortly after they enabled Firebase AI Logic for a simple web snippet generation feature.

  • The project, initially used only for Firebase Authentication, experienced automated traffic unrelated to legitimate user activity.
  • An €80 budget alert and a cost anomaly alert were both in place, but both fired with significant delays; by the time they triggered, costs had already soared past €28,000, eventually settling above €54,000 due to reporting lags.
  • Google Cloud support denied a billing adjustment, classifying the usage as valid because it originated from the project, despite clear evidence of anomalous activity.
  • The developer is seeking guidance on preventing similar incidents, exploring safeguards beyond App Check and moving calls server-side, and asking about potential escalation paths for such disputed charges.

This incident underscores the precarious nature of cloud billing for developers, particularly with AI-related services, and raises serious questions about platform providers' responsibilities in preventing financial catastrophes for their users.

The Gossip

Billing Blunders and Budgetary Barriers

Many commenters expressed frustration and disbelief at the major cloud providers (Google, AWS, Azure) for lacking effective hard spending caps and relying instead on delayed alerts. This 'pay-as-you-go' model is seen as a trap, especially with AI services where costs can explode rapidly. There's a strong sentiment that major providers deliberately avoid implementing immediate kill-switches or prepaid options, prioritizing revenue over user protection, while smaller providers often offer better controls.
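The closest thing Google currently offers to a kill-switch is wiring a Cloud Billing budget to a Pub/Sub topic and having a Cloud Function detach the billing account when cost exceeds budget. Budget notifications carry `costAmount` and `budgetAmount` fields; a sketch of the decision logic (the surrounding function plumbing is assumed, and note this is exactly the delayed, best-effort mechanism commenters object to, not a hard cap):

```python
import json

def should_disable_billing(pubsub_message: str) -> bool:
    """Decide whether to cut billing, given a budget-notification payload.

    Cloud Billing budgets can publish JSON messages such as
    {"costAmount": 120.5, "budgetAmount": 80.0, ...} to Pub/Sub.
    A Cloud Function receiving this would, on True, call the Cloud
    Billing API to detach the project's billing account, which halts
    paid usage -- but only after the (possibly hours-late) spend data
    has reached the budget system.
    """
    data = json.loads(pubsub_message)
    return data["costAmount"] > data["budgetAmount"]
```

In this incident, reporting lag meant costs were past €28,000 before any such signal could have fired, which is the crux of the complaint.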

API Key Conundrums and Compromises

A central theme revolves around the historical understanding of Google API keys as 'non-secrets' (especially for client-side Firebase/Maps usage) versus their current role in authenticating to sensitive Gemini AI services. Commenters note Google's shifting guidelines and perceived lack of clarity, leading to vulnerabilities where old, publicly exposed keys can now incur massive AI bills. The discussion also touches on the difficulty of restricting API keys effectively to prevent such misuse.
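One concrete mitigation raised is tightening key restrictions so a leaked browser key cannot reach Gemini at all: an `--api-target` allowlist limits a key to named services. A sketch using the gcloud API Keys commands (the key ID, project, and referrer are placeholders, and the service list assumes the key is only needed for Firebase Authentication):

```shell
# List keys in the project to find the exposed one.
gcloud services api-keys list --project=my-project

# Restrict the key to the APIs the web app actually needs and to
# requests from your own site; with an allowlist in place, the key
# cannot call generativelanguage.googleapis.com at all.
gcloud services api-keys update KEY_ID \
  --api-target=service=identitytoolkit.googleapis.com \
  --allowed-referrers="https://example.com/*"
```

Referrer restrictions alone are weak (the header is spoofable from non-browser clients), which is why commenters treat the API allowlist, not the referrer check, as the load-bearing control.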

Exploiting Engines for Enormous Expenditures

The community debated the motivations behind such a rapid and expensive exploitation of AI APIs. Suggestions included using the LLM for spam generation (to bypass filters), brute-forcing problems, running 'LLM as a service' with stolen keys, or even state-sponsored malicious activities like 'economic destruction' through distributed damage. Distillation (training smaller models with a larger one) was also proposed as a potential payoff for token-intensive usage.

Google's Guilt and Governance

Many comments lambasted Google for its handling of the situation, calling it an 'anti-feature' and a 'trap.' There's a strong belief that Google's billing system is intentionally designed to allow such overruns, and that a company specializing in AI should be able to detect and prevent such anomalous usage. Some critics went further, suggesting this is Google's 'worst security incident' due to its potential impact on user data (via exposed keys) and the company's perceived lack of transparency or proactive solutions.