The Future of Everything Is Lies, I Guess: Annoyances
Aphyr's latest installment in "The Future of Everything Is Lies, I Guess" predicts a future in which AI, and LLMs in particular, becomes a pervasive source of annoyance in customer service, diffuses accountability, and sets off an exhausting "agentic commerce" arms race. This critical take resonates with the Hacker News audience, which is often skeptical of corporate AI deployments and the erosion of human interaction. The piece details how these systems will be designed to frustrate and confuse, turning everyday transactions into battlegrounds against inscrutable algorithms.
The Lowdown
This article, part of the larger series "The Future of Everything Is Lies, I Guess," examines the many "annoyances" likely to follow the widespread deployment of machine learning, particularly Large Language Models (LLMs). It argues that these systems will reshape our daily dealings with companies and commerce, usually for the worse, turning mundane tasks into frustrating contests with algorithms.
- Customer Service Degradation: The author predicts that companies will increasingly route customer service through LLM-powered chatbots, making it significantly harder to reach human support. Because these systems prioritize cost savings over problem resolution, they will produce frustrating loops and misinformation while failing at complex issues, and human interaction will become a luxury reserved for high-value customers.
- Algorithmic Drudgery: Beyond customer service, LLMs will permeate all kinds of "fuzzy" decision-making, from pricing to insurance claims. Individuals will face a new form of digital drudgery, learning to game algorithms (e.g., trying different browsers for flight prices, or specific phrasing for medical claims) just to get through everyday life, yielding a "dismal future" of constant arguing with machines.
- Diffusion of Responsibility: The increasing complexity and opacity of ML systems will further diffuse accountability when errors or harm occur. Drawing parallels to sociotechnical system failures, the author argues that identifying responsibility for issues caused by LLMs (e.g., wrongful arrests, biased financial decisions) will become nearly impossible, despite human involvement at various stages of their development and deployment.
- Agentic Commerce Arms Race: The rise of "agentic commerce," where LLMs make purchasing decisions on behalf of consumers, will trigger an advertising and manipulation arms race. Companies will develop tactics to influence consumer LLMs through "ads for LLMs" and SEO-like strategies, while consumer LLMs will engage in aggressive negotiations, creating bizarre, complex, and fraud-prone digital interactions rife with dark patterns.
- Societal Impact: The article concludes by painting a pessimistic picture of an "obnoxious equilibrium" where individuals are compelled to deploy their own LLMs to contend with this new environment. This could lead to increased fraud, complex adjudication processes, and higher costs, with only the wealthy potentially able to bypass this digital gauntlet.
Ultimately, the article foresees profit-driven ML technologies generating pervasive digital annoyance, eroding accountability, and making everyday transactions slower, more convoluted, and often less fair.