The Future of Everything Is Lies, I Guess: New Jobs
Aphyr's latest installment in his 'Future of Everything Is Lies' series speculates on the wild new job roles emerging at the volatile intersection of human and AI systems. It explores the surprising human-centric tasks required to manage AI's inherent flaws and accountability gaps, prompting Hacker News to ponder the practical, ethical, and semantic implications of these future careers.
The Lowdown
Aphyr's article, part nine of a ten-part series, explores the new and often unsettling job categories likely to arise as machine learning (ML) systems become more pervasive. These roles exist primarily at the boundary where human intelligence and accountability must interface with the unpredictable behavior of AI, particularly Large Language Models (LLMs).
- Incanters: These individuals specialize in prompting and coaxing LLMs to produce desired results, mastering the art of interacting with their peculiar and often temperamental behavior.
- Process Engineers: Tasked with designing and implementing quality control mechanisms to prevent AI-generated errors (e.g., hallucinations in legal documents) from reaching the real world.
- Statistical Engineers: Professionals focused on measuring, modeling, and controlling the variability and biases inherent in ML systems, similar to psychometricians studying human behavior.
- Model Trainers: Human experts hired to curate high-quality training data, develop benchmarks, and meticulously evaluate AI outputs to prevent models from being corrupted by misinformation.
- Meat Shields: Humans who absorb accountability and legal liability when ML systems inevitably fail, providing a "warm body" for apologies or legal consequences that an AI cannot.
- Haruspices: Individuals responsible for investigating ML system failures, sifting through inputs, outputs, and internal states to understand why a model behaved incorrectly.
Ultimately, the piece argues that while AI may automate many tasks, it simultaneously creates new, often challenging, human-centric roles focused on mitigating AI's shortcomings and ensuring its responsible operation.
The Gossip
UK Content Conundrum
A recurring discussion noted that aphyr's site was geo-blocked in the UK, triggering a predictable pattern: users ask why, others provide archive links, and someone explains the likely cause, the UK Online Safety Act possibly affecting sites with any NSFW content. Commenters expressed frustration that this same exchange recurs on every post in the series.
Meat Shield Metaphor Mayhem
The term 'meat shields' sparked debate. Some commenters found it dehumanizing and sociopathic, questioning the author's choice of language. Others defended the term, arguing it is an apt and commonly understood analogy for individuals designated to absorb blame or liability, especially in the context of AI failures where corporations need a human face for accountability.
Taxonomic Takes
One prominent comment challenged the article's proposed taxonomy of new jobs, suggesting it leaned towards 'magical thinking.' The commenter, who identifies with the 'incanter' role, argued that some of the proposed human tasks, particularly those of the 'haruspex,' could and likely would be automated by advanced LLMs themselves, pointing to the field of mechanistic interpretability.