Show HN: Respectify – a comment moderator that teaches people to argue better
Respectify, an AI-powered comment moderator, aims to elevate online discourse by teaching users to argue better, flagging logical fallacies, tone issues, and even 'dog whistles' along the way. While the promise of fostering healthier communication intrigued the HN crowd, early demos sparked significant debate.
The Lowdown
Respectify is a new AI-driven platform developed by internet veterans David Millington and Nick Hodges, designed to improve the quality of online discussions. Frustrated by the state of internet comments, they built a tool that moves beyond simple deletion and banning, aspiring to educate users on better communication practices.
- Moderation with Education: Instead of just rejecting bad comments, Respectify provides feedback to the commenter, explaining issues like logical fallacies, inappropriate tone, irrelevance, or the use of coded language, allowing them to edit and resubmit.
- Configurable and Automated: The system is highly tunable, allowing site owners to adjust sensitivity levels for various checks, from permissive to highly stringent, with the goal of automating moderation entirely.
- Specific Issue Detection: It targets common problems such as logical fallacies (e.g., false dichotomy, strawman), tone issues, off-topic comments, low-effort posts, and subtle 'dog whistles' or coded language.
- Spam Innovation: Respectify uses AI to understand context and intent, aiming to catch sophisticated spam more effectively than traditional methods.
The creators hope that by integrating moderation with an educational component, Respectify can quietly coach better thinking over time, leading to more productive and respectful online environments.
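The moderate-explain-resubmit loop described above can be sketched in a few lines. Everything here is hypothetical — Respectify's actual API, issue categories, and checks are not public — and the trivial length and all-caps checks are toy stand-ins for the AI analysis; the `min_length` parameter stands in for the tunable sensitivity the creators describe:

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical issue categories mirroring the checks described above.
class Issue(Enum):
    LOGICAL_FALLACY = "logical fallacy"
    TONE = "tone"
    OFF_TOPIC = "off-topic"
    LOW_EFFORT = "low effort"
    CODED_LANGUAGE = "coded language"

@dataclass
class ModerationResult:
    accepted: bool
    feedback: list[str] = field(default_factory=list)

def moderate(comment: str, min_length: int = 20) -> ModerationResult:
    """Toy stand-in for the AI checks: instead of silently rejecting,
    return educational feedback the commenter can act on."""
    feedback = []
    if len(comment.strip()) < min_length:
        feedback.append(f"{Issue.LOW_EFFORT.value}: add substance before resubmitting")
    if comment.isupper():
        feedback.append(f"{Issue.TONE.value}: all-caps reads as shouting")
    return ModerationResult(accepted=not feedback, feedback=feedback)

# The commenter sees the feedback, edits, and resubmits:
first = moderate("NO WAY")       # rejected, with two pieces of feedback
second = moderate("I disagree because the study you cite excludes part-time workers.")
```

The key design point, per the creators, is that rejection always comes with an explanation, turning moderation into a feedback loop rather than a dead end.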
The Gossip
Dog Whistle Debacles
The demo's 'dog whistle' detection feature quickly became a lightning rod for criticism, with users sharing instances where seemingly innocuous comments were flagged. For example, noting that 'Die Hard' features a 'Christmas party' was deemed a 'dog whistle' hinting at Christian themes, and critical comments on Universal Basic Income were flagged for negative tone or dog whistles. This led to strong concerns that the AI could easily be biased, stifle legitimate debate, and inadvertently create echo chambers by penalizing diverse or challenging viewpoints.
AI's Moderation Misfires
Beyond the controversial dog whistle detection, commenters questioned the AI's general effectiveness and the broader implications of automated language monitoring. One user found the spam detection surprisingly dependent on the presence of 'www' in a URL, suggesting a superficial understanding. Skepticism arose that AI moderation would simply push humans to evolve new forms of coded language ('unalived' instead of 'killed'), making online discourse stranger rather than genuinely improving it — an ironically 'doubleplusgood' outcome akin to the Newspeak of Orwell's 1984.
Developer Dialogue & Defensibility
The creators, particularly Nick Hodges, actively engaged with the Hacker News community, acknowledging feedback and expressing gratitude for users testing the demo. They suggested that users experiencing issues with overzealous flagging (like the dog whistle detection) could adjust the system's settings, though commenters noted that this configurability wouldn't be available to end-users of a site deploying Respectify. This interaction provided valuable insight into the developers' intent and willingness to refine the product based on community input, even when facing sharp critique.