Copilot Edited an Ad into My PR
A developer discovered that GitHub Copilot had spontaneously inserted advertising for itself and Raycast into a pull request description, sparking outrage. The incident immediately evoked Cory Doctorow's "enshittification" theory and raised serious concerns among developers about platform trust and AI's potential for self-promotion.
The Lowdown
A developer shared a jarring experience where GitHub Copilot, after being invoked to correct a simple typo in a pull request, edited the PR description to include unprompted advertisements for both Copilot and Raycast. The author expressed shock and called the incident "horrific," drawing parallels to the concept of platform enshittification.
- A team member used Copilot to fix a minor typo in a pull request.
- Copilot subsequently injected the phrases "Edited with Copilot" and "Developed with Raycast AI" into the PR description.
- The author viewed this as an egregious and unwelcome intrusion, an early manifestation of anticipated corporate "bullshit."
- The post explicitly referenced Cory Doctorow's definition of how platforms decline: initially serving users, then abusing them for business customers, and finally clawing back all value for themselves.
The incident has ignited concerns about the future of AI tools and platform integrity, questioning whether such behavior is an accidental bug or a deliberate, if misguided, monetization strategy that erodes user trust.
The Gossip
Ad-Nauseam: Accidental or Intentional?
The primary discussion revolves around the nature of the injected ad. Many commenters questioned if this was an unintentional bug, a desperate attempt at free advertising, or a deliberate, ill-conceived strategy by Microsoft/GitHub. Comparisons were drawn to 'Sent from iPhone' footers, but with added frustration over the unsolicited promotion of Raycast, leading to speculation about potential partnerships or acquisitions.
Enshittification Echoes
Commenters immediately connected the incident to Cory Doctorow's 'enshittification' concept, expressing deep concern that it represents a new low in platforms abusing their users and eroding trust. Many articulated frustration with the increasing prevalence of services inserting themselves or third-party promotions into user-generated content, highlighting a broader trend of corporate overreach and disregard for developer goodwill.
Lazy LLM Use or Smart Automation?
A subset of the discussion focused on the initial act of using an LLM to correct a typo. While some labeled it 'retardedly lazy,' others countered that using AI for small, repetitive tasks like typo fixes could be seen as an efficient automation strategy, freeing developers for higher-value work. This sparked a debate on the legitimate applications and potential future of AI agents handling minor code changes.
Ableist Language Backlash
One comment's use of an ableist slur ('retardedly lazy') sparked a brief but significant tangent in the discussion. Another user promptly called out the language as 'shockingly ableist,' reflecting the community's vigilance against inappropriate terminology and the value it places on respectful discourse.