HN Today

Relicensing with AI-Assisted Rewrite

An open-source project controversially used an AI to rewrite its entire codebase in an attempt to relicense from LGPL to MIT, a change that is notoriously difficult to achieve by conventional means. The AI-assisted rewrite has ignited a complex debate over copyright, "clean room" implementations, and potential GPL violations. The case exposes a significant legal paradox for AI-generated code and could profoundly shape the future of copyleft licensing, making it a critical discussion for the software world.

Score: 5 · Comments: 1 · Highest Rank: #3 · Time on Front Page: 16h · First Seen: Mar 5, 5:00 AM · Last Seen: Mar 5, 9:00 PM

The Lowdown

The open-source project chardet recently attempted to relicense from LGPL to MIT by employing Claude Code, an AI coding assistant, to rewrite its entire codebase. The move, intended to simplify licensing for corporate users, has instead ignited a complex legal and ethical debate over copyright, the nature of derivative works, and the very foundation of open-source licensing in the age of artificial intelligence. It amounts to a real-world stress test of intellectual property law in the era of generative AI.

  • Relicensing Challenges: Traditionally, relicensing open-source projects demands the unanimous consent of all past contributors, an often insurmountable hurdle for mature projects.
  • AI-Assisted Rewrite for Relicensing: chardet's maintainers utilized Claude Code to rewrite their C++ port, subsequently releasing version 7.0.0 under the more permissive MIT license, explicitly aiming to circumvent the original LGPL restrictions.
  • Derivative Work Controversy: The original author, a2mark, vehemently argues this rewrite constitutes a GPL violation. The contention is that even with AI involvement, prior exposure to the original LGPL code makes the new version a derivative work, thus failing to meet the criteria for a "clean room" implementation.
  • The "Clean Room" Dilemma: A traditional "clean room" rewrite mandates two distinct teams (one for specification, one for implementation) to avoid derivative claims. The use of an AI prompted with the original code is seen as bypassing this crucial legal firewall, potentially making the AI output a derivative.
  • Supreme Court Paradox: Coinciding with this controversy, the U.S. Supreme Court declined to hear an appeal regarding copyright for AI-generated material, effectively solidifying a "Human Authorship" requirement. This creates a massive legal paradox for chardet's maintainers:
    • The Copyright Vacuum: If AI-generated code cannot be copyrighted, the maintainers may lack the legal standing to license v7.0.0 under MIT or any license.
    • The Derivative Trap: If the AI's output is deemed a derivative of the original LGPL code, the "rewrite" becomes a direct license violation.
    • The Ownership Void: If the code is truly a new, machine-created work, it could arguably fall into the public domain immediately upon generation, rendering any MIT license grant moot.
  • Threat to Copyleft: Accepting AI-assisted rewrites as a valid relicensing pathway could fundamentally undermine copyleft licenses. Developers might leverage LLMs to "rewrite" GPL-licensed projects into permissive MIT licenses, effectively dismantling the protective mechanisms of copyleft.

The chardet v7.0.0 case thus stands as a pivotal real-world test for the evolving legal and ethical boundaries of AI in software development, particularly its profound implications for open-source licensing models and the future of intellectual property in a rapidly automating world.