I'm scared about biological computing
An AI developer shares their profound unease about biological computing, specifically lab-grown neurons trained to play Doom, and questions where the line of consciousness should be drawn. This provocative thought experiment sparked a lively Hacker News debate on defining awareness, the scientific validity of the experiments, and the societal impact of information consumption.
The Lowdown
The author, an experienced AI developer, describes a growing sense of dread stemming from recent advancements in biological computing. While comfortable with the mathematical underpinnings of traditional AI, they find the concept of training literal human neurons to perform tasks deeply unsettling.
- The catalyst for this fear was a video showcasing lab-grown neurons successfully trained to play the video game Doom, seemingly outperforming even the author.
- This led to a core philosophical dilemma: If AI is discounted as unconscious because it's merely a "next token predictor," what does it mean for neurons that react to visual data, interpret it, and learn? Are they "seeing" and potentially conscious?
- The author points out that 200,000 neurons, while seemingly few, are more than simpler organisms like jellyfish or worms possess, prompting the question: Where do we draw the line for consciousness?
- They express a dystopian vision where commercial incentives will inevitably drive further development of bio-computing, regardless of ethical concerns, fearing the creation of "hells" for these nascent biological systems.
- The piece concludes with the author's discomfort that this profound ethical and philosophical issue isn't being discussed more widely.
The author's "MindDump" serves as an open-ended reflection, acknowledging the lack of definitive answers but emphasizing the urgent need for collective contemplation on the implications of biological computing.
The Gossip
Consciousness Conundrums
The most significant discussion revolves around the definition of consciousness and its ethical implications. Many commenters grapple with where to draw the line: If human neurons in a dish can learn, are they conscious? Comparisons are made to LLMs, animals, and even inanimate objects. The "Pig that Wants to be Eaten" thought experiment is frequently cited, highlighting the ethical dilemmas of creating beings designed to be exploited. Some argue that without a clear scientific definition, any system could theoretically be considered conscious, while others dismiss the idea of consciousness in such simple biological systems or even in humans themselves.
Skepticism and Scientific Scrutiny
A substantial portion of the comments critically examine the scientific claims and the author's interpretation of the Doom-playing neurons. Several users point out that the neuronal setup involves a significant PyTorch stack and a convnet-based encoder, suggesting that the neurons are not directly "seeing" or autonomously learning in the way the author implies. They argue that the "learning" might be heavily scaffolded by the digital AI, reducing the biological component to a sophisticated noise generator or a reflex circuit rather than a conscious entity. The author responds, clarifying their understanding of the underlying paper.
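To make the commenters' objection concrete, here is a minimal sketch of what such a digital scaffold might look like: a PyTorch convnet that compresses a game frame into a small stimulation pattern before anything reaches the biological component. Every name, layer shape, and the electrode-count parameter are illustrative assumptions, not details from the actual paper.

```python
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Hypothetical convnet encoder: game frame -> per-electrode
    stimulation intensities. Architecture and sizes are invented
    for illustration only."""

    def __init__(self, n_electrodes: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse spatial dims
        )
        self.head = nn.Linear(32, n_electrodes)

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        # frame: (batch, 1, H, W) grayscale game frame
        x = self.features(frame).flatten(1)
        # Sigmoid squashes outputs into [0, 1] stimulation intensities.
        return torch.sigmoid(self.head(x))

encoder = FrameEncoder()
frame = torch.rand(1, 1, 84, 84)  # stand-in for a Doom frame
stim = encoder(frame)             # (1, 8) stimulation pattern
```

The skeptics' point is that in a loop like this, most of the "seeing" and feature extraction happens in the digital stack; the dish only ever receives a heavily pre-digested signal.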
The YouTube Effect
Sparked by a commenter's critique that the author's fears were exaggerated due to consuming YouTube content rather than academic papers, a tangent emerged about the nature of information consumption. This discussion debated whether platforms like YouTube inherently lead to misinformation or if the problem lies in a lack of critical thinking, regardless of the medium. Commenters shared personal anecdotes about how different media affect their understanding and highlighted both the pitfalls and potential benefits of platforms like YouTube for learning.