Show HN: Now I Get It – Translate scientific papers into interactive webpages
This Show HN introduces "Now I Get It," an AI-powered tool that converts dense scientific papers into interactive, layperson-friendly web pages. It aims to democratize access to complex research, serving as both a triage tool for experts and an educational bridge for general audiences. The app sparked lively discussion on its utility, AI accuracy, and the business model for LLM-wrapped services, quickly hitting its daily usage cap.
The Lowdown
"Now I Get It" is a new web application designed to simplify the daunting task of understanding scientific papers. By leveraging advanced Large Language Models (LLMs), the app transforms PDF scientific articles into interactive web pages that explain key concepts in plain language.
- Users upload a scientific PDF, and the service generates a shareable, interactive webpage summarizing the article's highlights.
- The creator, jbdamask, developed it as a "pure convenience app" for personal use and for colleagues across scientific fields, aiming to reduce the hours typically needed to digest detailed papers.
- It's also a platform for experimenting with LLMs to "translate scientific articles into software," utilizing "agentic engineering" principles.
- The service is currently free but operates with a daily cap (initially 20, then 100 articles) to manage computational costs associated with LLM usage.
- The app's interface showcases a gallery of processed papers, demonstrating its ability to handle diverse topics from retinal vessel segmentation to philosophical sketches.
This tool highlights a practical application of AI to a common academic bottleneck, aiming to make scientific knowledge more accessible to a broader audience.
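The thread does not reveal the app's internals, but the flow described above (upload a PDF, have an LLM produce a plain-language explanation, serve it as a shareable page) can be sketched roughly. Everything here is hypothetical: `llm_call` stands in for whatever model API the author actually uses, and the HTML rendering is deliberately minimal.

```python
import html


def llm_call(prompt: str) -> str:
    """Stub standing in for a real LLM API call (hypothetical)."""
    return "Plain-language summary of the paper."


def paper_to_page(paper_text: str, title: str) -> str:
    """Turn extracted paper text into a shareable explainer page."""
    summary = llm_call(
        "Explain this scientific paper for a lay audience:\n" + paper_text
    )
    # Escape model output before embedding it in HTML.
    return (
        "<html><body>"
        f"<h1>{html.escape(title)}</h1>"
        f"<p>{html.escape(summary)}</p>"
        "</body></html>"
    )


page = paper_to_page("...extracted PDF text...", "Retinal Vessel Segmentation")
```

A production version would add PDF text extraction, chunking for long papers, and interactive widgets rather than a flat summary, but the cost driver the author cites, one or more LLM calls per uploaded paper, is visible even in this toy shape.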
The Gossip
Limit Lamentations and Pricing Ponderings
Many users eagerly tried the app but quickly ran into the daily processing limit or page-count restrictions, to their disappointment. The creator explained that the cap exists to contain LLM costs and expressed openness to alternative monetization models, such as cost-plus pricing or a donation button to fund more processing. The discussion weighed how to balance free access against sustainability.
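The daily cap users ran into is a standard guard for per-request LLM costs. The source does not describe the implementation; a minimal sketch of such a limiter, assuming a single in-process counter that resets each day, might look like:

```python
import datetime


class DailyCap:
    """Rejects new jobs once the per-day article limit is reached."""

    def __init__(self, limit: int = 100):
        self.limit = limit
        self.day = None   # date the counter was last reset
        self.count = 0    # articles processed so far today

    def allow(self, today: datetime.date) -> bool:
        if today != self.day:
            # New day: reset the counter.
            self.day, self.count = today, 0
        if self.count >= self.limit:
            return False
        self.count += 1
        return True


cap = DailyCap(limit=2)
d = datetime.date(2024, 1, 1)
results = [cap.allow(d), cap.allow(d), cap.allow(d)]
```

Raising the limit from 20 to 100, as the creator did, is a one-line change here; the harder question the thread circles is who pays for the underlying model calls once the cap is gone.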
Accuracy Assessments and AI Anomalies
A critical theme emerged around the quality and accuracy of the AI-generated summaries. Some users found the explanations overly simplified or unhelpful for deep understanding, while others flagged apparently 'hallucinated' content, such as a chart not present in the original paper. The creator acknowledged that even the best LLMs are still 'hit or miss' on quality, while noting they are steadily improving.
Feature Fanfare and Functional Futures
Commenters offered a wealth of suggestions for enhancing the app. Ideas included a 'Deep Research' button to pull in and integrate cited sources, social sharing previews with proper metadata, a light-mode UI, and support for input formats beyond PDF (such as EPUB). The potential for summarizing companies' internal documentation was also raised.
Model Musings and Market Matters
The discussion branched into broader philosophical and business questions surrounding LLM-powered applications. Some expressed concern that such 'convenience apps,' primarily wrapping powerful foundational models, might limit opportunities for specialized innovation, while others viewed it as a new avenue for creativity. The economics of monetizing these services, given the underlying LLM costs, was a recurring sub-theme, contrasting subscription models with alternative pricing strategies.