Can I Run AI Locally?
CanIRun.ai offers a web-based utility to estimate if your machine can run various AI models locally, simplifying a complex hardware compatibility challenge. This tool taps into the widespread interest in local AI inference, resonating with users who remember similar system requirement checkers for video games. The Hacker News community appreciates the concept but is quick to offer detailed feedback on data accuracy and feature expansions.
The Lowdown
CanIRun.ai is a practical web application designed to help users determine their system's capability to run different AI models locally. It provides estimated performance metrics (tokens per second) based on selected hardware configurations.
- Users interact with the tool by choosing their CPU, GPU, and RAM specifications from dropdown menus.
- The platform then presents a list of AI models, indicating their estimated performance on the chosen hardware.
- It leverages WebGPU for some estimations, while acknowledging that actual performance may vary from these browser-based predictions.
- The tool aims to demystify the hardware requirements for local AI model execution, a common pain point for developers and enthusiasts.
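As a rough illustration of the kind of estimate such a checker performs (a sketch under assumed heuristics, not CanIRun.ai's actual method), a model's memory footprint can be approximated from its parameter count and quantization level, then compared against available RAM or VRAM:

```python
def estimated_model_memory_gb(params_billion: float, bits_per_weight: int,
                              overhead_factor: float = 1.2) -> float:
    """Rough memory estimate: weight storage plus ~20% assumed overhead
    for the KV cache and runtime buffers (the overhead factor is a guess)."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead_factor / 1e9

def fits(params_billion: float, bits_per_weight: int, available_gb: float) -> bool:
    """Does the model plausibly fit in the given memory budget?"""
    return estimated_model_memory_gb(params_billion, bits_per_weight) <= available_gb

# A 7B model at 4-bit quantization needs roughly 4.2 GB under these assumptions:
print(round(estimated_model_memory_gb(7, 4), 1))  # → 4.2
```

Real tools layer bandwidth-based tokens-per-second estimates on top of a fit check like this, which is where the browser-based predictions and real-world numbers tend to diverge.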
By offering a straightforward interface to assess hardware compatibility, CanIRun.ai serves as a valuable preliminary guide for individuals contemplating running AI models on their personal computers.
The Gossip
Data Deficiencies & Desired Directions
Commenters quickly pointed out gaps in the tool's hardware database and suggested critical feature additions. Many noted the absence of options for higher RAM configurations (e.g., 256GB for an M3 Ultra) or specific professional GPUs (like the RTX Pro 6000). Users also wanted to enter their exact RAM amount directly rather than pick from a dropdown, to reverse the lookup (select a model first and see which hardware can run it), and even to include low-power devices like the Raspberry Pi.
Pessimistic Predictions & Pre-existing Parallels
Several users found the tool's performance estimates to be overly conservative or significantly inaccurate when compared to their real-world experience, with some claiming the numbers were 'at least one magnitude off.' The discussion also drew parallels to classic 'Can I Run It' sites for video games and compared the tool to existing command-line utilities like `llmfit`, noting that `llmfit`'s automatic system detection offers a notable advantage.
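The automatic detection commenters praised in `llmfit` can be approximated even with the standard library; the sketch below is a hedged, best-effort illustration of the idea (not `llmfit`'s implementation), and it deliberately omits GPU detection, which typically requires vendor tooling such as `nvidia-smi`:

```python
import os
import platform

def detect_system() -> dict:
    """Best-effort hardware detection using only the standard library.
    GPU detection is omitted: it generally needs vendor-specific tools."""
    info = {
        "machine": platform.machine(),   # e.g. 'x86_64' or 'arm64'
        "cpu_count": os.cpu_count(),
    }
    # Total physical RAM via POSIX sysconf; not available on Windows.
    if hasattr(os, "sysconf") and "SC_PHYS_PAGES" in os.sysconf_names:
        info["ram_gb"] = round(
            os.sysconf("SC_PHYS_PAGES") * os.sysconf("SC_PAGE_SIZE") / 1e9, 1
        )
    return info

print(detect_system())
```

Skipping the dropdown menus entirely, as a CLI can, removes the data-coverage problem the web tool was criticized for: there is no hardware list to be incomplete.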
Unquestionable Utility & Uncommon Use-cases
Despite the critiques, the overall sentiment regarding the tool's concept was highly positive. Many called it a 'cool thing' or 'awesome' and said it was something they had long wanted: a simple way to understand local AI model requirements. Some even extended its utility to broader scenarios, such as aiding hardware purchasing decisions or exploring how to run models on a separate, dedicated machine.