HN
Today

Why does AI tell you to use Terminal so much?

This piece delves into why AI-powered assistants like ChatGPT frequently recommend Terminal commands for Mac troubleshooting, even when more user-friendly GUI options exist. The author dissects several examples of AI-generated advice, revealing its inaccuracies, lack of safeguards, and potential to mislead users. It highlights a significant usability and safety gap in AI's current practical advice for everyday computer users.

Score: 5
Comments: 0
Highest Rank: #2
Time on Front Page: 1h
First Seen: Mar 11, 8:00 AM
Last Seen: Mar 11, 8:00 AM

The Lowdown

The article critiques the stark difference between AI-generated and human troubleshooting advice for Macs, noting AI's overwhelming preference for Terminal commands over graphical user interface (GUI) applications. The author attributes this tendency to the token-based nature of Large Language Models (LLMs): command-line instructions are easy to represent as plain text, while complex, multi-step GUI interactions are hard to verbalize.

Key issues identified with AI's command-line advice include:

  • Users learn little beyond rote command execution, gaining no real understanding or insight.
  • Terminal commands often lack the safety features built into GUI apps, increasing the risk of damage or misconfiguration.
  • Command output can be excessively verbose, making it difficult to parse.
  • Blindly copy-pasting commands from AI is a known vector for malware installation.
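The second point above can be made concrete with a small sketch. This is an illustration of the article's general claim, not an example it gives: deleting a file in Finder moves it to the Trash, where it can be recovered, while the Terminal equivalent destroys it on the spot.

```shell
# Illustrative sketch (assumption: not an example quoted from the article).
# Finder's delete moves a file to the Trash for later recovery; rm does not.
scratch=$(mktemp)            # throwaway file standing in for user data
rm "$scratch"                # no prompt, no Trash, no undo
test ! -e "$scratch" && echo "gone for good"
```

The asymmetry is the point: the GUI path has a built-in safety net, the command has none.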

The author provides a detailed critique of ChatGPT's recommended steps for detecting malicious software on a Mac, exposing numerous inaccuracies:

  • ChatGPT's log show commands for XProtect history and remediation events misrepresent log data sources, use overly broad predicates, and assume log retention periods that are impractical or impossible.
  • Its method for verifying XProtect definition updates via system_profiler is deemed useless, yielding no actionable version or date information.
  • A suggestion to list XProtect Remediator modules is shown to be an unnecessary command, providing no actual security validation.
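The kind of `log show` query under critique can be sketched as follows. The predicate string and the 30-day window here are illustrative assumptions, not text quoted from ChatGPT or the article, and the command itself only runs on macOS, so the sketch guards for that:

```shell
# Hypothetical reconstruction of an AI-suggested XProtect log query;
# the predicate and the 30-day window are illustrative assumptions.
cmd=(log show --predicate 'subsystem CONTAINS "XProtect"' --last 30d)

if [ "$(uname)" = "Darwin" ]; then
  # Even on a Mac this over-promises: the unified log rarely retains
  # anything close to 30 days of entries, and the broad predicate
  # buries any relevant lines in verbose output.
  "${cmd[@]}"
else
  echo "macOS only: ${cmd[*]}"
fi
```

A shorter window such as `--last 1h` keeps the output more manageable, but the article's deeper objection stands: the query still doesn't tell an ordinary user whether their Mac is actually protected.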

In conclusion, the article argues that AI's Mac troubleshooting recommendations are predominantly command-line oriented, despite more accessible GUI alternatives. Critically, these AI-generated procedures often fail to achieve their stated goals, offer no educational value, and pose a significant risk by encouraging users to execute potentially dangerous or misleading commands without comprehension.