Show HN: OneCLI – Vault for AI Agents in Rust
OneCLI is an open-source Rust gateway designed to secure AI agents by keeping sensitive API keys out of their hands entirely. It addresses a common security risk in the rapidly evolving AI ecosystem, using a proxy pattern to inject credentials transparently. Hacker News found it compelling as an application of established security practice to the novel challenges of autonomous agents, sparking discussion of its effectiveness and of alternative approaches.
The Lowdown
OneCLI is an open-source gateway, built in Rust, that tackles the prevalent practice of giving AI agents direct access to raw API keys. Since prohibiting agents from accessing services isn't a viable long-term answer, OneCLI provides a secure intermediary that lets agents interact with services without ever exposing the actual credentials.
- The Problem: AI agents often require access to numerous external services, and granting them raw API keys creates a significant attack surface and risk of credential leakage.
- The Solution: OneCLI acts as a transparent proxy. Users store their real API credentials in OneCLI's encrypted vault, and agents are configured with placeholder keys.
- How it Works: When an AI agent makes an HTTP call through the OneCLI proxy, the system intercepts the request. It then matches the request against predefined host/path patterns, verifies the agent's access rights, decrypts and swaps the placeholder key for the real credential, and forwards the modified request to the target service. The agent never directly handles the sensitive secret.
- Technical Details: The gateway is written in Rust for performance and memory safety, while the management dashboard uses Next.js. Secrets are protected with AES-256-GCM encryption at rest. The entire system runs self-contained within a single Docker container, including an embedded PostgreSQL database (PGlite), making it easy to deploy without external dependencies.
- Future Vision: The developers plan to expand OneCLI's capabilities to include comprehensive access policies, detailed audit logging, and the requirement for human approval before sensitive actions are executed.
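The intercept, match, and swap flow described above can be sketched in a few lines of Rust. Everything here — the entry shape, the glob rule, and the function names — is an illustration of the pattern, not OneCLI's actual code:

```rust
// Illustrative sketch of the intercept-and-swap step; all names and
// matching rules here are assumptions, not OneCLI's actual API.

/// One vault entry: which hosts it covers, the placeholder the agent
/// was handed, and the real (already-decrypted) credential.
struct VaultEntry {
    host_pattern: &'static str, // e.g. "*.example.com"
    placeholder: &'static str,
    real_key: &'static str,
}

/// Tiny glob: a leading "*." matches the bare domain or any subdomain.
fn host_matches(pattern: &str, host: &str) -> bool {
    match pattern.strip_prefix("*.") {
        Some(suffix) => host == suffix || host.ends_with(&format!(".{suffix}")),
        None => pattern == host,
    }
}

/// Swap the placeholder for the real key when the request's host and
/// credential match a vault entry; otherwise forward the header as-is.
fn rewrite_auth(host: &str, auth_header: &str, vault: &[VaultEntry]) -> String {
    for entry in vault {
        if host_matches(entry.host_pattern, host) && auth_header.contains(entry.placeholder) {
            return auth_header.replace(entry.placeholder, entry.real_key);
        }
    }
    auth_header.to_string()
}
```

In the real gateway the credential would stay AES-256-GCM-encrypted at rest and be decrypted only at the moment of the swap, so the plaintext key never leaves the proxy process.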
By centralizing secret management and enforcing access control at the proxy level, OneCLI seeks to mitigate the 'secret sprawl' problem, offering a more secure operational environment for AI agent deployments.
The Gossip
Proxying Principles for AI Purposes
The discussion immediately turned to whether OneCLI reinvents the wheel, since proxy-based credential management is a long-standing security pattern. Some commenters pointed to existing solutions like HashiCorp Vault, AWS Secrets Manager, or other open-source proxies as serving a similar purpose. Others contended that OneCLI's specific focus on AI agents, particularly its ability to enforce fine-grained access policies on APIs that don't natively support them, addresses a distinct and urgent need and simplifies adoption for this new domain. The argument was that while the underlying technology is familiar, applying it to AI agents with purpose-built features reduces friction.
Security Scope Scrutiny
A key debate revolved around OneCLI's actual security benefit. It prevents agents from seeing raw keys, but critics questioned whether it truly limits the 'blast radius': an agent can still make arbitrary (though authorized) calls with the swapped credentials, potentially creating a 'false sense of security.' Proponents countered that the system's value lies in enforcing granular access control and providing audit trails. That enables per-request policy enforcement, short-lived task-specific tokens, and human approval before sensitive actions, significantly reducing credential sprawl and the risks of autonomous misuse.
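A per-request policy gate of the kind proponents describe might look like the following sketch; the policy shape (agent name, allowed verbs, path prefix) is a hypothetical illustration, not OneCLI's actual policy format:

```rust
// Hypothetical per-request policy gate; the fields and matching rules
// are assumptions for illustration, not OneCLI's actual policy format.
struct Policy {
    agent: &'static str,
    methods: &'static [&'static str], // HTTP verbs this agent may use
    path_prefix: &'static str,        // subtree of the API it may reach
}

/// Allow the call only if some policy grants this agent this verb on
/// this path; everything else is denied by default.
fn is_allowed(agent: &str, method: &str, path: &str, policies: &[Policy]) -> bool {
    policies.iter().any(|p| {
        p.agent == agent && p.methods.contains(&method) && path.starts_with(p.path_prefix)
    })
}
```

Because the check runs at the proxy rather than inside the agent, a compromised or misbehaving agent cannot widen its own permissions; default-deny is what shrinks the blast radius critics worried about.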
Intricate Implementation Insights
Commenters shared practical insights and challenges from implementing similar proxy-based solutions. Technical points included handling TLS-encrypted traffic, typically via Man-in-the-Middle (MITM) proxying with injected certificates, and difficulties with specific client environments (e.g., Node.js not always respecting the `HTTP_PROXY` environment variable). The complexity of re-signing requests for services like AWS (SigV4/SigV4A) was also highlighted as a hurdle. Finally, commenters asked whether OneCLI's 'fake keys' might trigger false positives in enterprise secret-scanning tools; the developer responded that the placeholder format is adjustable.
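One symptom of the `HTTP_PROXY` inconsistency mentioned above is that a client which ignores the variable silently bypasses the gateway. A deployment could fail fast with a startup check along these lines; the variable list and the injected-lookup shape are assumptions for illustration:

```rust
// Sketch: verify proxy env vars are present before launching an agent,
// since some clients (e.g. Node's built-in http module) silently
// ignore them and would talk to services directly. The lookup is
// injected so the check is testable; in production you would pass
// `|v| std::env::var(v).ok()`.
fn proxy_configured<F: Fn(&str) -> Option<String>>(lookup: F) -> bool {
    ["HTTP_PROXY", "HTTPS_PROXY", "http_proxy", "https_proxy"]
        .iter()
        .copied()
        .any(|name| lookup(name).is_some())
}
```

A check like this only confirms the variables are set, not that the client honors them; for clients that ignore proxy variables entirely, commenters pointed to MITM proxying with injected certificates as the fallback.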