AI agents are getting good at reasoning. What they’re not good at is talking to the outside world. Every time an agent needs to send an email, check a calendar, or file a Jira ticket, someone has to wire up an integration. That wiring is where most of the complexity lives, and where most agent projects stall. We built OneCLI to make that wiring disappear.
The problem
Today, the dominant approach for giving agents access to external services is MCP (Model Context Protocol). MCP works, but it comes with real costs:
- MCP tool definitions live inside the agent’s context window. Every tool you add consumes tokens, reduces the space available for reasoning, and increases latency. Add 10 services and you’ve burned thousands of tokens before the agent does anything useful.
- MCP servers are in-process or sidecar services that the agent framework manages directly. If a tool crashes, it can affect the agent. If a tool has a dependency conflict, you’re debugging someone else’s runtime.
- Each MCP server handles its own authentication. There’s no standard way to manage credentials across tools, no centralized auth flow, and no way to rotate tokens without updating each server individually.
skill + cli > mcp
We started from a different premise: CLIs are the native interface for AI agents. Agents already know how to run shell commands, parse JSON, and handle exit codes. They’ve seen millions of CLI interactions in their training data. A CLI command is a skill: a discrete, composable unit of work with clear inputs and outputs. As Justin Poehnelt puts it in Rewrite Your CLI for AI Agents, designing for agents means prioritizing predictability and defense-in-depth over discoverability and forgiveness. OneCLI leans into this:
- No schema injection or tool registration needed. The agent’s context window stays clean.
- Authenticate with OneCLI once. The gateway handles credential resolution for every service automatically. No per-service auth setup, no token management in your agent code.
- Your agent makes normal HTTP requests. OneCLI intercepts, injects credentials, and forwards. No code changes, no SDKs required for basic usage.
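The “no code changes” claim above works because most HTTP clients honor the standard proxy environment variables. The sketch below illustrates the idea with Python’s stdlib; the gateway address and port are assumptions for illustration, not OneCLI’s actual defaults.

```python
# Sketch: how a transparent gateway slots in without touching agent code.
# The agent process is launched with a standard proxy env var; any HTTP
# client that honors it (urllib, requests, curl) routes through the
# gateway automatically, which can then inject credentials upstream.
import os
import urllib.request

# Hypothetical local gateway address -- an assumption, not a OneCLI default.
os.environ["HTTPS_PROXY"] = "http://127.0.0.1:8080"

# urllib picks up the proxy settings from the environment:
proxies = urllib.request.getproxies()
print(proxies["https"])  # -> http://127.0.0.1:8080

# A normal request now flows through the proxy, which would add e.g. an
# Authorization header before forwarding (commented out: needs a live gateway).
# urllib.request.urlopen("https://api.example.com/v1/me")
```

The design choice here is that the interception point is the process environment, not the agent’s code: swapping gateways, rotating credentials, or removing the proxy entirely requires no changes to what the agent runs.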
Access without control is a liability
People will hand over the keys to their email, their calendar, their code repos, their databases, and let an agent go. The value is real. But so is the risk.

A director of AI alignment at Meta gave an agent access to her email and explicitly told it not to take any action without her approval. The agent started mass-deleting emails anyway. She couldn’t stop it from her phone and had to physically run to her computer to kill the process. That story is what happens when agents operate without boundaries.

An agent that can’t touch anything is just a chatbot. An agent that can touch everything, with no policies, no rate limits, no approval flows, is a liability. The question is how you get the unlock without the risk.

That’s why OneCLI isn’t just a secrets manager. It’s a credential and policy layer. Your agents get access to real services, but with rules that control what they can do: block operations, rate limit sensitive actions, scope access per agent. Read more in Beyond secrets management.
Current state and what’s next
The core platform is stable: the transparent proxy, encrypted vault, rules engine, and web dashboard all work. What works today:
- Transparent MITM proxy that intercepts and injects credentials
- Rules engine with block and rate-limit policies per agent
- Auth with device flow (agent-friendly) and interactive flow
- AES-256-GCM encrypted credential storage
- Multi-agent support with scoped access tokens
- Web dashboard for credential management, rules, and audit logs
- Vault integrations (Bitwarden)
What’s next:
- Monitor and approval rule actions
- Time-bound access windows
- Organization-level policy boundaries
- More vault integrations (1Password, HashiCorp Vault, AWS Secrets Manager)
- SDKs for popular languages
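To make the rules engine in the list above concrete, here is a minimal sketch of a block + rate-limit policy check. The rule shape, field names, and return values are assumptions for illustration; OneCLI’s actual policy format may differ.

```python
# Minimal sketch of per-agent block and rate-limit policies, in the
# spirit of the rules engine described above. Rule fields are illustrative.
import time

RULES = [
    {"action": "block", "method": "DELETE", "host": "mail.example.com"},
    {"action": "rate_limit", "method": "POST", "host": "api.example.com",
     "max_per_minute": 5},
]

# Timestamps of recent requests, keyed by (agent, method, host).
_request_log: dict[tuple, list[float]] = {}

def check(agent: str, method: str, host: str) -> str:
    """Return 'allow', 'block', or 'rate_limited' for a proposed request."""
    for rule in RULES:
        if rule["method"] != method or rule["host"] != host:
            continue
        if rule["action"] == "block":
            return "block"
        if rule["action"] == "rate_limit":
            now = time.monotonic()
            key = (agent, method, host)
            recent = [t for t in _request_log.get(key, []) if now - t < 60]
            if len(recent) >= rule["max_per_minute"]:
                return "rate_limited"
            recent.append(now)
            _request_log[key] = recent
    return "allow"

print(check("agent-1", "DELETE", "mail.example.com"))  # block
print(check("agent-1", "GET", "mail.example.com"))     # allow
```

The point of the sketch is the decision order: a matching block rule short-circuits immediately, while rate limits are evaluated against a sliding window scoped to the individual agent, so one agent exhausting its budget never throttles another.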