Why We Built OneCLI
Every AI agent framework has the same blind spot: secrets. Your agent needs an API key to call Stripe, a token to post to Slack, a password for the database. Where do those credentials live?
The status quo is broken
Most teams end up in one of two places. Either they paste secrets into .env files and hope nobody commits them, or they build a bespoke secrets pipeline that takes weeks to set up and is brittle to maintain.
Neither works well for agents. An agent that orchestrates ten services needs ten sets of credentials, each of which must be rotated, scoped, and audited. The manual approach collapses under that weight.
What we wanted
We wanted something simple: a single CLI command that injects secrets into any tool an agent calls, without the agent ever seeing the raw keys. Store once, inject anywhere.
- No .env files scattered across repos
- No secrets in agent memory or context windows
- One encrypted vault, one dashboard, one Docker container
How it works
OneCLI sits between your agent and the services it calls. When an agent needs to hit an API, it goes through oc instead of calling the service directly. The CLI resolves the right credentials from the vault and injects them into the request.
The agent never touches the secret. It just says "call Stripe" and oc handles the rest.
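That injection pattern can be sketched in a few lines of Python. To be clear, this is an illustration, not OneCLI's actual code: the in-memory VAULT dict and the call_service helper are hypothetical stand-ins for the encrypted vault and the oc resolution step.

```python
# Hypothetical sketch of the resolve-and-inject pattern described above.
# VAULT stands in for the encrypted vault; in OneCLI the lookup would go
# through oc, not an in-process dict.
VAULT = {"stripe": "sk_live_example"}

def call_service(service: str, request: dict) -> dict:
    """Resolve the credential for `service` and attach it to the request.

    The caller (the agent) supplies only a service name. The raw secret
    is resolved here, on the other side of the boundary, so it never
    enters the agent's memory or context window.
    """
    secret = VAULT[service]
    prepared = dict(request)
    prepared["headers"] = {"Authorization": f"Bearer {secret}"}
    # In a real proxy, the HTTP call would happen here; we just return
    # the prepared request to show where the credential was injected.
    return prepared

# The agent effectively says "call Stripe" and nothing more:
prepared = call_service("stripe", {"url": "https://api.stripe.com/v1/charges"})
```

The design point is the boundary: the agent's request spec contains no secret before the call, and the injected header exists only inside the proxy layer.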
What's next
We're building in the open. The CLI is open source, the vault is encrypted by default, and we're shipping fast. If you're building with AI agents, we'd love your feedback.