Documentation Index

Fetch the complete documentation index at: https://onecli.sh/docs/llms.txt

Use this file to discover all available pages before exploring further.

Overview

OneCLI injects your OpenAI API key into requests to api.openai.com automatically. Agents make standard HTTP requests to the OpenAI API; the gateway adds the Authorization: Bearer header before forwarding. This lets agents use GPT models, DALL-E, embeddings, and other OpenAI services without ever seeing or handling your API key directly.
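For illustration, here is what an agent-side request looks like, using only the Python standard library. The point is what is absent: the agent never attaches an Authorization header or handles the key at all (the request body and model name below are just example values).

```python
import json
import urllib.request

# The agent builds a normal OpenAI chat-completions request.
# No Authorization header appears anywhere in agent code; the
# OneCLI gateway attaches the real key in transit.
body = json.dumps({
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}],
}).encode()

req = urllib.request.Request(
    "https://api.openai.com/v1/chat/completions",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# The outgoing request carries no credentials.
print(req.has_header("Authorization"))  # → False
```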

Setup

1. Go to Connections: Open the OneCLI dashboard and navigate to Connections > LLMs.
2. Add your key: Click Add secret and select OpenAI API Key. Paste your API key (starts with sk-).
3. Verify: OneCLI validates the key against the OpenAI /v1/models endpoint to confirm it works before saving.
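The validation step amounts to one authenticated GET against the models endpoint. A minimal sketch of an equivalent check (the function names here are illustrative, not OneCLI internals):

```python
import urllib.error
import urllib.request

def validation_request(api_key: str) -> urllib.request.Request:
    # Build the GET /v1/models probe used to confirm a key works.
    # (Helper name is illustrative; OneCLI performs an equivalent check.)
    return urllib.request.Request(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

def key_is_valid(api_key: str, opener=urllib.request.urlopen) -> bool:
    # A 200 response means OpenAI accepted the key; an invalid key
    # produces a 401, which urllib raises as HTTPError.
    try:
        with opener(validation_request(api_key)) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False
```

The `opener` parameter is injected only so the check can be exercised without a live key; by default it performs the real HTTP request.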

How it works

  1. Your API key is encrypted and stored by OneCLI (AES-256-GCM at rest)
  2. When an agent sends a request to api.openai.com, the gateway intercepts it
  3. The gateway injects an Authorization: Bearer {key} header
  4. The request is forwarded to OpenAI

Agents never see the raw key. If the key is rotated, update it in the dashboard and all agents pick up the new key automatically.
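The injection in step 3 comes down to setting one header on the outbound copy of the request. A minimal sketch, assuming the gateway works on a plain header dict (the function name and overwrite behavior are illustrative, not OneCLI source):

```python
def inject_credentials(headers: dict, stored_key: str) -> dict:
    # Return a copy of the agent's headers with the real key attached.
    # Any Authorization value the agent supplied is dropped first, so a
    # stale or placeholder credential never reaches OpenAI.
    forwarded = {k: v for k, v in headers.items() if k.lower() != "authorization"}
    forwarded["Authorization"] = f"Bearer {stored_key}"
    return forwarded

agent_headers = {"Content-Type": "application/json"}
print(inject_credentials(agent_headers, "sk-live-example")["Authorization"])
# → Bearer sk-live-example
```

Returning a copy rather than mutating the agent's headers mirrors the rotation behavior above: the key lives only in the gateway's forwarded request, never in agent state.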

Controlling access with rules

Use OneCLI’s rules engine to control how agents use your OpenAI key. For example, you can rate-limit requests, restrict agents to specific models by blocking certain paths, or flag high-cost operations for manual approval. Rules are evaluated before credential injection, so a blocked request never reaches OpenAI.
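As a sketch of that evaluation order, here is a toy path-prefix matcher. The rule shape and the action names are invented for illustration; consult the rules engine documentation for the real syntax.

```python
def evaluate(rules: list, request: dict) -> str:
    # Rules run before credential injection: a "block" verdict means the
    # request is rejected with no key attached and nothing is forwarded
    # to OpenAI. First matching rule wins; no match means allow.
    for rule in rules:
        if request["path"].startswith(rule["path_prefix"]):
            return rule["action"]  # "block" or "review"
    return "allow"

rules = [
    {"path_prefix": "/v1/images", "action": "block"},        # e.g. disallow DALL-E
    {"path_prefix": "/v1/fine_tuning", "action": "review"},  # flag for approval
]

print(evaluate(rules, {"path": "/v1/images/generations"}))  # → block
print(evaluate(rules, {"path": "/v1/chat/completions"}))    # → allow
```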