Provider keys
Hyponema is provider-neutral. Teams can bring their own STT, LLM, and TTS credentials.
Add a key in Hyponema
Open Settings > Provider keys, paste the provider secret into the matching row, and save. Hyponema never returns the plaintext secret after save.
One credential can power multiple runtime providers. For example, the OpenAI key is used for OpenAI GPT, OpenAI Speech, and OpenAI TTS; the Groq key is used for Groq LLM, Groq STT, and Groq TTS.
| Hyponema row | Runtime providers that use it | Paste this secret |
|---|---|---|
| Anthropic | Anthropic Claude LLM | Anthropic API key |
| OpenAI | OpenAI GPT LLM, OpenAI Speech STT, OpenAI TTS | OpenAI secret API key |
| Google | Google Gemini LLM | Gemini API key from Google AI Studio |
| Google Cloud | Google Cloud Speech-to-Text, Google Cloud Text-to-Speech | Full Google Cloud service-account JSON key |
| Groq | Groq LLM, Groq Whisper STT, Groq TTS | Groq API key |
| Mistral | Mistral LLM | Mistral API key |
| OpenRouter | OpenRouter LLM | OpenRouter API key |
| Together AI | Together AI LLM | Together AI project API key |
| Deepgram | Deepgram STT, Deepgram Aura TTS | Deepgram project API key |
| AssemblyAI | AssemblyAI STT | AssemblyAI API key |
| Cartesia | Cartesia TTS, Cartesia voice cloning | Cartesia API key |
| ElevenLabs | ElevenLabs STT, ElevenLabs TTS, ElevenLabs voice cloning | ElevenLabs API key |
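The shared-credential fan-out above — one stored key row unlocking several runtime providers — can be sketched as a simple lookup. This is an illustrative mapping over a few rows from the table, not Hyponema's internal schema:

```python
# Hypothetical sketch: one stored provider-key row powers several
# runtime providers. Names mirror the table above.
KEY_ROW_TO_PROVIDERS = {
    "OpenAI": ["OpenAI GPT LLM", "OpenAI Speech STT", "OpenAI TTS"],
    "Groq": ["Groq LLM", "Groq Whisper STT", "Groq TTS"],
    "Deepgram": ["Deepgram STT", "Deepgram Aura TTS"],
}

def providers_for_key_row(row: str) -> list:
    """Return the runtime providers a stored key row powers."""
    return KEY_ROW_TO_PROVIDERS.get(row, [])
```

Adding a single Groq key, for example, enables all three Groq runtime providers at once.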
Custom LLM endpoints do not use a shared provider-key row. Configure the endpoint URL, model, and auth header on the agent’s custom LLM settings.
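A custom LLM endpoint configuration carries roughly these three pieces of information. The field names below are illustrative, not Hyponema's actual settings keys — use whatever labels the agent's custom LLM settings screen shows:

```python
# Hypothetical shape of a per-agent custom LLM endpoint config.
# URL, model name, and header value are placeholders.
custom_llm = {
    "endpoint_url": "https://llm.example.internal/v1/chat/completions",
    "model": "my-custom-model",
    "auth_header": {"Authorization": "Bearer <endpoint-specific token>"},
}
```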
Get provider keys
Use the provider’s own dashboard or console to create a production key for Hyponema.
| Provider | Current official setup path | Hyponema notes |
|---|---|---|
| OpenAI | Create or view a secret key on the OpenAI API key page. | Paste it into OpenAI. |
| Anthropic | Use the Claude Console, then generate API keys in Account Settings. | Paste it into Anthropic. |
| Google Gemini | Create or manage Gemini API keys in Google AI Studio. | Paste it into Google. Do not paste a Google Cloud service-account JSON here. |
| Google Cloud | In Google Cloud IAM, create a service-account key in JSON format for a service account with access to Speech-to-Text and/or Text-to-Speech. | Paste the entire downloaded JSON file contents into Google Cloud. This is different from a Gemini API key. |
| Groq | Create a key from Groq Console API Keys. | Paste it into Groq. |
| Mistral | Open Studio API keys, create a new key, and copy it immediately. | Paste it into Mistral. |
| OpenRouter | Create an OpenRouter key and optionally set a credit limit. | Paste it into OpenRouter. |
| Together AI | Open the project API-key settings, create a key, and copy it immediately. | Paste it into Together AI. |
| Deepgram | In the Deepgram Console, select the project, open Settings > API Keys, and create a key. | Paste it into Deepgram. |
| AssemblyAI | Find the API key in the AssemblyAI dashboard. | Paste it into AssemblyAI. |
| Cartesia | Create a standard API key at Cartesia’s key page. | Paste it into Cartesia. Use a standard API key, not an admin key. |
| ElevenLabs | Create an API key in the ElevenLabs dashboard. | Paste it into ElevenLabs. If you restrict the key, allow the STT, TTS, and voice endpoints you plan to use. |
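The Google rows above are the easiest to mix up: the Google Cloud row expects the full service-account JSON, while the Google row expects a plain Gemini API key string. A quick sanity check (a heuristic sketch, not Hyponema's validation logic) is to test whether the pasted secret parses as a service-account JSON document:

```python
import json

def looks_like_service_account_json(secret: str) -> bool:
    """Heuristic: a Google Cloud service-account JSON key parses as JSON
    and carries "type": "service_account" plus a "private_key" field,
    while a Gemini API key is an opaque string."""
    try:
        data = json.loads(secret)
    except (TypeError, ValueError):
        return False
    return data.get("type") == "service_account" and "private_key" in data
```

If this returns True, the secret belongs in the Google Cloud row; if False, it is not a service-account key and should not be pasted there.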
Credential storage
Provider credentials are stored with AES-256-GCM envelope encryption. Each credential is encrypted with a data-encryption key, and the data-encryption key is protected by a host key-encryption key provider.
Production hosts should use a KMS-backed key-encryption key provider.
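The envelope scheme described above can be sketched with the `cryptography` package. This is a minimal illustration of the pattern, not Hyponema's implementation; in production the key-encryption key (KEK) lives in a KMS and the wrap/unwrap calls happen there, not in process memory:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def envelope_encrypt(plaintext: bytes, kek: AESGCM) -> dict:
    """Encrypt a credential with a fresh data-encryption key (DEK),
    then wrap the DEK with the key-encryption key (KEK)."""
    dek = AESGCM.generate_key(bit_length=256)
    dek_nonce, data_nonce = os.urandom(12), os.urandom(12)
    return {
        "wrapped_dek": kek.encrypt(dek_nonce, dek, None),
        "dek_nonce": dek_nonce,
        "ciphertext": AESGCM(dek).encrypt(data_nonce, plaintext, None),
        "data_nonce": data_nonce,
    }

def envelope_decrypt(blob: dict, kek: AESGCM) -> bytes:
    """Unwrap the DEK with the KEK, then decrypt the credential."""
    dek = kek.decrypt(blob["dek_nonce"], blob["wrapped_dek"], None)
    return AESGCM(dek).decrypt(blob["data_nonce"], blob["ciphertext"], None)
```

Because each credential gets its own DEK, rotating the KEK only requires re-wrapping the small DEKs, not re-encrypting every stored secret.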
Runtime resolution
At runtime, Hyponema resolves credentials from encrypted workspace storage first and can fall back to host environment credentials when configured.
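That resolution order can be sketched as follows. The function name, the env-var naming convention, and the fallback flag are all hypothetical; only the precedence (workspace storage first, host environment second, and only when configured) comes from the behavior described above:

```python
import os
from typing import Optional

def resolve_credential(row: str, workspace_store: dict,
                       allow_env_fallback: bool = False) -> Optional[str]:
    """Hypothetical resolution order: decrypted workspace credential
    first, then (only when configured) a host environment variable."""
    secret = workspace_store.get(row)
    if secret is not None:
        return secret
    if allow_env_fallback:
        # Illustrative convention, e.g. "OpenAI" -> OPENAI_API_KEY
        env_name = row.upper().replace(" ", "_") + "_API_KEY"
        return os.environ.get(env_name)
    return None
```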
Operational guidance
- Use least-privilege provider keys where the provider supports it.
- Rotate keys when staff or vendor access changes.
- Keep provider projects separate for production and test workspaces.
- Review traces when changing providers; latency, token usage, and error behavior can change.