Supported LLMs
For interactive sessions, you can directly use an LLM from our supported list:
| Provider | Model Name |
|---|---|
| OpenAI | gpt-4o-mini |
| | gpt-4.1-mini |
| | gpt-4.1-nano |
| | o3-mini |
| Google | gemini-2.0-flash |
| | gemini-2.0-flash-lite |
| | gemini-2.5-flash |
| | gemini-2.5-flash-lite-preview-06-17 |
We provide premium access to the supported LLMs above through the corresponding providers, and usage of these LLMs is already bundled into our API pricing. You don't need to sign up for them separately.
Bring Your Own LLM
You can also use a custom LLM if you have specific model requirements. Our interactive mode currently supports any LLM served with an OpenAI-compatible chat completion API, which is now widely adopted by major inference and serving frameworks such as vLLM, llama.cpp, and Ollama.
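As a rough sketch of what "OpenAI-compatible" means in practice: the server exposes a `/v1/chat/completions` endpoint that accepts a JSON body with `model` and `messages` fields. The snippet below shows this shape using only the Python standard library; the base URL, model name, and port are placeholders for your own deployment (e.g. Ollama's default `http://localhost:11434`), not values specific to our service.

```python
import json
import urllib.request


def build_request(base_url, model, messages, api_key="not-needed"):
    """Build an OpenAI-compatible chat completion request.

    base_url, model, and api_key are placeholders -- point them at
    your own vLLM / llama.cpp / Ollama server. Many local servers
    ignore the API key but still expect the header to be present.
    """
    payload = {"model": model, "messages": messages}
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )


def chat_completion(base_url, model, messages, api_key="not-needed"):
    """Send the request and return the assistant's reply text."""
    req = build_request(base_url, model, messages, api_key)
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI-style response shape: choices[0].message.content
    return body["choices"][0]["message"]["content"]


# Example call against a hypothetical local Ollama server:
# reply = chat_completion(
#     "http://localhost:11434", "llama3",
#     [{"role": "user", "content": "Hello"}],
# )
```

Any client that speaks this request/response shape (including the official OpenAI SDKs with a custom `base_url`) should work against such a server.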
If you would like to use a custom LLM, drop us a message on our Discord.