Supported Models & Providers
Stackmint is model-agnostic. We orchestrate agents across multiple foundation models, inference platforms, cloud runtimes, and developer tools, so you can choose the right stack for every workflow, whether the constraint is performance, cost, compliance, or on-premise deployment.
Using these logos does not imply a formal partnership or endorsement. They represent providers that Stackmint can integrate with based on publicly available APIs and customer configurations.
Foundation Models
- OpenAI
- Anthropic
- Claude (Anthropic)
- Mistral AI
- DeepSeek
- Qwen (Alibaba)
- Zhipu AI
- Gemma (Google)
- Grok (xAI)
- Groq (LPU-based inference for open models)
- Perplexity
- Meta Llama
Multimodal & Creative Models
- Midjourney
- Stability AI
- Runway
- Luma AI
- Dream Machine (Luma)
- Suno
- Ideogram
- Sora (OpenAI)
Inference & Model Hubs
- Hugging Face
- Hugging Face Inference
- Replicate
- Cohere
Cloud / Infra Providers
- AWS
- Amazon Bedrock
- Azure AI
- Microsoft
- Google Cloud
- Google Gemini
- Google Vertex AI
- NVIDIA
- Cloudflare
- Cloudflare Workers AI
- Snowflake
- Vercel
- v0.dev
Developer & Workflow Tools
- LangChain
- LangGraph
- Langfuse
- LlamaIndex
- MCP (Model Context Protocol)
Why Multi-Model Support Matters
Different workflows need different models. Some require top-tier reasoning; others need low latency, GPU-optimized inference, multimodal capabilities, or region-specific deployments. Stackmint lets you route agents to the best provider for each job without rewriting Buds or Branches.
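As an illustration of per-workflow routing, the sketch below maps workflow types to provider/model pairs. The dictionary keys, model names, and the `pick_route` helper are all hypothetical examples, not Stackmint's actual configuration schema:

```python
# Hypothetical routing table: keys, provider names, and model IDs are
# illustrative only, not Stackmint's real configuration format.
ROUTES = {
    "deep-reasoning": {"provider": "anthropic", "model": "claude-3-opus"},
    "low-latency":    {"provider": "groq",      "model": "llama-3-70b"},
    "image-gen":      {"provider": "stability", "model": "sd3"},
}

def pick_route(workflow: str) -> dict:
    """Return the provider/model pair for a workflow, with a sane default."""
    return ROUTES.get(workflow, {"provider": "openai", "model": "gpt-4o"})
```

Because routing lives in configuration rather than in agent code, swapping a workflow from one provider to another is a one-line change.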
You can enable or disable providers per tenant or workspace. Stackmint handles orchestration, observability, retries, and consistency — while you stay in full control of your model and infrastructure choices.
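A per-tenant allowlist like the one described above might be sketched as follows. The tenant names, provider slugs, and `provider_enabled` helper are hypothetical; Stackmint's real tenant settings are not documented here:

```python
# Hypothetical per-tenant provider allowlists; illustrative only.
TENANT_PROVIDERS = {
    "acme-eu": {"mistral", "azureai"},      # e.g. an EU tenant with regional constraints
    "acme-us": {"openai", "aws-bedrock"},
}

def provider_enabled(tenant: str, provider: str) -> bool:
    """Check whether a tenant may route agents to a given provider."""
    return provider in TENANT_PROVIDERS.get(tenant, set())
```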
Need a Provider We Don’t List Yet?
We add new providers frequently based on customer demand. If you’d like Stackmint to support any additional model, inference endpoint, or self-hosted runtime (including vLLM, TGI, SGLang, Ollama, or custom deployments), reach out:
builders@stackmint.ai
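Self-hosted runtimes such as vLLM and Ollama expose OpenAI-compatible endpoints under a `/v1` base path, so a custom deployment can often be registered as little more than a base URL. The registry below is an illustrative sketch, not Stackmint's integration schema; the hostnames are placeholders:

```python
# Sketch: mapping self-hosted runtime names to their OpenAI-compatible
# base URLs. Hostnames and the registry structure are hypothetical.
SELF_HOSTED = {
    "vllm-onprem": "http://vllm.internal:8000/v1",
    "ollama-dev":  "http://localhost:11434/v1",
}

def chat_endpoint(runtime: str) -> str:
    """Resolve a runtime name to its OpenAI-compatible chat completions URL."""
    base = SELF_HOSTED[runtime]
    return f"{base.rstrip('/')}/chat/completions"
```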