Supported Models & Providers

Stackmint is model-agnostic. We orchestrate agents across multiple foundation models, inference platforms, cloud runtimes, and developer tools, so you can choose the right stack for every workflow, whether the priority is performance, cost, compliance, or on-premise deployment.

Using these logos does not imply a formal partnership or endorsement. They represent providers that Stackmint can integrate with based on publicly available APIs and customer configurations.

Foundation Models

- OpenAI
- Anthropic (Claude)
- Mistral AI
- DeepSeek
- Qwen
- Zhipu AI
- Gemma (Google)
- Grok (xAI)
- Perplexity
- Meta Llama

Multimodal & Creative Models

- Midjourney
- Stability AI
- Runway
- Luma (Dream Machine)
- Suno
- Ideogram
- Sora (OpenAI)

Inference & Model Hubs

- Hugging Face
- Hugging Face Inference
- Replicate
- Cohere
- Groq

Cloud / Infra Providers

- AWS
- AWS Bedrock
- Azure AI
- Microsoft
- Google Cloud
- Google Gemini
- Google Vertex AI
- NVIDIA
- Cloudflare
- Cloudflare Workers AI
- Snowflake
- Vercel
- v0.dev

Developer & Workflow Tools

- LangChain
- LangGraph
- Langfuse
- LlamaIndex
- MCP (Model Context Protocol)

Why Multi-Model Support Matters

Different workflows need different models: some demand top-tier reasoning, while others prioritize real-time latency, GPU-optimized inference, multimodal capabilities, or region-specific deployments. Stackmint lets you route each agent to the best-fit provider without rewriting Buds or Branches.
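
Stackmint's SDK surface isn't shown on this page, so the following is a minimal, hypothetical sketch of what provider routing could look like; the `stackmint` package, `Client`, `buds`, and `routing.set` are assumed names, and the model identifiers are examples only.

```python
# Hypothetical sketch: the `stackmint` package, Client, and routing.set
# are illustrative names, not a documented API.
from stackmint import Client

client = Client(api_key="sk-...")

# One Bud definition; routing decides which provider actually serves it.
bud = client.buds.get("invoice-summarizer")

# Deep-reasoning workloads: route to a frontier model.
client.routing.set(bud=bud, provider="anthropic", model="claude-sonnet-4")

# Latency-sensitive workloads: route the same Bud to a fast inference
# platform. The Bud itself is unchanged.
client.routing.set(bud=bud, provider="groq", model="llama-3.1-8b-instant")
```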

You can enable or disable providers per tenant or workspace. Stackmint handles orchestration, observability, retries, and consistency, while you retain full control of your model and infrastructure choices.
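
As a concrete illustration of per-tenant controls, here is another hypothetical sketch using the same assumed client as above; `workspaces`, `providers.enable`, and `policy.update` are illustrative names, not a documented API.

```python
# Hypothetical sketch: workspace-level provider controls, reusing the
# illustrative `stackmint` client from the previous example.
from stackmint import Client

client = Client(api_key="sk-...")
workspace = client.workspaces.get("acme-eu")

# Allow only the providers that meet this tenant's compliance requirements.
workspace.providers.enable(["aws-bedrock", "mistral"])
workspace.providers.disable(["openai", "google-gemini"])

# Retries and fallback live in the orchestration layer, not in Buds.
workspace.policy.update(max_retries=3, fallback_provider="aws-bedrock")
```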

Need a Provider We Don’t List Yet?

We add new providers frequently based on customer demand. If you’d like Stackmint to support any additional model, inference endpoint, or self-hosted runtime (including vLLM, TGI, SGLang, Ollama, or custom deployments), reach out:

builders@stackmint.ai
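
In the meantime, runtimes that expose an OpenAI-compatible HTTP API (vLLM, TGI, SGLang, and Ollama all do) can often be wired in as custom endpoints. The sketch below reuses the hypothetical client from the earlier examples; `providers.register` and the "openai-compatible" kind are assumed names, not a documented API.

```python
# Hypothetical sketch: registering a self-hosted, OpenAI-compatible
# endpoint (vLLM here; TGI, SGLang, and Ollama expose similar APIs).
from stackmint import Client

client = Client(api_key="sk-...")

client.providers.register(
    name="self-hosted-vllm",
    kind="openai-compatible",
    base_url="http://vllm.internal:8000/v1",  # your own deployment
    model="meta-llama/Llama-3.1-70B-Instruct",
)
```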