Subprocessors

This page describes the third-party subprocessors that Stackmint, Inc. may engage to support delivery of the Stackmint platform and related services.

Last updated: November 30, 2025. Stackmint may update this list from time to time. Where required by contract or applicable law, customers will be notified of material changes.

1. Core Infrastructure

Used to host and operate the Stackmint platform.

Subprocessor                | Service                                                                          | Location
Render                      | Application hosting and runtime infrastructure                                   | United States / EU
Amazon Web Services (AWS)   | Infrastructure and compute hosting for Bud runtime, execution metadata, and logs | EU and US (eu-west-1, us-west-2)
Supabase                    | Managed PostgreSQL database, authentication, and storage                         | EU or US (customer-dependent)

2. Billing and Payments

Used to process subscription fees and marketplace payments.

Subprocessor                | Service                          | Location
Stripe                      | Payment processing and billing   | Global (EU / US)

3. Communications

Used for transactional emails and customer communications.

Subprocessor                | Service                          | Location
Resend                      | Transactional email delivery     | United States

4. Optional AI Model and Inference Providers

Customers can configure Stackmint to route workloads to one or more AI model providers. These providers are used only where the customer has explicitly enabled them.

  • OpenAI — hosted large language models
  • Mistral AI — hosted large language models
  • Hugging Face — inference endpoints and model hub
  • Anthropic — hosted large language models
  • Other providers as configured by the customer

5. Optional Customer-Activated Integrations

Customers may connect Stackmint to third-party systems (Salesforce, Slack, Microsoft Teams, Google Workspace, HubSpot, Workday, and others). In those cases, Stackmint acts as a processor, orchestrating data that already resides in those systems on the customer's behalf.

These integrations are activated and configured directly by the customer. The third-party providers remain independent data controllers or processors under their own terms.