AI Governance & Transparency
Stackmint provides an execution layer for AI-powered workflows and agents. We give customers the tools to design, monitor, and govern their own agents, while maintaining transparency about how the platform operates.
1. Platform Role
Stackmint is a general-purpose orchestration and execution platform. Customers define what their agents do by configuring Buds, Branches, model providers, and integrations.
Stackmint does not independently determine the purposes for which end-user data is processed; those decisions belong to the customer, who remains responsible for assessing risk and for complying with applicable AI and sector-specific regulations.
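As a purely illustrative sketch of this division of responsibility, a customer-owned agent definition might look like the example below. The type and field names (AgentDefinition, purpose, modelProvider, buds, branches, integrations) are assumptions for illustration, not Stackmint's actual configuration schema; the point is that every processing purpose, provider choice, and integration is supplied by the customer rather than the platform.

```typescript
// Hypothetical shape of a customer-defined agent; not Stackmint's real schema.
interface AgentDefinition {
  name: string;
  purpose: string;        // the customer states why end-user data is processed
  modelProvider: string;  // chosen by the customer, e.g. "openai" (example value)
  buds: string[];         // Buds the agent is allowed to execute
  branches: string[];     // Branches that control its workflow
  integrations: string[]; // external systems the customer connects
}

// Example definition: every value here is a customer decision, not a platform default.
const supportTriage: AgentDefinition = {
  name: "support-triage",
  purpose: "Route inbound support tickets to the right queue",
  modelProvider: "openai",
  buds: ["bud_classify_ticket", "bud_draft_reply"],
  branches: ["branch_escalation"],
  integrations: ["zendesk"],
};

console.log(`${supportTriage.name} processes data for: ${supportTriage.purpose}`);
```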
2. Transparency and Logging
Stackmint is designed to support auditability and transparency:
- Execution-level logs and run identifiers;
- Visibility into which Buds and Branches were executed;
- Tracing of calls to external systems and model providers;
- Configurable access controls for logs and outputs.
These capabilities help customers demonstrate how their agents behave and support compliance with emerging AI regulations and internal governance policies.
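To illustrate the kind of audit questions these logs can answer, the sketch below models a minimal execution record and a simple query over it. All names and field shapes (ExecutionRecord, TracedCall, runId, budId, branchId, runsTouchingProvider) are hypothetical and are not Stackmint's actual log schema or API; real log formats will differ.

```typescript
// Illustrative only: these types are assumptions, not Stackmint's actual log format.

/** One traced call to an external system or model provider during a run. */
interface TracedCall {
  target: string;          // e.g. "openai:gpt-4o" or "crm.example.com" (example values)
  startedAt: string;       // ISO 8601 timestamp
  durationMs: number;
  status: "ok" | "error";
}

/** A single execution record, keyed by its run identifier. */
interface ExecutionRecord {
  runId: string;           // execution-level run identifier
  budId: string;           // which Bud was executed (hypothetical field name)
  branchId: string;        // which Branch was executed (hypothetical field name)
  startedAt: string;
  calls: TracedCall[];     // calls to external systems and model providers
}

/** Audit query: which runs touched a given model provider or external system? */
function runsTouchingProvider(records: ExecutionRecord[], provider: string): ExecutionRecord[] {
  return records.filter((r) => r.calls.some((c) => c.target.startsWith(provider)));
}

// Example: filter a small in-memory log set for an internal audit report.
const sample: ExecutionRecord[] = [
  {
    runId: "run_001",
    budId: "bud_invoice_triage",
    branchId: "branch_default",
    startedAt: "2024-05-01T09:00:00Z",
    calls: [
      { target: "openai:gpt-4o", startedAt: "2024-05-01T09:00:01Z", durationMs: 850, status: "ok" },
    ],
  },
];

console.log(runsTouchingProvider(sample, "openai").map((r) => r.runId)); // ["run_001"]
```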
3. Data Usage and Model Training
Stackmint does not use customer data to train proprietary foundation models. Any use of customer data for model fine-tuning or evaluation would require the customer's explicit agreement and would be clearly documented.
Customers remain responsible for their own model choice, prompt design, and use of agents in higher-risk or regulated contexts.
4. Contact
For questions about AI governance, risk, or responsible use of agents built on Stackmint, please contact: