Deployment
Deploy Mastra to various platforms including cloud providers and web frameworks
Topics
- Overview - Deployment options
- Mastra Server - Deploy as standalone server
- Monorepo - Deploy in a monorepo setup
- Cloud Providers - AWS, GCP, Azure, etc.
- Web Framework - Deploy with Next.js, etc.
- Workflow Runners - Background job processing
Deployment Overview
Mastra can be deployed to various platforms and environments.
Deployment Options
Self-hosted
- Your own servers
- Docker containers
- Kubernetes
Cloud Platforms
- AWS
- Google Cloud
- Azure
- Cloudflare Workers
- Vercel
- Netlify
Platform-as-a-Service
- Railway
- Render
- Fly.io
Choosing a Deployment
Consider:
- Scale - Expected traffic and concurrent requests
- Latency - Requirements for response time
- Compliance - Data residency and regulatory requirements
- Cost - Budget constraints
- Complexity - Operational overhead
Quick Deploy
npx mastra deploy
This will guide you through deployment options.
Mastra Server Deployment
Deploy Mastra as a standalone server.
Docker
FROM node:18-alpine
WORKDIR /app
# Copy the manifests first so the dependency layer is cached between builds
COPY package*.json ./
RUN npm ci
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
docker build -t mastra-server .
docker run -p 3000:3000 mastra-server
Environment Variables
MASTRA_PORT=3000
MASTRA_LOG_LEVEL=info
MASTRA_AUTH_API_KEY=your-api-key
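These variables can be read once at startup. A small sketch of a config loader with defaults — the helper name and the fallback values are illustrative, not part of Mastra:

```typescript
// Hypothetical config loader for the variables above; the variable names
// match the environment variables, the defaults are illustrative assumptions.
export function loadServerConfig(env: Record<string, string | undefined>) {
  return {
    port: Number(env.MASTRA_PORT ?? 3000),
    logLevel: env.MASTRA_LOG_LEVEL ?? 'info',
    apiKey: env.MASTRA_AUTH_API_KEY, // undefined disables auth in local dev
  };
}

// Usage: const config = loadServerConfig(process.env);
```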
Health Checks
const server = new MastraServer({
healthCheck: {
path: '/health',
interval: 30,
},
});
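If your platform requires a probe endpoint and you are not using the built-in health check, a standalone handler can serve the same purpose. A sketch using Node's `http` module — the JSON payload shape here is an assumption, not Mastra's actual response:

```typescript
import { IncomingMessage, ServerResponse, createServer } from 'node:http';

// Hypothetical standalone /health probe; the payload shape is an assumption.
export function healthBody(): string {
  return JSON.stringify({ status: 'ok', uptime: process.uptime() });
}

export function healthHandler(req: IncomingMessage, res: ServerResponse) {
  if (req.url === '/health') {
    res.writeHead(200, { 'content-type': 'application/json' });
    res.end(healthBody());
  } else {
    res.writeHead(404);
    res.end();
  }
}

// e.g. createServer(healthHandler).listen(3000);
```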
Scaling
- Horizontal scaling with load balancer
- Redis for session storage
- Database for persistent state
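What that might look like in configuration — the `storage` shape below is an assumption for illustration only; consult your storage adapter's documentation for the real option names:

```typescript
// Assumed shape, for illustration: shared Redis for sessions so any instance
// behind the load balancer can serve any user, plus a database for durable
// workflow state.
const server = new MastraServer({
  storage: {
    sessions: { type: 'redis', url: process.env.REDIS_URL },
    workflows: { type: 'postgres', url: process.env.DATABASE_URL },
  },
});
```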
Monorepo Deployment
Deploy Mastra within a monorepo structure.
Structure
monorepo/
├── apps/
│ ├── web/ # Next.js app
│ ├── api/ # Mastra server
│ └── worker/ # Background jobs
├── packages/
│ ├── ui/ # Shared UI
│ └── core/ # Shared logic
├── mastra.config.ts
└── package.json
Shared Configuration
// mastra.config.ts at root
import { defineConfig } from '@mastra/core';
export default defineConfig({
agents: {
// Shared agent definitions
},
tools: {
// Shared tool definitions
},
});
Building
# Build all packages
npm run build
# Or build just the API
npm run build --workspace=apps/api
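The `--workspace` flag assumes npm workspaces are declared in the root package.json, for example:

```json
{
  "name": "monorepo",
  "private": true,
  "workspaces": ["apps/*", "packages/*"],
  "scripts": {
    "build": "npm run build --workspaces --if-present"
  }
}
```

With this in place, `npm run build --workspace=apps/api` builds only the API while still resolving the shared `packages/*` dependencies.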
Deployment
Deploy apps/api to your chosen platform while sharing packages with other apps.
Cloud Providers
Deploy Mastra to various cloud providers.
AWS
ECS/Fargate
const deployment = {
provider: 'aws',
service: 'ecs',
taskDefinition: {
cpu: '1024',
memory: '2048',
image: 'mastra-server:latest',
},
};
Lambda
For serverless, consider deploying specific agents as Lambda functions.
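A hedged sketch of such a function, wrapping one agent behind an API Gateway proxy handler. `runAgent` stands in for your Mastra agent call (e.g. `mastra.getAgent('myAgent').generate(message)`); the event and response shapes follow the API Gateway proxy contract:

```typescript
// Hypothetical Lambda entry point exposing a single agent per function.
type ProxyEvent = { body: string | null };
type ProxyResult = { statusCode: number; body: string };

// Placeholder for the actual Mastra agent call.
async function runAgent(message: string): Promise<string> {
  return `echo: ${message}`;
}

export async function handler(event: ProxyEvent): Promise<ProxyResult> {
  const { message } = JSON.parse(event.body ?? '{}');
  if (typeof message !== 'string') {
    return { statusCode: 400, body: 'missing "message"' };
  }
  return {
    statusCode: 200,
    body: JSON.stringify({ text: await runAgent(message) }),
  };
}
```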
Google Cloud
Cloud Run
gcloud run deploy mastra-server \
--source . \
--region us-central1 \
--platform managed
Azure
Container Instances
az container create \
--resource-group mygroup \
--name mastra \
--image mastra-server:latest \
--ports 3000 \
--dns-name-label mastra-server
Cloudflare Workers
export default {
async fetch(request, env) {
const mastra = new MastraServer({ env });
return mastra.fetch(request);
},
};
Configuration
const server = new MastraServer({
// Provider-specific configuration
storage: {
type: 'cloudflare-d1',
accountId: env.CF_ACCOUNT_ID,
},
});
Web Framework Deployment
Deploy Mastra alongside a web framework.
Next.js
API Routes
// pages/api/mastra/[...path].ts
import { handleMastraRequest } from '@mastra/next';
export default handleMastraRequest({
mastra: myMastra,
});
Server Actions
'use server';
import { mastra } from '@/mastra';
export async function chat(formData: FormData) {
  const message = formData.get('message');
  if (typeof message !== 'string') {
    throw new Error('message is required');
  }
  const agent = mastra.getAgent('myAgent');
  const response = await agent.generate(message);
  return response.text;
}
Express
import express from 'express';
import { createExpressAdapter } from '@mastra/server/express';
const app = express();
const adapter = createExpressAdapter({ mastra });
app.use('/api/mastra', adapter);
app.listen(3000);
Deployment
Vercel
// api/mastra.ts
import { handleMastraRequest } from '@mastra/vercel';
export default handleMastraRequest({
mastra: myMastra,
});
Netlify
// netlify/functions/mastra.ts
import { handleMastraRequest } from '@mastra/netlify';
export const handler = handleMastraRequest({
mastra: myMastra,
});
Workflow Runners
Background job processing for long-running workflows.
Why Workflow Runners?
- Handle long-running workflows asynchronously
- Prevent HTTP timeouts
- Process workflows in parallel
- Retry failed workflows
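All of these runners implement the same underlying pattern. A minimal in-memory sketch of it (not any specific runner's API): the request path enqueues and returns immediately, while a worker drains the queue and retries failures:

```typescript
// Minimal in-memory sketch of a workflow runner: the HTTP handler enqueues
// and returns immediately; a worker drains the queue, re-queueing failed
// jobs up to a retry limit. Real runners persist the queue and run workers
// in separate processes.
type Job = { id: string; input: unknown; attempts: number };

const queue: Job[] = [];
export const results = new Map<string, unknown>();

// Called from the request path: cheap and non-blocking.
export function enqueue(id: string, input: unknown): void {
  queue.push({ id, input, attempts: 0 });
}

// Called from the worker: runs each job, retrying failures.
export async function drain(
  run: (input: unknown) => Promise<unknown>,
  maxAttempts = 3,
): Promise<void> {
  while (queue.length > 0) {
    const job = queue.shift()!;
    try {
      results.set(job.id, await run(job.input));
    } catch {
      if (++job.attempts < maxAttempts) queue.push(job);
    }
  }
}
```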
Supported Runners
- Inngest - Event-driven serverless functions
- Trigger.dev - Background jobs
- QStash - Message queue
- BullMQ - Redis-based queue
Inngest Integration
import { inngest } from '@mastra/inngest';
// Deploy as Inngest function
export const myWorkflowFunction = inngest.createFunction(
{ id: 'my-workflow' },
{ event: 'workflow/my-workflow' },
async ({ event, step }) => {
const result = await step.run('process', async () => {
return await myWorkflow.run({ input: event.data });
});
return result;
}
);
Trigger.dev Integration
import { TriggerClient } from '@mastra/trigger';
const client = new TriggerClient({ id: 'my-app' });
client.on('workflow-run', async (event) => {
  return await myWorkflow.run(event.data);
});
Configuration
const server = new MastraServer({
workflowRunner: {
type: 'inngest',
options: {
eventKey: process.env.INNGEST_EVENT_KEY,
},
},
});