
Mastra Insights

Key learnings and patterns for Mastra

Tags: insights, patterns, gotchas, setup


Key Learnings

Agent Setup

  1. Agents as Core - Define with instructions, model, tools
  2. Memory System - Built-in memory for conversation context
  3. Tools & Workflows - Extend agents with custom functionality

Patterns

const agent = new Agent({
  name: 'myAgent',
  instructions: 'You are a helpful assistant',
  model: openai('gpt-4'),
  tools: { weatherTool, searchTool },  // tools are a keyed object, not an array
});

Gotchas

  • Cloudflare Workers: per-request instantiation required
  • Memory adapter affects persistence
  • Tool definitions must be complete
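The Cloudflare Workers gotcha above can be sketched as follows. This is an illustrative pattern, not Mastra's real API: `Env` and `buildMastra` are stand-ins, and the point is only that the instance is constructed inside `fetch` rather than at module scope, since module-scope state persists across requests on a warm isolate.

```typescript
// Hedged sketch: build the Mastra/agent instance per request on Workers.
// `Env` and `buildMastra` are illustrative stand-ins for the real setup.
type Env = { DATABASE_URL: string };

function buildMastra(env: Env) {
  // In a real app: construct Mastra/agents here using `env` bindings.
  return { db: env.DATABASE_URL };
}

const worker = {
  async fetch(request: Request, env: Env): Promise<Response> {
    const mastra = buildMastra(env); // fresh instance for this request
    return new Response(`connected to ${mastra.db}`);
  },
};

export default worker;
```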

Mastra Local Setup - Key Learnings

Overview

Setting up a complete Mastra turborepo sample with PostgreSQL, pgvector, Ollama, and Next.js.

Key Technical Findings

1. Semantic Recall Requires Vector Store

Problem: "Semantic recall requires a vector store to be configured" error

Solution: Memory needs both a vector store AND an embedder configured:

const vector = new PgVector({ id: 'mastra-vector', connectionString });
const embedder = ollamaEmbedder.embedding('nomic-embed-text');

new Memory({
  storage,
  vector,      // Required for semantic recall
  embedder,   // Required for embeddings
  options: {
    semanticRecall: { topK: 5, messageRange: 2 }
  }
});

2. Ollama Embeddings via OpenAI SDK Compatibility

Finding: Ollama's nomic-embed-text can be used as an embedder via OpenAI SDK's createOpenAI with custom baseURL.

const ollamaEmbedder = createOpenAI({
  baseURL: `${OLLAMA_BASE_URL}/v1`,
});
const embedder = ollamaEmbedder.embedding('nomic-embed-text');

Requirements:

  • Pull nomic-embed-text model: ollama pull nomic-embed-text
  • Use same base URL as chat model

3. Docker Port Conflicts

Problem: Port 5432 often occupied by OrbStack

Solution: Use alternative port (5434) and update DATABASE_URL accordingly:

ports:
  - "5434:5432"
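With the remapped port, the connection string must target 5434 on the host; the user, password, and database name below are assumptions from a typical compose file, so adjust to yours:

```shell
# Host port 5434 maps to the container's 5432.
# postgres/postgres/mastra are assumed credentials — match your compose file.
DATABASE_URL=postgresql://postgres:postgres@localhost:5434/mastra
```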

4. pgvector Image Choice

Finding: Use the pgvector/pgvector:pg16 image, not pgvectorai/pgvector

image: pgvector/pgvector:pg16

5. Semantic Recall Disabled by Default

For local-only setups without an embedder, semantic recall should be disabled:

options: {
  // semanticRecall disabled - requires embedder
  workingMemory: { enabled: true, schema: WorkingMemorySchema }
}

Architecture

Next.js (AI SDK v6) → @mastra/ai-sdk → Mastra Agent → Ollama LLM (chat)
                                            ↓
                                     Memory + PgVector
                                            ↓
                           nomic-embed-text embeddings (via Ollama)

Dependencies

  • Chat: gemma4 (Ollama)
  • Embeddings: nomic-embed-text (Ollama)
  • Storage: PostgreSQL 16 + pgvector
  • Frontend: Next.js 15 + AI SDK v6
  • Mastra: @mastra/core, @mastra/memory, @mastra/pg

Common Issues

  1. Port conflicts - Check what's using ports before starting Docker
  2. Missing embedder - Must have embedding model for semantic recall
  3. Build artifacts - Never commit .next/ folders
  4. Lazy initialization - Use lazy mastra init to avoid DATABASE_URL errors at build time
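Item 4 above can be sketched as a lazy singleton: constructing Mastra (and reading DATABASE_URL) is deferred until the first request, so `next build`, which imports route modules without runtime env vars, never throws. `createMastra` here is a placeholder for the real constructor call.

```typescript
// Hedged sketch of lazy initialization; `createMastra` stands in for the
// real `new Mastra({ ... })` construction.
let instance: { db: string } | null = null;

function createMastra() {
  const url = process.env.DATABASE_URL;
  if (!url) throw new Error('DATABASE_URL is not set');
  return { db: url }; // stand-in for `new Mastra({ ... })`
}

export function getMastraInstance() {
  if (!instance) instance = createMastra(); // constructed on first use only
  return instance;
}
```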

AI SDK v6 Common Gotchas

1. useChat from @ai-sdk/react Requires DefaultChatTransport

Problem: useChat API has changed significantly. The api property does not exist directly on UseChatOptions.

Solution: Use DefaultChatTransport from the ai package:

import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';

const transport = new DefaultChatTransport({ api: '/api/chat' });

const { messages, sendMessage, status } = useChat({ transport });

2. isLoading Not Available on UseChatHelpers

Problem: isLoading property does not exist on UseChatHelpers type.

Solution: Derive isLoading from status:

// status is 'submitted' | 'streaming' | 'ready' | 'error'
const isLoading = status === 'submitted' || status === 'streaming';

3. ai/react Subpath Doesn't Exist

Problem: Trying to import from ai/react fails - module not found.

Solution: Import useChat from @ai-sdk/react instead:

import { useChat } from '@ai-sdk/react';

4. Messages are UIMessage Type with Parts

Problem: Messages returned from useChat are UIMessage type, not plain strings.

Solution: Extract text from message parts:

function getMessageText(message: UIMessage): string {
  return message.parts
    .filter((part) => part.type === 'text')
    .map((part) => part.text)
    .join('');
}

5. handleChatStream Does Not Have version Property

Problem: version: 'v6' does not exist on handleChatStream options.

Solution: Remove the version property - the API format is handled automatically:

const stream = await handleChatStream({
  mastra: getMastraInstance(),
  agentId,
  params: { messages },
});

Tailwind CSS v4 Theming

1. Use @theme Directive Instead of CSS Variables

Problem: CSS variables in :root with --color-* names don't work in Tailwind v4.

Solution: Use @theme directive for custom colors:

@import "tailwindcss";

@theme {
  --color-background: #0a0a0a;
  --color-foreground: #fafafa;
  --color-primary: #a3e635;
  --color-primary-foreground: #0a0a0a;
  /* ... */
}

2. Border Radius Uses --radius-* Variables

Problem: borderRadius theme extension doesn't work with CSS variables directly.

Solution: Define radius in @theme:

@theme {
  --radius-lg: 0.75rem;
  --radius-md: calc(0.75rem - 2px);
  --radius-sm: calc(0.75rem - 4px);
}

3. Remove tailwind.config.ts for Simple Themes

Problem: For simple theme changes, maintaining both CSS and config files is redundant.

Solution: Move all theme config to CSS @theme block. The config file is only needed for custom plugins.

Thread-Based Chat Implementation

1. useChat Doesn't Auto-Reload on Transport Change

Problem: When switching threads, useChat doesn't automatically fetch new messages.

Solution: Use React key prop to force re-mount:

<ChatSection key={activeThreadId || 'default'} threadId={activeThreadId} />

2. Loading Thread Messages Requires Custom API

Problem: DefaultChatTransport only handles sending messages, not fetching history.

Solution: Create a /api/messages endpoint:

// Fetch messages from Mastra memory
const recallResult = await memory.recall({
  threadId: threadId!,
  perPage: 20,
  threadConfig: { lastMessages: 20 },
});

3. URL Persistence for Thread Selection

Problem: Thread selection isn't preserved on page reload.

Solution: Use useSearchParams with Suspense boundary:

'use client';
import { Suspense } from 'react';
import { useSearchParams } from 'next/navigation';

function ChatContent() {
  const searchParams = useSearchParams();
  const threadId = searchParams.get('thread');
  // ...
}

export default function Page() {
  return <Suspense fallback={<Loading />}><ChatContent /></Suspense>;
}

4. recall() Uses threadConfig.lastMessages Not Direct Parameter

Problem: Trying to pass lastMessages directly to recall() causes type error.

Solution: Pass through threadConfig:

const result = await memory.recall({
  threadId: threadId!,
  perPage: 20,
  threadConfig: { lastMessages: 20 },
});

5. Passing threadId to handleChatStream for Memory Persistence

Problem: Messages not saved to memory - threadId not passed to agent execution.

Root Cause: handleChatStream requires threadId via the memory option, not as a direct parameter.

Solution: Pass threadId via params.memory.thread:

const stream = await handleChatStream({
  mastra: getMastraInstance(),
  agentId,
  params: {
    messages,
    memory: {
      thread: threadId,
      resource: 'user-id-from-auth',
    },
  },
});

Why This Works: Mastra v1 moved threadId from a direct agent option into the memory option. memory.thread associates the conversation with a thread for persistence, and memory.resource is required whenever memory is used.

6. Auto-Scroll to Latest Message in Chat

Problem: When new messages appear, chat doesn't scroll to show them.

Solution: Use a ref and scrollIntoView:

const messagesEndRef = useRef<HTMLDivElement>(null);

useEffect(() => {
  messagesEndRef.current?.scrollIntoView({ behavior: 'smooth' });
}, [messages]);

// In JSX, add ref to a div at the end of messages:
// <div ref={messagesEndRef} />

Note: There's no built-in config in useChat for auto-scroll - manual implementation required.

7. Full-Screen Height Layout

Problem: Sidebar and chat don't use full viewport height.

Solution: Use h-screen instead of min-h-screen on the root container:

<div className="relative h-screen overflow-hidden bg-background">
  <main className="relative z-10 h-full flex">
    <ThreadSidebar />
    {/* content */}
  </main>
</div>

8. Keyboard Shortcuts - Enter to Send, Shift+Enter for Newline

Problem: Input always submits on Enter, can't create multi-line messages.

Solution: Use a textarea (an input can't hold newlines) and handle onKeyDown to distinguish Enter vs Shift+Enter:

const handleKeyDown = (e: React.KeyboardEvent<HTMLTextAreaElement>) => {
  if (e.key === 'Enter' && !e.shiftKey) {
    e.preventDefault();
    sendMessage({ text: input });
    setInput('');
  }
};

9. Delete Thread Functionality

Problem: No way to delete threads.

Solution: Add DELETE endpoint and trash icon button:

// API
await memory.deleteThread(threadId);

// UI - add delete button with confirmation
<button onClick={(e) => {
  e.stopPropagation();
  if (confirm('Delete this thread?')) {
    deleteThread(threadId);
  }
}}>
  <TrashIcon />
</button>

10. Code Syntax Highlighting in Chat

Problem: Code blocks render without syntax highlighting.

Solution: Use react-syntax-highlighter with custom code component:

import { Prism as SyntaxHighlighter } from 'react-syntax-highlighter';
import { oneDark } from 'react-syntax-highlighter/dist/esm/styles/prism';

<ReactMarkdown
  components={{
    code({ className, children }) {
      const match = /language-(\w+)/.exec(className || '');
      if (!match) return <code className={className}>{children}</code>;
      return (
        <SyntaxHighlighter style={oneDark} language={match[1]}>
          {String(children).replace(/\n$/, '')}
        </SyntaxHighlighter>
      );
    },
  }}
>
  {messageText}
</ReactMarkdown>

11. Error Stack Traces in API Responses

Problem: API returns 500 without details, hard to debug.

Solution: Wrap route handler and return error details:

try {
  // ... handler
} catch (error) {
  console.error('Chat API error:', error);
  const message = error instanceof Error ? error.message : String(error);
  const stack = error instanceof Error ? error.stack : undefined;
  return Response.json({ error: message, stack }, { status: 500 });
}