Cloudflare Databases and Storage
Serverless databases and storage solutions including D1, KV, Durable Objects, R2, Queues, and Vectorize
Serverless, globally distributed databases and storage solutions for building performant applications on Cloudflare's edge network.
D1
Create managed, serverless databases with SQL semantics.
Overview
D1 is a serverless SQL database built on SQLite, replicated globally for low-latency access from Cloudflare Workers and Pages.
Getting Started
- Create your first D1 database
- Define a schema
- Query from a Cloudflare Worker
Workers Binding API
Query D1 databases from a Cloudflare Worker:
- D1Database: Prepare statements, execute queries, batch operations, dump database
- Prepared Statement Methods: run, all, first, and raw methods
- Return Objects: D1Result and D1ExecResult objects
Wrangler Commands
Use Wrangler CLI commands to create, manage, and query D1 databases.
REST API
Manage and query D1 databases programmatically using the Cloudflare REST API.
Examples
- Query D1 from Hono, Remix, SvelteKit
- Export D1 database to R2 using Workflows
- Query D1 from Python Workers
Best Practices
- Import/Export Data: Import existing SQLite tables or export for local use
- Local Development: Run D1 locally with Wrangler
- Query a Database: SQL statements through Workers Binding API, REST API, or Wrangler
- Global Read Replication: Reduce read latency with multi-region replication
- Remote Development: Develop against D1 remotely using the dashboard playground
- Retry Queries: Handle transient errors with exponential backoff
- Use Indexes: Improve query performance with indexes
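The retry guidance above can be sketched as a generic backoff helper. `retryWithBackoff` and its parameters are illustrative, not part of the D1 API:

```typescript
// Hypothetical helper: retry an async operation with exponential backoff.
// maxAttempts and baseMs are illustrative defaults, not D1 settings.
async function retryWithBackoff<T>(
  fn: () => Promise<T>,
  maxAttempts = 4,
  baseMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Delay doubles each attempt: 100 ms, 200 ms, 400 ms, ...
      const delay = baseMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

A D1 call such as `env.DB.prepare(...).all()` would be passed as the `fn` callback.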
Configuration
- Data Location: Control where D1 stores data with location hints or jurisdiction constraints
- Environments: Configure separate databases for staging and production
Observability
- Audit Logs: Review audit log entries for configuration changes
- Billing: Track rows read, rows written, and storage usage
- Debug D1: Capture exceptions and log error messages
- Metrics and Analytics: Query volume, latency, and storage size
Platform
- Alpha Migration: Migrate D1 alpha databases to production storage
- Limits: Storage, queries, row sizes, and SQL statement limits
- Pricing: Based on rows read, rows written, and storage
Reference
- Backups (Legacy): Create and restore legacy snapshot-based backups
- Community Projects: ORMs, query builders, and tools
- Data Security: Encryption at rest and in transit, SOC 2 and ISO 27001
- FAQs: Pricing, limits, and usage questions
- Generated Columns: Virtual or stored generated columns
- Migrations: Version database schema using SQL migration files
- Time Travel: Restore to any minute within the last 30 days
Workers KV
Global, low-latency, key-value data storage.
Overview
Workers KV is a globally distributed key-value store optimized for high-read, low-latency workloads.
Getting Started
- Create a KV namespace
- Write key-value pairs
- Read data from Workers KV
API
- Delete Key-Value Pairs: Remove keys using delete() method
- List Keys: Enumerate keys with pagination and prefix filtering
- Read Key-Value Pairs: Retrieve values using get() with type support, caching, metadata
- Write Key-Value Pairs: Store data using put() with expiration and metadata options
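As a rough sketch of these semantics, a KV namespace behaves like a map with optional expiration and prefix listing. `MemoryKV` below is a hypothetical in-memory stand-in, not Cloudflare's implementation:

```typescript
// Hypothetical in-memory stand-in for a KV namespace (illustration only).
class MemoryKV {
  private data = new Map<string, { value: string; expiresAt?: number }>();
  // The clock is injectable so expiration can be demonstrated without waiting.
  constructor(private now: () => number = Date.now) {}

  async put(key: string, value: string, opts?: { expirationTtl?: number }) {
    const expiresAt = opts?.expirationTtl
      ? this.now() + opts.expirationTtl * 1000
      : undefined;
    this.data.set(key, { value, expiresAt });
  }

  async get(key: string): Promise<string | null> {
    const entry = this.data.get(key);
    if (!entry) return null;
    if (entry.expiresAt !== undefined && this.now() >= entry.expiresAt) {
      this.data.delete(key); // expired keys read as missing
      return null;
    }
    return entry.value;
  }

  async delete(key: string) {
    this.data.delete(key);
  }

  async list(opts?: { prefix?: string }): Promise<{ keys: { name: string }[] }> {
    const keys = [...this.data.keys()]
      .filter((k) => !opts?.prefix || k.startsWith(opts.prefix))
      .map((name) => ({ name }));
    return { keys };
  }
}
```

The real binding adds caching, metadata, and eventual consistency across locations, but the call shapes mirror the methods listed above.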
Concepts
- How KV Works: Central storage with global caching
- KV Bindings: Connect Workers to KV namespaces
- KV Namespaces: Key-value database replicated globally
Examples
- Cache data with Workers KV
- Build a distributed configuration store
- A/B testing with Workers KV
- Route requests across various web servers
- Store and retrieve static assets
Reference
- Data Security: Encryption at rest and in transit
- Environments: Bind different namespaces across environments
- FAQ: Consistency, rate limits, access methods
- Wrangler Commands: Manage namespaces, keys, bulk operations
Platform
- Event Subscriptions: Subscribe to KV changes with Queues
- Limits: Reads, writes, key size, value size, storage
- Pricing: Read, write, delete, list operations plus storage
Durable Objects
A special kind of Cloudflare Worker combining compute with storage.
Overview
Durable Objects provide globally unique, single-threaded compute instances with persistent storage.
Getting Started
- Create and deploy your first Durable Object
- Use SQLite storage
- Companion Worker integration
API
- Alarms: Schedule future wake-ups with guaranteed at-least-once execution
- Durable Object Base Class: Abstract base class and handler methods
- Durable Object Container: Access and manage containers
- Durable Object ID: 64-digit hex identifier
- SQLite Storage: SQL API and key-value methods
- Durable Object State: Concurrency, WebSocket attachment, storage access
- Durable Object Stub: Client to invoke RPC methods remotely
- WebGPU: GPU access in local development
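The ID scheme above is deterministic: `idFromName` always maps the same name to the same object, which is what routes every request for a given name to one instance. The sketch below illustrates that property with an SHA-256 hash; this is purely illustrative and is not how Cloudflare actually derives IDs:

```typescript
import { createHash } from "node:crypto";

// Illustrative only: derive a stable 64-hex-digit identifier from a name.
// Cloudflare's real idFromName() uses its own internal derivation.
function idFromNameSketch(namespace: string, name: string): string {
  return createHash("sha256").update(`${namespace}:${name}`).digest("hex");
}
```

Calling it twice with the same inputs yields the same 64-character hex string, mirroring the stability guarantee of a real Durable Object ID.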
Examples
- Use the Alarms API
- Build a counter with RPC methods
- Durable Object in-memory state
- Durable Object TTL with Alarms
- ReadableStream with Durable Object
- Use RpcTarget class for metadata
- Testing Durable Objects
- Use Workers KV from Durable Objects
- WebSocket server with Hibernation
- WebSocket server
Best Practices
- Access Storage: Read and write persistent data
- Invoke Methods: Call RPC methods or send fetch requests
- Error Handling: Retryable and overloaded errors
- Rules of Durable Objects: Design guidelines
- Use WebSockets: Standard and Hibernation APIs
Concepts
- Lifecycle: Creation, activation, request handling, eviction
- What are Durable Objects: Globally unique, single-threaded compute
Observability
- Data Studio: View and edit SQLite storage through dashboard
- Metrics and Analytics: Namespace and request-level metrics
- Troubleshooting: Debug with wrangler dev and wrangler tail
Platform
- Known Issues: Global uniqueness, code updates, development limitations
- Limits: Account, storage, CPU, and SQL limits
- Pricing: Compute and storage billing
Reference
- Data Location: Jurisdiction restrictions and location hints
- Data Security: Encryption properties and compliance
- Gradual Deployments: Deploy changes gradually
- Migrations: Configure class migrations
- Environments: Bindings across environments
R2
Store large amounts of unstructured data without egress fees.
Overview
R2 is cost-effective, S3-compatible object storage that scales without charging egress fees.
Getting Started
- Create your first R2 bucket
- Store objects using dashboard, S3-compatible tools, or Workers
- CLI: Wrangler, rclone, AWS CLI
- S3: boto3, AWS SDK
- Workers API: From Cloudflare Workers
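From a Worker, an R2 bucket binding exposes object-style `put`/`get` calls. The in-memory `MemoryBucket` below is a hypothetical sketch of that shape for illustration, not the real binding:

```typescript
// Hypothetical in-memory sketch of an R2-style bucket binding.
class MemoryBucket {
  private objects = new Map<string, Uint8Array>();

  async put(key: string, value: string | Uint8Array): Promise<void> {
    const body =
      typeof value === "string" ? new TextEncoder().encode(value) : value;
    this.objects.set(key, body);
  }

  async get(key: string): Promise<{ text: () => Promise<string> } | null> {
    const body = this.objects.get(key);
    if (!body) return null; // missing objects resolve to null, as with R2
    return { text: async () => new TextDecoder().decode(body) };
  }

  async delete(key: string): Promise<void> {
    this.objects.delete(key);
  }
}
```

In a real Worker the bucket arrives preconfigured on `env`, and retrieved objects also carry streams, metadata, and HTTP conditional helpers.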
Buckets
- Create Buckets: Dashboard or Wrangler CLI
- Bucket Locks: Retention policies to prevent deletion
- Configure CORS: Cross-Origin Resource Sharing policies
- Event Notifications: Send messages to Queues on object changes
- Object Lifecycles: Retention and storage class transitions
- Public Buckets: Expose via custom domain or r2.dev subdomain
- Storage Classes: Standard vs Infrequent Access
R2 Data Catalog
Managed Apache Iceberg data catalog built into R2:
- DuckDB, PyIceberg, Snowflake, Spark: Query Iceberg tables
- StarRocks, Apache Trino: Additional query engines
- Getting Started: Enable, load sample data, run queries
- Table Maintenance: Automated maintenance
R2 SQL
Serverless SQL interface for querying and analyzing R2 data.
Data Migration
- Super Slurper: One-off bulk copy from other providers
- Sippy: Incremental migration on-demand
- Migration Strategies: Combine for minimal downtime
Tutorials
- Protect R2 Bucket with Cloudflare Access
- Mastodon object storage configuration
- Postman integration
- Summarize PDF files on upload
- Log and store upload events
Platform
- Audit Logs: Configuration change tracking
- Event Subscriptions: R2 and Super Slurper events
- Limits: Account, bucket, object limits
- Metrics and Analytics: Storage and operations metrics
- Pricing: Storage, Class A/B operations
- Troubleshooting: CORS errors, 403 responses, cache behavior
API
- Error Codes: Reference for Workers API and S3-compatible API
- S3 API Compatibility: Operations and feature support status
- Extensions: Unicode metadata and custom headers
Vectorize
Vector database for AI applications.
Overview
Vectorize enables vector similarity search for building AI applications with embeddings.
Getting Started
- Create a Vectorize index
- Insert embeddings
- Query for similarity
API
- Insert Vectors: Add embeddings to index
- Query: Find similar vectors by distance
- Metadata Filtering: Filter results by metadata
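At its core, a similarity query ranks stored embeddings by a distance metric such as cosine similarity. The sketch below (hypothetical names, naive in-memory scan) illustrates insert, top-k query, and metadata filtering; Vectorize does this at scale with approximate indexes:

```typescript
type Vector = { id: string; values: number[]; metadata?: Record<string, string> };

// Cosine similarity: 1.0 means identical direction, 0 means orthogonal.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Hypothetical in-memory index (illustration only).
class MemoryVectorIndex {
  private vectors: Vector[] = [];

  insert(vectors: Vector[]) {
    this.vectors.push(...vectors);
  }

  query(
    values: number[],
    opts: { topK?: number; filter?: Record<string, string> } = {},
  ) {
    const { topK = 5, filter } = opts;
    return this.vectors
      .filter(
        (v) =>
          !filter ||
          Object.entries(filter).every(([k, val]) => v.metadata?.[k] === val),
      )
      .map((v) => ({ id: v.id, score: cosineSimilarity(values, v.values) }))
      .sort((a, b) => b.score - a.score)
      .slice(0, topK);
  }
}
```

The query vector would typically come from an embedding model such as one hosted on Workers AI.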
Integrations
- Workers AI for generating embeddings
- AI Gateway for model routing
- RAG applications
Queues
Reliable message delivery for the Cloudflare developer platform.
Overview
Queues provides durable message queuing for asynchronous processing between Workers and services.
Getting Started
- Create a queue
- Produce messages
- Consume with Workers
Features
- Durability: Messages persisted until acknowledged
- Batch Processing: Process multiple messages at once
- Dead Letter Queues: Handle failed messages
- Ordering: Best-effort first-in-first-out delivery; retries and batching can reorder messages
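These features can be sketched with a hypothetical in-memory queue: messages are delivered in batches, an unacknowledged message is retried, and once retries are exhausted it moves to a dead letter queue. This illustrates the semantics only, not the Queues implementation:

```typescript
type Message<T> = { body: T; attempts: number };

// Hypothetical in-memory sketch of queue semantics (not the real runtime).
class MemoryQueue<T> {
  private pending: Message<T>[] = [];
  readonly deadLetters: T[] = [];

  constructor(private maxRetries = 3) {}

  send(body: T) {
    this.pending.push({ body, attempts: 0 });
  }

  // Deliver up to batchSize messages; the handler returns the bodies it acks.
  async deliverBatch(batchSize: number, handler: (bodies: T[]) => Promise<T[]>) {
    const batch = this.pending.splice(0, batchSize);
    const acked = new Set(await handler(batch.map((m) => m.body)));
    for (const msg of batch) {
      if (acked.has(msg.body)) continue; // acknowledged: done
      msg.attempts++;
      if (msg.attempts > this.maxRetries) {
        this.deadLetters.push(msg.body); // retries exhausted
      } else {
        this.pending.push(msg); // redeliver later (at-least-once)
      }
    }
  }
}
```

In a real Worker, the handler role is played by the `queue()` export, and acknowledgement happens via the message batch API rather than a return value.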
Pricing
Based on the number of queue operations performed (message writes, reads, and deletes).
Choosing a Storage Product
| Product | Use Case | Consistency | Querying |
|---------|----------|-------------|----------|
| KV | Configuration, flags, simple data | Eventual | Key lookup |
| D1 | Relational data, SQL needed | Strong | SQL |
| Durable Objects | State, counters, WebSockets | Strong | Key-value + SQL |
| R2 | Files, blobs, large objects | Eventual | S3 API |
| Vectorize | Embeddings, AI similarity search | Eventual | Vector similarity |
| Queues | Message passing, async tasks | At-least-once | Consumer pattern |
Example: D1 Query from Worker
```js
export default {
  async fetch(request, env) {
    const result = await env.DB.prepare(
      'SELECT * FROM users WHERE active = ?'
    ).bind(1).all();
    return Response.json(result);
  },
};
```
Example: KV Read/Write
```js
// Write a value that expires after one day
await env.STORAGE.put('key', 'value', { expirationTtl: 86400 });
// Read it back as text
const value = await env.STORAGE.get('key', { type: 'text' });
```
Example: Durable Object Counter
```ts
import { DurableObject } from "cloudflare:workers";

export class Counter extends DurableObject {
  async increment(amount: number): Promise<number> {
    // Storage is accessed through the DurableObjectState on this.ctx
    const current = (await this.ctx.storage.get<number>("count")) ?? 0;
    const next = current + amount;
    await this.ctx.storage.put("count", next);
    return next;
  }
}
```