Core Foundation
Core
Agent Class
Extend the base Agent class to build autonomous AI agents with built-in lifecycle hooks, HTTP handling, and WebSocket support.
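A minimal sketch of an Agent subclass, assuming the `agents` package; the `Env` type and hook bodies are placeholders:

```ts
import { Agent, type Connection } from "agents";

export class MyAgent extends Agent<Env> {
  // HTTP lifecycle hook: runs for plain requests routed to this agent.
  async onRequest(request: Request): Promise<Response> {
    return new Response("Hello from MyAgent");
  }

  // WebSocket hooks: called as clients connect and send messages.
  async onConnect(connection: Connection) {
    connection.send("welcome");
  }

  async onMessage(connection: Connection, message: string) {
    connection.send(`echo: ${message}`);
  }
}
```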
Core
Calling Agents
Route requests to named Agent instances using getAgentByName. Agents are addressable, persistent micro-servers that run your code.
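A sketch of routing from a Worker's fetch handler, assuming a Durable Object binding named `MyAgent` on `Env`:

```ts
import { getAgentByName } from "agents";

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Each name maps to one persistent Agent instance; the same name
    // always resolves to the same instance (and its state).
    const agent = await getAgentByName(env.MyAgent, "user-123");
    return agent.fetch(request);
  },
};
```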
Core
Configuration
Wire up Agents via wrangler.toml/jsonc with Durable Object bindings, SQLite migrations, and environment variables.
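A minimal `wrangler.jsonc` fragment for one Agent class (names here are examples, not requirements):

```jsonc
{
  "name": "my-agent",
  "main": "src/index.ts",
  "durable_objects": {
    "bindings": [{ "name": "MyAgent", "class_name": "MyAgent" }]
  },
  "migrations": [
    // SQLite-backed Durable Object classes must be declared in a migration.
    { "tag": "v1", "new_sqlite_classes": ["MyAgent"] }
  ]
}
```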
Real-Time Communication
Real-Time
WebSockets
Persistent bidirectional connections with onConnect, onMessage, and onClose hooks. Stream live updates back to clients in real-time.
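A sketch of the connection hooks, assuming the `agents` package and a `broadcast` helper for fan-out to connected clients:

```ts
import { Agent, type Connection } from "agents";

export class ChatAgent extends Agent<Env> {
  async onConnect(connection: Connection) {
    connection.send("connected");
  }

  async onMessage(connection: Connection, message: string) {
    // Push the message to every connected client in real time.
    this.broadcast(message);
  }

  async onClose(connection: Connection) {
    // Clean up any per-connection resources here.
  }
}
```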
Real-Time
HTTP & Server-Sent Events
Handle standard HTTP requests via onRequest and stream incremental LLM responses or progress events to clients using SSE.
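A sketch of streaming SSE from `onRequest` using standard Web Streams; the staged chunks stand in for real LLM tokens or progress events:

```ts
import { Agent } from "agents";

export class StreamingAgent extends Agent<Env> {
  async onRequest(request: Request): Promise<Response> {
    const { readable, writable } = new TransformStream();
    const writer = writable.getWriter();
    const encoder = new TextEncoder();

    // Write SSE frames as work progresses; the client receives each
    // `data:` line as soon as it is flushed.
    (async () => {
      for (const chunk of ["thinking", "drafting", "done"]) {
        await writer.write(encoder.encode(`data: ${chunk}\n\n`));
      }
      await writer.close();
    })();

    return new Response(readable, {
      headers: { "Content-Type": "text/event-stream" },
    });
  }
}
```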
Real-Time
State Sync
Automatically sync agent state to connected clients using setState and the useAgent React hook. Zero-config reactive state.
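A client-side sketch with the `useAgent` hook; the agent name and state shape are assumptions for illustration:

```tsx
import { useAgent } from "agents/react";
import { useState } from "react";

function Counter() {
  const [counter, setCounter] = useState(0);
  const agent = useAgent({
    agent: "my-agent", // assumed agent name
    onStateUpdate: (state: { counter: number }) => setCounter(state.counter),
  });

  // Writes propagate to the agent and back out to every connected client.
  return (
    <button onClick={() => agent.setState({ counter: counter + 1 })}>
      {counter}
    </button>
  );
}
```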
AI & Models
AI
Using AI Models
Call any LLM — OpenAI, Anthropic, Workers AI — using the AI SDK or native fetch. Stream tokens directly to WebSocket or SSE clients.
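A sketch using the AI SDK's `streamText`; the model choice and the minimal connection shape are illustrative assumptions:

```ts
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";

// Forward LLM tokens to a WebSocket client as they arrive
// (e.g. called from an Agent's onMessage handler).
async function reply(
  connection: { send(data: string): void },
  prompt: string,
) {
  const result = streamText({
    model: openai("gpt-4o-mini"), // example model
    prompt,
  });
  for await (const token of result.textStream) {
    connection.send(token);
  }
}
```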
AI
RAG — Retrieval Augmented Generation
Embed documents into Cloudflare Vectorize, run similarity search, and inject relevant context into LLM prompts for accurate, grounded answers.
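A sketch of the retrieval loop, assuming a Workers AI binding `AI`, a Vectorize binding `VECTORS`, and documents already embedded with their text stored as metadata:

```ts
async function answer(env: Env, question: string): Promise<string> {
  // 1. Embed the question (example embedding model).
  const embedding = await env.AI.run("@cf/baai/bge-base-en-v1.5", {
    text: [question],
  });

  // 2. Similarity search over previously embedded documents.
  const results = await env.VECTORS.query(embedding.data[0], {
    topK: 3,
    returnMetadata: true,
  });
  const context = results.matches.map((m) => m.metadata?.text).join("\n");

  // 3. Ground the LLM answer in the retrieved context.
  const completion = await env.AI.run("@cf/meta/llama-3.1-8b-instruct", {
    prompt: `Context:\n${context}\n\nQuestion: ${question}`,
  });
  return completion.response;
}
```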
State & Data
Data
Built-in SQLite
Every Agent has its own embedded SQLite database accessible via this.sql. Run queries, store structured data, and build memory into your agents.
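A sketch of agent memory backed by `this.sql`, a tagged template over the embedded database (schema here is an example):

```ts
import { Agent } from "agents";

export class MemoryAgent extends Agent<Env> {
  async onStart() {
    // One-time schema setup for this agent instance.
    this.sql`CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)`;
  }

  remember(body: string) {
    // Template values are bound as parameters, not interpolated.
    this.sql`INSERT INTO notes (body) VALUES (${body})`;
  }

  recall(): { body: string }[] {
    return this.sql<{ body: string }>`SELECT body FROM notes ORDER BY id`;
  }
}
```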
Data
KV State Management
Use this.setState to write and this.state to read fast key-value state. Changes auto-broadcast to all connected WebSocket clients.
Async & Task Management
Async
Task Scheduling
Schedule one-time or recurring tasks with this.schedule using cron syntax or delay-based timing. The agent wakes to run each task exactly when it is due.
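A sketch of both scheduling styles; callback names and payloads are examples:

```ts
import { Agent } from "agents";

export class ReminderAgent extends Agent<Env> {
  async onRequest(): Promise<Response> {
    // Delay-based: run `checkIn` 60 seconds from now.
    await this.schedule(60, "checkIn", { note: "one minute later" });
    // Cron-based: run `digest` every day at 09:00.
    await this.schedule("0 9 * * *", "digest", {});
    return new Response("scheduled");
  }

  async checkIn(payload: { note: string }) {
    /* runs when the delay elapses */
  }

  async digest() {
    /* runs on the cron schedule */
  }
}
```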
Async
Workflows
Run stateful multi-step workflows that guarantee execution with automatic retries. Workflows survive failures and can run for hours or days.
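A sketch of a Cloudflare Workflow; step names and return values are placeholders:

```ts
import {
  WorkflowEntrypoint,
  type WorkflowEvent,
  type WorkflowStep,
} from "cloudflare:workers";

export class ResearchWorkflow extends WorkflowEntrypoint<Env> {
  async run(event: WorkflowEvent<{ topic: string }>, step: WorkflowStep) {
    // Each step.do is durable: its result is persisted, and a failed
    // step retries without re-running the steps that already completed.
    const sources = await step.do("gather sources", async () => {
      return ["a", "b"]; // fetch/search here
    });

    const summary = await step.do("summarize", async () => {
      return `summary of ${sources.length} sources`;
    });

    return summary;
  }
}
```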
Async
Web Browsing
Integrate headless browser services to let agents fetch, scrape, and interact with web pages as part of autonomous research or data-gathering tasks.
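A sketch using Browser Rendering via `@cloudflare/puppeteer`, assuming a browser binding named `BROWSER`:

```ts
import puppeteer from "@cloudflare/puppeteer";

// Fetch the rendered text of a page for downstream summarization.
async function fetchPageText(env: Env, url: string): Promise<string> {
  const browser = await puppeteer.launch(env.BROWSER);
  try {
    const page = await browser.newPage();
    await page.goto(url);
    return await page.evaluate(() => document.body.innerText);
  } finally {
    await browser.close();
  }
}
```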
Agent Patterns
Pattern
Prompt Chaining
Sequential LLM calls where each step refines the previous output — with quality gates to trigger rework until standards are met.
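The shape of the pattern, with a plain function standing in for the model call and a trivial quality gate (both assumptions for illustration):

```ts
// Stub type standing in for a real model call.
type LLM = (prompt: string) => string;

function chain(llm: LLM, input: string, maxRetries = 2): string {
  // Step 1: draft.
  const draft = llm(`Draft: ${input}`);
  // Step 2: refine the previous step's output.
  let refined = llm(`Refine: ${draft}`);
  // Quality gate: rework until the check passes or retries run out.
  const passes = (text: string) => text.length > 0;
  for (let i = 0; i < maxRetries && !passes(refined); i++) {
    refined = llm(`Rework: ${refined}`);
  }
  return refined;
}
```

Swapping the stub for a real LLM call and the gate for an evaluator prompt keeps the same control flow.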
Pattern
Routing
Classify inputs and dynamically route to specialized downstream agents or models — simple queries to mini models, complex ones to powerful models.
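The routing skeleton; the length heuristic stands in for a real classifier (often itself a small LLM call):

```ts
type Route = "mini" | "large";

// Toy classifier: short queries go to a small model,
// longer ones to a more capable model.
function classify(query: string): Route {
  return query.length < 40 ? "mini" : "large";
}

function route(
  query: string,
  models: Record<Route, (q: string) => string>,
): string {
  return models[classify(query)](query);
}
```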
Pattern
Parallelization
Run multiple LLM calls simultaneously with Promise.all for independent sub-tasks. Aggregate specialist results for higher-quality output.
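The pattern's core, with an async stub in place of real model calls; the three specialist prompts are illustrative:

```ts
type AsyncLLM = (prompt: string) => Promise<string>;

// Run independent specialist calls concurrently, then aggregate.
async function parallel(llm: AsyncLLM, input: string): Promise<string> {
  const [facts, tone, risks] = await Promise.all([
    llm(`List facts about: ${input}`),
    llm(`Suggest tone for: ${input}`),
    llm(`List risks in: ${input}`),
  ]);
  return [facts, tone, risks].join("\n");
}
```

Because the sub-tasks are independent, total latency is roughly that of the slowest call rather than the sum of all three.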
Pattern
Orchestrator-Workers
A planning LLM breaks work into sub-tasks and delegates to specialized worker LLMs. Results are synthesized into a final coherent output.
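A minimal skeleton with stubbed planner and worker calls; the one-sub-task-per-line convention is an assumption of this sketch:

```ts
type LLM = (prompt: string) => string;

// Orchestrator: plan sub-tasks, delegate each to a worker, synthesize.
function orchestrate(planner: LLM, worker: LLM, task: string): string {
  const subtasks = planner(`Plan: ${task}`).split("\n").filter(Boolean);
  const results = subtasks.map((t) => worker(`Do: ${t}`));
  // Real implementations typically hand `results` to a final
  // synthesis LLM call; joining suffices for the sketch.
  return results.join(" | ");
}
```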
Pattern
Evaluator-Optimizer
A generator LLM produces output while an evaluator LLM scores quality and provides feedback — iterating until the bar is cleared.
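The feedback loop in miniature; the generator stub, scoring function, and threshold are all placeholders:

```ts
type LLM = (prompt: string) => string;

function generateWithFeedback(
  generator: LLM,
  evaluate: (text: string) => { score: number; feedback: string },
  prompt: string,
  threshold = 0.8,
  maxRounds = 3,
): string {
  let output = generator(prompt);
  for (let round = 0; round < maxRounds; round++) {
    const { score, feedback } = evaluate(output);
    if (score >= threshold) break; // bar cleared
    // Feed the critique back into the generator.
    output = generator(`${prompt}\nFeedback: ${feedback}`);
  }
  return output;
}
```

Capping the rounds matters: without `maxRounds`, a strict evaluator can loop forever.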
Integrations
Integration
Model Context Protocol
Build and deploy remote MCP servers on Cloudflare. Expose tools, resources, and prompts to any MCP-compatible AI client like Claude or Cursor.
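A sketch of an MCP server exposing one tool, following the `McpAgent` pattern; the tool name and schema are examples:

```ts
import { McpAgent } from "agents/mcp";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

export class MyMCP extends McpAgent {
  server = new McpServer({ name: "demo", version: "1.0.0" });

  async init() {
    // Any MCP-compatible client can discover and call this tool.
    this.server.tool(
      "add",
      { a: z.number(), b: z.number() },
      async ({ a, b }) => ({
        content: [{ type: "text", text: String(a + b) }],
      }),
    );
  }
}
```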
Integration
Human-in-the-Loop
Pause agent execution to request human approval or input before proceeding with sensitive or irreversible actions. Await confirmation via WebSocket or HTTP.
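The underlying mechanic as a generic sketch (not a specific SDK API): execution awaits a promise that a human later resolves from a WebSocket or HTTP handler.

```ts
// A minimal approval gate: sensitive work pauses on a promise
// until a human decision arrives.
class ApprovalGate {
  private pending = new Map<string, (approved: boolean) => void>();

  // Called by the agent before a sensitive action.
  requestApproval(id: string): Promise<boolean> {
    return new Promise((resolve) => this.pending.set(id, resolve));
  }

  // Called from a WebSocket/HTTP handler when the human responds.
  resolve(id: string, approved: boolean) {
    this.pending.get(id)?.(approved);
    this.pending.delete(id);
  }
}
```

In a real agent the pending decision should also be persisted (e.g. in the agent's SQLite) so it survives restarts.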
Integration
x402 Payments
Enable agents to autonomously pay for API access using the x402 standard over HTTP 402. No accounts or credential management needed.