About
apflow is a high-performance distributed task orchestration framework that scales from a single process to massive multi-node clusters. It provides a unified execution interface for 12+ built-in executors (HTTP, SSH, Docker, gRPC, MCP, LLM Agents) with automatic leader election, lease-based task ownership, and horizontal scaling. The framework includes a real-time GraphQL API with WebSocket subscriptions for live task tracking, a pluggable protocol registry (A2A, MCP, GraphQL), and flexible storage options (DuckDB for local, PostgreSQL for distributed). Built for the AI-native era, it seamlessly integrates with CrewAI and LLM-based task tree generation.
Features
Workflow Examples
Visualize how tasks are organized in trees and how dependencies control execution order
Sequential Pipeline with Task Tree
Demonstrates both task tree organization (parent-child) and execution dependencies. The tree organizes tasks hierarchically, while dependencies control when tasks execute.
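The distinction between the two structures can be sketched in plain Python (this models the concept, not apflow's API): the tree is a parent-to-children mapping used for organization, while the dependency graph, here resolved with the standard library's `graphlib`, determines execution order. The task names are made up for illustration.

```python
from graphlib import TopologicalSorter

# Task tree (parent -> children): organizes tasks hierarchically.
# The tree says which tasks belong to the "pipeline" parent, nothing more.
tree = {
    "pipeline": ["extract", "transform", "load"],
}

# Execution dependencies (task -> prerequisites): control when tasks run.
deps = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
}

# Topological order of the dependency graph gives a valid execution order.
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['extract', 'transform', 'load']
```

Two sibling tasks under the same parent may still run in either order (or concurrently) unless a dependency edge connects them, which is why the tree alone cannot express a sequential pipeline.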
Get Started
Python Implementation
High-performance distributed task orchestration framework.
pip install apflow[standard]

import asyncio

from apflow import TaskManager, create_session
from apflow.core.builders import TaskBuilder

async def main():
    # Initialize the task manager with a database session
    db = create_session()
    task_manager = TaskManager(db)

    # Use TaskBuilder for clean task creation and execution
    result = await (
        TaskBuilder(task_manager, "rest_executor")
        .with_name("fetch_data")
        .with_input("url", "https://api.example.com/data")
        .with_input("method", "GET")
        .execute()
    )
    print(f"Result: {result.result}")

asyncio.run(main())