Crews

Crews are the orchestration layer in langcrew that brings agents and tasks together. Built on LangGraph, they manage complex workflows with intelligent routing, memory systems, and dynamic execution patterns.

Get your first crew running in just a few lines:

```python
from langcrew import Agent, Task, Crew

agents = [
    Agent(role="Researcher", goal="Gather data"),
    Agent(role="Writer", goal="Create content"),
]
tasks = [
    Task(agent=agents[0], description="Research AI trends", expected_output="Research findings"),
    Task(agent=agents[1], description="Write report", expected_output="Written report"),
]
crew = Crew(agents=agents, tasks=tasks)
result = crew.kickoff()
```

A Crew in langcrew represents an intelligent orchestration system with:

  • Multi-Agent Coordination: Multiple agents working together seamlessly
  • Workflow Management: Automatic task sequencing and dependency resolution
  • Context Flow: Information flows between agents and tasks automatically
  • Memory Systems: Persistent memory across conversations and sessions
  • Dynamic Routing: Intelligent handoffs between agents and tasks
  • Human Integration: Built-in human-in-the-loop capabilities

Crews orchestrate the execution of tasks by agents with intelligent coordination.

Crew Architecture

LangCrew crews follow three orchestration principles:

1. Intelligent Coordination

  • Automatic task sequencing based on dependencies
  • Dynamic agent selection and routing
  • Context-aware information flow

2. Stateful Execution

  • Memory persistence across sessions
  • Context accumulation and sharing
  • Conversation continuity

3. Human-AI Collaboration

  • Seamless human oversight integration
  • Approval workflows and interventions
  • Interactive decision making

Agents and tasks execute in dependency order:

```python
from langcrew import Agent, Task, Crew

# Create workflow
researcher = Agent(role="Researcher", goal="Gather information")
analyst = Agent(role="Analyst", goal="Analyze data")
writer = Agent(role="Writer", goal="Create reports")

research_task = Task(agent=researcher, description="Research market trends", expected_output="Research findings", name="research")
analysis_task = Task(agent=analyst, description="Analyze findings", expected_output="Analysis results", context=[research_task], name="analysis")
report_task = Task(agent=writer, description="Write final report", expected_output="Written report", context=[analysis_task], name="report")

crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, report_task]
)
```
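The dependency ordering above can be pictured as a topological sort over each task's `context` list: a task becomes runnable only once everything it depends on has finished. A minimal stdlib sketch of that idea (a conceptual illustration, not langcrew's actual implementation):

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on (its `context`)
dependencies = {
    "research": set(),
    "analysis": {"research"},
    "report": {"analysis"},
}

# TopologicalSorter yields each task only after its dependencies
order = list(TopologicalSorter(dependencies).static_order())
print(order)  # ['research', 'analysis', 'report']
```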

When no tasks are specified, agents work sequentially:

```python
from langchain_core.messages import HumanMessage
from langcrew import Agent, Crew

researcher = Agent(role="Researcher", goal="Gather information")
analyst = Agent(role="Analyst", goal="Analyze data")
writer = Agent(role="Writer", goal="Create reports")

# No tasks - agents process input in sequence
crew = Crew(agents=[researcher, analyst, writer])

# Each agent processes the user input in order
result = crew.invoke({"messages": [HumanMessage(content="Analyze market trends")]})
```
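In this mode the crew behaves like a pipeline: each agent's output becomes the next agent's input. As a plain-function sketch of that flow (illustrative only, with toy stand-ins for the agents):

```python
# Each "agent" here is just a function from text to text
agents = [
    lambda text: f"researched({text})",
    lambda text: f"analyzed({text})",
    lambda text: f"written({text})",
]

result = "Analyze market trends"
for agent in agents:
    result = agent(result)  # output feeds the next agent

print(result)  # written(analyzed(researched(Analyze market trends)))
```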

Agents can transfer control to specialists:

```python
from langcrew import Agent, Crew

# Configure agent handoffs
coordinator = Agent(
    role="Coordinator",
    goal="Route work to specialists",
    handoff_to=["specialist", "reviewer"],  # Can transfer to these agents
    is_entry=True,  # Entry point for workflow
    name="coordinator"
)
specialist = Agent(role="Domain Specialist", name="specialist")
reviewer = Agent(role="Quality Reviewer", name="reviewer")

crew = Crew(agents=[coordinator, specialist, reviewer])
```
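Conceptually, a handoff is a routing decision: the active agent names a target, and control transfers only if that target appears in its `handoff_to` list. A standalone sketch of that rule (illustrative only, not langcrew's router):

```python
# Mirrors the handoff_to configuration above
handoff_to = {"coordinator": ["specialist", "reviewer"], "specialist": [], "reviewer": []}

def route(current: str, target: str) -> str:
    """Transfer control only to agents the current agent may hand off to."""
    if target in handoff_to.get(current, []):
        return target
    return current  # not permitted: control stays with the current agent

print(route("coordinator", "specialist"))  # specialist
print(route("specialist", "reviewer"))     # specialist (handoff not permitted)
```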

Crews maintain persistent memory across conversations:

```python
from langcrew import Agent, Task, Crew

assistant = Agent(role="Assistant", goal="Help users")
support_task = Task(agent=assistant, description="Answer user questions", expected_output="Helpful response")

# Enable basic memory
crew = Crew(
    agents=[assistant],
    tasks=[support_task],
    memory=True
)

# Memory persists across sessions
result1 = crew.kickoff(config={"configurable": {"thread_id": "conversation_1"}})
result2 = crew.kickoff(config={"configurable": {"thread_id": "conversation_1"}})  # Remembers previous context
```
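The `thread_id` scopes the memory: runs that share a thread see the same history, while different threads stay isolated. A minimal dict-based sketch of that behavior (a conceptual model, not langcrew's checkpointer):

```python
from collections import defaultdict

# Per-thread message history, keyed by thread_id
memory: dict[str, list[str]] = defaultdict(list)

def run(thread_id: str, message: str) -> list[str]:
    """Append the message to the thread's history and return the full context."""
    memory[thread_id].append(message)
    return memory[thread_id]

run("conversation_1", "My name is Ada")
context = run("conversation_1", "What is my name?")  # sees both messages
fresh = run("conversation_2", "Hello")               # isolated thread
print(len(context), len(fresh))  # 2 1
```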

Enable human oversight when needed:

```python
from langcrew import Agent, Task, Crew
from langcrew.hitl import HITLConfig

decision_agent = Agent(role="Decision Maker", goal="Make critical decisions")
critical_task = Task(agent=decision_agent, description="Review and approve action", expected_output="Approval decision")

crew = Crew(
    agents=[decision_agent],
    tasks=[critical_task],
    hitl=HITLConfig(
        interrupt_before_tools=["critical_operation"]  # Human approval required for specific tools
    )
)
```
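With `interrupt_before_tools`, execution pauses for approval before any listed tool runs; unlisted tools proceed normally. The gating logic can be pictured as follows (an illustrative sketch with a hypothetical `approve` callback, not langcrew's interrupt mechanism):

```python
interrupt_before_tools = ["critical_operation"]

def call_tool(name: str, approve) -> str:
    """Run a tool, pausing for human approval when the tool is gated."""
    if name in interrupt_before_tools and not approve(name):
        return "rejected"
    return f"{name}: done"

print(call_tool("critical_operation", approve=lambda name: True))   # critical_operation: done
print(call_tool("critical_operation", approve=lambda name: False))  # rejected
print(call_tool("safe_lookup", approve=lambda name: False))         # safe_lookup: done
```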

Automatic context flow between agents and tasks:

```python
from langcrew import Agent, Task, Crew

# Context flows automatically through dependencies
collector = Agent(role="Data Collector", goal="Collect information")
processor = Agent(role="Data Processor", goal="Process information")
reporter = Agent(role="Reporter", goal="Generate reports")

tasks = [
    Task(agent=collector, description="Collect data", expected_output="Collected data", name="collect"),
    Task(agent=processor, description="Process data", expected_output="Processed data", context=["collect"], name="process"),
    Task(agent=reporter, description="Generate report", expected_output="Final report", context=["process"], name="report"),
]

crew = Crew(agents=[collector, processor, reporter], tasks=tasks)
```
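Context flow means each task receives the outputs of the tasks named in its `context`. A pure-Python sketch of that accumulation (a toy model, not langcrew internals):

```python
# Task name -> (context task names, work function)
tasks = {
    "collect": ([], lambda ctx: "raw data"),
    "process": (["collect"], lambda ctx: f"processed({ctx['collect']})"),
    "report": (["process"], lambda ctx: f"report({ctx['process']})"),
}

outputs: dict[str, str] = {}
for name, (context, work) in tasks.items():  # dict preserves declaration order
    ctx = {dep: outputs[dep] for dep in context}  # gather upstream outputs
    outputs[name] = work(ctx)

print(outputs["report"])  # report(processed(raw data))
```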

Crews are ideal for:

  • Multi-Step Workflows - Complex processes requiring multiple agents
  • Collaborative Tasks - Work that benefits from different expertise
  • Stateful Conversations - Applications needing memory and context
  • Quality-Controlled Processes - Workflows requiring review and approval
  • Dynamic Routing - Processes where the flow of work depends on content
  • Human-AI Collaboration - Applications requiring human oversight

There are several ways to run a crew:

```python
# Simple execution
result = crew.kickoff()

# With inputs
result = crew.kickoff(inputs={"topic": "AI trends", "deadline": "2024-12-31"})

# Persistent conversations
result = crew.kickoff(
    inputs={"query": "Analyze this data"},
    config={"configurable": {"thread_id": "project_123"}}  # Maintains conversation history
)
```

Stream crew execution for real-time monitoring:

```python
from langchain_core.messages import HumanMessage

for chunk in crew.stream(input={"messages": [HumanMessage(content="Process request")]}):
    print(f"Step: {chunk}")
```
A sequential research pipeline, where each task builds on the previous one:

```python
from langcrew import Agent, Task, Crew

researcher = Agent(role="Researcher", goal="Gather data")
analyst = Agent(role="Analyst", goal="Analyze findings")
writer = Agent(role="Writer", goal="Create reports")

crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[
        Task(agent=researcher, description="Research topic", expected_output="Research findings", name="research"),
        Task(agent=analyst, description="Analyze data", expected_output="Analysis results", context=["research"], name="analyze"),
        Task(agent=writer, description="Write report", expected_output="Written report", context=["analyze"], name="report"),
    ]
)
```
A content review workflow with draft, review, and approval stages:

```python
from langcrew import Agent, Task, Crew

author = Agent(role="Author", goal="Create content")
reviewer = Agent(role="Reviewer", goal="Review quality")
approver = Agent(role="Approver", goal="Final approval")

crew = Crew(
    agents=[author, reviewer, approver],
    tasks=[
        Task(agent=author, description="Create draft", expected_output="Draft document", name="draft"),
        Task(agent=reviewer, description="Review content", expected_output="Review feedback", context=["draft"], name="review"),
        Task(agent=approver, description="Final approval", expected_output="Approval decision", context=["review"], name="approve"),
    ]
)
```