# Agents
Agents are the core building blocks of langcrew. They are autonomous AI-powered entities that can understand tasks, use tools, collaborate with humans, and work within crews to achieve complex goals.
## Quick Start - Create an Agent

Get your first agent running in just 3 lines:

```python
from langcrew import Agent

agent = Agent(role="Assistant", goal="Help users", backstory="Helpful AI")
result = agent.invoke({"messages": []})
```
## What is an Agent?

An Agent in langcrew represents a specialized AI entity with:
- Identity: Role, goal, and backstory that define its personality and expertise
- Capabilities: Tools and skills it can use to accomplish tasks
- Intelligence: LLM-powered reasoning and decision making
- Collaboration: Ability to work with other agents and humans
- Safety: Built-in guardrails and human oversight capabilities
## Core Architecture

Agents are built around a flexible, modular architecture.
### Design Philosophy

LangCrew agents follow three core principles:
**1. CrewAI Compatibility**

- Drop-in replacement for CrewAI agents
- Familiar role/goal/backstory pattern
- Seamless migration path

**2. Native Flexibility**

- Custom prompts for specialized behavior
- Direct LangGraph integration
- Configurable execution strategies

**3. Production-Ready**

- Built-in safety with guardrails
- Human-in-the-loop support
- Enterprise-grade reliability
## Agent Types

### CrewAI-Style Agents

Perfect for collaborative workflows:
```python
from langcrew import Agent

researcher = Agent(
    role="Research Specialist",
    goal="Find accurate and relevant information",
    backstory="Expert researcher with academic background",
    tools=[search_tool, web_scraper],
)
```
### Native Agents

For custom behavior and specialized tasks:
```python
from langchain_core.messages import SystemMessage

code_reviewer = Agent(
    prompt=SystemMessage(content="You are a senior code reviewer..."),
    tools=[code_analysis_tool],
    executor_type="react",
)
```
## Core Capabilities
### Tool Integration

Agents can use any LangChain-compatible tool:
```python
from langcrew_tools.search.langchain_tools import WebSearchTool

agent = Agent(
    role="Analyst",
    tools=[WebSearchTool()],
)
```
### Memory & Context

Agents maintain conversation history and context:
```python
agent = Agent(
    role="Assistant",
    memory=True,  # Enable conversation memory
)
```
### Human Collaboration

Enable human oversight when needed:
```python
from langcrew.hitl import HITLConfig

agent = Agent(
    role="Decision Maker",
    hitl=HITLConfig(
        interrupt_before_tools=["critical_tool"]  # Require approval for specific tools
    ),
)
```
### Agent Handoffs

Agents can delegate tasks to specialists:
```python
coordinator = Agent(
    role="Project Coordinator",
    handoff_to=["developer", "tester", "reviewer"],
)
```
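Conceptually, a handoff is just routing: the coordinator decides which specialist should own a task. A toy sketch of that idea in plain Python (illustrative only: the `route` helper and keyword table are hypothetical, not langcrew's handoff mechanism):

```python
def route(task, specialists):
    """Toy delegation: pick the first specialist whose keywords match the task."""
    for name, keywords in specialists.items():
        if any(k in task.lower() for k in keywords):
            return name
    return "coordinator"  # fall back to handling it ourselves

specialists = {
    "developer": ["implement", "code", "bug"],
    "tester": ["test", "verify"],
    "reviewer": ["review", "approve"],
}
assignee = route("Fix the login bug", specialists)  # → "developer"
```

In langcrew, this decision is made by the coordinator agent's LLM rather than keyword matching, but the shape is the same: a task comes in, and control passes to one of the named handoff targets.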
## When to Use Agents

Agents are ideal for:
- Autonomous Task Execution - Tasks requiring reasoning and tool use
- Collaborative Workflows - Multi-agent crews working together
- Human-AI Collaboration - Workflows needing human oversight
- Specialized Expertise - Domain-specific knowledge and skills
- Complex Reasoning - Multi-step problem solving
- Tool-Rich Environments - Applications with many available tools
## Advanced Configuration

### MCP Server Integration
Connect agents to external data sources and tools via the Model Context Protocol (MCP):
```python
from langcrew import Agent

agent = Agent(
    role="Data Analyst",
    goal="Analyze data from multiple sources",
    backstory="Expert at connecting to various data systems",
    llm=llm,
    mcp_servers={
        "github-mcp": {
            "url": "http://localhost:3000/sse",
            "transport": "sse",  # Server-Sent Events
        },
        "filesystem-mcp": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/data"],
            "transport": "stdio",  # Standard I/O
        },
    },
    mcp_tool_filter=["github_search_repos", "read_file"],  # Optional filtering
)
```

**MCP Transport Types:**
- SSE (Server-Sent Events): HTTP-based streaming, ideal for remote servers
- stdio: Direct process communication, best for local tools
- streamable_http: HTTP streaming with chunked transfer encoding
Learn more: MCP Integration Guide
### Memory Configuration

Agents can maintain conversation history and cross-session knowledge:
```python
from langcrew import Agent, MemoryConfig
from langcrew.memory import LongTermMemoryConfig

agent = Agent(
    role="Personal Assistant",
    goal="Provide personalized assistance",
    backstory="Helpful assistant that remembers user preferences",
    memory=MemoryConfig(
        provider="sqlite",
        connection_string="sqlite:///agent_memory.db",
        long_term=LongTermMemoryConfig(
            enabled=True,
            app_id="my-assistant-v1",  # Isolate memories by application
            index={"dims": 1536, "embed": "openai:text-embedding-3-small"},
        ),
    ),
)
```

**Memory Layers:**
- Short-term: Conversation state within a session
- Long-term: Persistent knowledge across sessions
- User memory: Personal preferences and information
- App memory: Shared insights (experimental)
Learn more: Memory Guide
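To make the layering concrete, here is a minimal plain-Python sketch of session-scoped short-term memory alongside a persistent long-term store keyed by `app_id`. It is purely conceptual; `LayeredMemory` is a hypothetical class, not part of langcrew:

```python
class LayeredMemory:
    """Conceptual two-layer memory: short-term resets per session,
    long-term persists and is namespaced by app_id."""

    def __init__(self, app_id):
        self.app_id = app_id
        self.short_term = []   # conversation turns for the current session
        self.long_term = {}    # survives across sessions (e.g. backed by SQLite)

    def remember_turn(self, message):
        self.short_term.append(message)

    def store_fact(self, key, value):
        # Namespacing by app_id mirrors how memories are isolated per application
        self.long_term[(self.app_id, key)] = value

    def recall_fact(self, key):
        return self.long_term.get((self.app_id, key))

mem = LayeredMemory(app_id="my-assistant-v1")
mem.remember_turn("user: call me Sam")
mem.store_fact("preferred_name", "Sam")
```

In langcrew, the long-term layer is additionally indexed with embeddings (the `index` setting above) so facts can be recalled by semantic similarity rather than exact key.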
### Executor Configuration

LangCrew agents use the ReAct (Reasoning and Acting) execution strategy by default:
```python
# Default configuration - executor_type is "react" by default
agent = Agent(
    role="Problem Solver",
    llm=llm,
    # executor_type="react" is implicit - no need to specify
)
```

**ReAct Executor Features:**
- Reasoning: Agents think through problems before taking action
- Acting: Execute tools based on reasoning
- Observing: Analyze results and adapt approach
- Iterative: Continues until task is complete or max iterations reached
**Note:** ReAct is currently the only supported executor type and is applied automatically. You don't need to specify the `executor_type` parameter unless you want to be explicit.
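The reason/act/observe loop can be sketched in a few lines of plain Python. This is a conceptual illustration of the ReAct pattern, not langcrew's executor; the stub model and tool below are hypothetical:

```python
def react_loop(task, model, tools, max_iterations=5):
    """Minimal ReAct-style loop: reason, act, observe, repeat."""
    observations = []
    for _ in range(max_iterations):
        thought, action, arg = model(task, observations)  # Reasoning
        if action == "finish":
            return arg                                    # Task complete
        result = tools[action](arg)                       # Acting
        observations.append(result)                       # Observing
    return None  # Stopped after max iterations

# Stub "LLM": search first, then finish once an observation exists
def stub_model(task, observations):
    if not observations:
        return ("I should search first", "search", task)
    return ("I have enough information", "finish", observations[-1])

tools = {"search": lambda q: f"result for {q!r}"}
answer = react_loop("capital of France", stub_model, tools)
```

A real executor replaces `stub_model` with an LLM call that emits tool invocations, but the control flow, including the max-iterations safety valve, is the same.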
### Custom Prompts

Override the default system prompt for specialized behavior:
```python
from langchain_core.messages import SystemMessage

agent = Agent(
    role="Code Reviewer",
    prompt=SystemMessage(content="""You are a senior code reviewer. Always check for:
- Code quality and maintainability
- Security vulnerabilities
- Performance issues
- Best practices adherence"""),
    llm=llm,
)
```
### Context Management

Automatically compress conversation context when token limits are reached:
```python
from langcrew import Agent
from langcrew.context import ContextConfig, SummaryConfig

agent = Agent(
    role="Long Conversation Agent",
    goal="Maintain context in long conversations",
    backstory="Expert at handling extended dialogues",
    context_config=ContextConfig(
        pre_model=SummaryConfig(
            compression_threshold=150000,  # Trigger summarization at this token limit
            keep_recent_tokens=64000,  # Keep recent messages within this budget
        )
    ),
    llm=llm,
)
```

**Available Strategies:**
```python
from langcrew.context import (
    ContextConfig,
    KeepLastConfig,
    SummaryConfig,
    CompressToolsConfig,
    ToolCallCompressor,
)

# Keep last N messages (simplest strategy)
config = ContextConfig(
    pre_model=KeepLastConfig(keep_last=25)
)

# Summarize old messages (best for long conversations)
config = ContextConfig(
    pre_model=SummaryConfig(
        compression_threshold=150000,
        keep_recent_tokens=64000,
    )
)

# Compress tool call outputs (for tool-heavy workflows)
compressor = ToolCallCompressor(tools=["web_search"], max_length=500)
config = ContextConfig(
    pre_model=CompressToolsConfig(compressor=compressor)
)
```
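The keep-last strategy is the easiest to picture. A plain-Python sketch of the trimming idea (not langcrew's implementation; the `(role, text)` tuples are a stand-in for real message objects):

```python
def keep_last(messages, n):
    """Keep only the n most recent messages, pinning a leading system
    message if present so the agent's instructions are never dropped."""
    if messages and messages[0][0] == "system":
        system, rest = messages[:1], messages[1:]
    else:
        system, rest = [], messages
    return system + rest[-n:]

history = [("system", "You are helpful")] + [("user", f"msg {i}") for i in range(100)]
trimmed = keep_last(history, 25)
# System prompt survives; only the 25 newest messages remain
```

Summarization works on the same principle but replaces the dropped prefix with an LLM-written summary instead of discarding it outright.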
### Guardrails

Add input/output validation for safety and compliance:
```python
from langcrew import Agent
from langcrew.guardrail import input_guard, output_guard

@input_guard
def validate_input(data):
    """Validate incoming requests"""
    if "sensitive_keyword" in str(data).lower():
        return False, "Sensitive content detected"
    return True, ""

@output_guard
def validate_output(data):
    """Validate agent responses"""
    if len(str(data)) < 10:
        return False, "Response too short"
    return True, ""

agent = Agent(
    role="Safe Agent",
    input_guards=[validate_input],
    output_guards=[validate_output],
    llm=llm,
)
```
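The guard contract above is simple: each validator returns a `(passed, message)` tuple. Independently of langcrew, a chain of such guards can be evaluated like this (`run_guards` is a hypothetical helper, shown only to illustrate the contract):

```python
def run_guards(guards, data):
    """Apply each guard in order; stop at the first failure."""
    for guard in guards:
        passed, message = guard(data)
        if not passed:
            return False, message
    return True, ""

def validate_input(data):
    """Same validator as above, minus the @input_guard decorator."""
    if "sensitive_keyword" in str(data).lower():
        return False, "Sensitive content detected"
    return True, ""

ok, _ = run_guards([validate_input], "normal request")
blocked, reason = run_guards([validate_input], "has SENSITIVE_KEYWORD inside")
```

Failing an input guard prevents the request from ever reaching the LLM; failing an output guard blocks the response from reaching the user.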
### Human-in-the-Loop (HITL)

Enable human oversight for critical decisions:
```python
from langcrew import Agent
from langcrew.hitl import HITLConfig

agent = Agent(
    role="Decision Maker",
    goal="Make important decisions",
    backstory="Requires human approval for critical actions",
    hitl=HITLConfig(
        interrupt_before_tools=["send_email", "make_payment"]  # Approve before using these tools
    ),
    llm=llm,
)
```

Learn more: HITL Guide
### Pre/Post Model Hooks

Inject custom logic before and after LLM calls:
```python
from langcrew import Agent

def pre_hook(state):
    """Log before LLM call"""
    print(f"About to call LLM with: {state}")
    return state

def post_hook(response):
    """Process LLM response"""
    print(f"LLM responded: {response}")
    return response

agent = Agent(
    role="Monitored Agent",
    pre_model_hook=pre_hook,
    post_model_hook=post_hook,
    llm=llm,
)
```
## Integration with Crews

Agents work best when organized into crews for collaborative workflows:
```python
from langcrew import Agent, Crew

# Create specialized agents
researcher = Agent(role="Researcher", goal="Gather information", llm=llm)
writer = Agent(role="Writer", goal="Create content", llm=llm)

# Organize into a crew
crew = Crew(agents=[researcher, writer])
result = crew.kickoff(inputs={"task": "Analyze market data"})
```