Getting Started
A quick guide to getting web services working in your LangCrew application.
LangCrew’s Web module transforms your AI agents into production-ready web services with real-time streaming communication.
Quick Start - Just 3 Lines
Get your agent running as a web service in just 3 lines:
```python
from langcrew import Agent, Crew
from langcrew.web import create_server

agent = Agent(role="Assistant", goal="Help users", backstory="Helpful AI")
server = create_server(Crew(agents=[agent]))
server.run(port=8000)  # Visit http://localhost:8000/docs
```

That’s it! Your agent is now available as a web service with automatic API documentation.
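Once the server is up, you can exercise it from any HTTP client. A minimal sketch using only the standard library (the `/api/v1/chat` path comes from the testing examples later on this page; `chat_request` is a hypothetical helper, not part of LangCrew):

```python
import json
import urllib.request

def chat_request(message, session_id=None, base="http://localhost:8000"):
    """Build the JSON request for /api/v1/chat; session_id is optional."""
    payload = {"message": message}
    if session_id is not None:
        payload["session_id"] = session_id
    return urllib.request.Request(
        f"{base}/api/v1/chat",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = chat_request("Hello!")
print(req.full_url)  # → http://localhost:8000/api/v1/chat
print(req.data)      # → b'{"message": "Hello!"}'
```

Sending the request with `urllib.request.urlopen(req)` against a running server returns the streamed response covered in the client streaming example later on this page.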
Installation
Web services are included in langcrew - no additional installation required:

```shell
uv add langcrew --prerelease=allow
```

Detailed Examples
Here are more detailed examples for different use cases:
Method 1: Using LangCrew
```python
from langcrew import Agent, Crew
from langcrew.web import create_server

# Create agent and crew
agent = Agent(
    role="Web Assistant",
    goal="Help users through web interface",
    backstory="You are a helpful web-based AI assistant",
    verbose=True,
)

crew = Crew(agents=[agent])

# Create the server
server = create_server(crew)

# Start the server
if __name__ == "__main__":
    print("Starting web server at http://0.0.0.0:8000")
    server.run(host="0.0.0.0", port=8000)
```

Method 2: Using LangGraph
```python
from langcrew.web import create_langgraph_server
from langgraph.graph import StateGraph

# Create your LangGraph workflow
def create_workflow():
    # Note: recent langgraph versions require a state schema,
    # e.g. StateGraph(MyState)
    workflow = StateGraph()
    # Add your nodes and edges here
    return workflow.compile()

# Create the server with the compiled LangGraph
compiled_graph = create_workflow()
server = create_langgraph_server(compiled_graph)

# Start the server
if __name__ == "__main__":
    print("Starting LangGraph web server at http://0.0.0.0:8000")
    server.run(host="0.0.0.0", port=8000)
```

Testing Your Server
Once your server is running, you can:
1. Use the Built-in Web UI
LangCrew provides a ready-to-use React-based web interface:

```shell
# Navigate to the web directory
cd web/

# Install dependencies
pnpm install

# Start the development server
pnpm dev

# Open your browser to http://localhost:3600/chat
```

The web UI includes:
- Real-time Chat Interface: Stream responses with typing indicators
- Tool Call Visualization: See agent tool usage in real-time
- File Upload Support: Upload documents for analysis
- Session Management: Maintain conversation history
- Modern React Components: Built with Antd + Tailwind CSS
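The web UI consumes the backend's newline-delimited JSON stream to drive these features. A minimal sketch of parsing such a stream in Python (the message shape - a `type`, a `content`, and a `session_init` event carrying `detail.session_id` - is taken from the client streaming example later on this page; the helper names are hypothetical):

```python
import json

def iter_events(lines):
    """Parse newline-delimited JSON chat events, skipping keep-alive blanks."""
    for line in lines:
        if line:
            yield json.loads(line)

def collect_session(lines):
    """Return (session_id, events) from a stream of raw NDJSON lines."""
    session_id = None
    events = []
    for message in iter_events(lines):
        events.append(message)
        if message["type"] == "session_init":
            session_id = message["detail"]["session_id"]
    return session_id, events

# Example with a captured stream:
raw = [
    b'{"type": "session_init", "content": "", "detail": {"session_id": "abc123"}}',
    b"",
    b'{"type": "message", "content": "Hello!", "detail": {}}',
]
sid, events = collect_session(raw)
print(sid)          # → abc123
print(len(events))  # → 2
```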
2. Access API Documentation
Visit http://localhost:8000/docs to see the auto-generated API documentation.
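The /docs page is the human-readable view of a machine-readable OpenAPI schema. Assuming the server follows the usual FastAPI convention (an assumption - verify against your server), the raw schema is also served at http://localhost:8000/openapi.json and can be inspected programmatically; `list_routes` below is a hypothetical helper:

```python
def list_routes(schema: dict) -> list[tuple[str, list[str]]]:
    """Return (path, methods) pairs from an OpenAPI schema dict."""
    return [
        (path, sorted(method.upper() for method in operations))
        for path, operations in schema.get("paths", {}).items()
    ]

# In practice you would load the schema from the running server, e.g.:
#   import json, urllib.request
#   with urllib.request.urlopen("http://localhost:8000/openapi.json") as resp:
#       schema = json.load(resp)
# Here, a minimal fragment containing the documented chat endpoint:
sample = {"paths": {"/api/v1/chat": {"post": {"summary": "Chat"}}}}
print(list_routes(sample))  # → [('/api/v1/chat', ['POST'])]
```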
3. Send Test Messages
```shell
# Start a new conversation (no session_id)
curl -X POST "http://localhost:8000/api/v1/chat" \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, how can you help me?"}'

# Continue an existing conversation
curl -X POST "http://localhost:8000/api/v1/chat" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Can you provide more details?",
    "session_id": "abc123def456789a"
  }'
```

4. Client Streaming Example
If you need to handle streaming responses on the client side:
```python
import requests
import json

# Start a new conversation
response = requests.post(
    "http://localhost:8000/api/v1/chat",
    json={
        "message": "Hello, can you help me analyze this document?"
        # session_id is optional - omit it for a new conversation
    },
    stream=True,
)

session_id = None

# Handle the streaming response
for line in response.iter_lines():
    if line:
        message = json.loads(line)
        print(f"Received: {message['type']} - {message['content']}")

        # Extract session_id from the session_init message
        if message["type"] == "session_init":
            session_id = message["detail"]["session_id"]
            print(f"New session created: {session_id}")

# Continue the conversation with the same session_id
if session_id:
    response = requests.post(
        "http://localhost:8000/api/v1/chat",
        json={
            "message": "Can you provide more details?",
            "session_id": session_id,
        },
        stream=True,
    )

    for line in response.iter_lines():
        if line:
            message = json.loads(line)
            print(f"Received: {message['type']} - {message['content']}")
```

Production Deployment
Docker Compose (Recommended)
The easiest way to deploy LangCrew with both backend and frontend:
```shell
# From the repository root
export OPENAI_API_KEY=your-openai-key  # or ANTHROPIC_API_KEY / DASHSCOPE_API_KEY

# Optional configuration
export LOG_LEVEL=info  # debug|info|warning|error

# Start services
docker compose up --build
```

Available endpoints:
- Web Chat Interface: http://localhost:3600
- Backend API Documentation: http://localhost:8000/docs
Local Development Setup
Backend Server:
```shell
# 1. Configure an API key
export OPENAI_API_KEY=your-openai-key  # or ANTHROPIC_API_KEY / DASHSCOPE_API_KEY

# 2. Run the server
cd examples/components/web/web_chat
uv run run_server.py
```

The server will start at http://localhost:8000.
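The entry point needs one of the provider keys exported in step 1. A small startup check (a hypothetical helper, not part of LangCrew) can make a missing key fail fast with a clear message:

```python
import os

# Provider keys mentioned in this guide
SUPPORTED_KEYS = ("OPENAI_API_KEY", "ANTHROPIC_API_KEY", "DASHSCOPE_API_KEY")

def find_provider_key(env=None):
    """Return the name of the first configured provider key, or None."""
    env = os.environ if env is None else env
    for name in SUPPORTED_KEYS:
        if env.get(name):
            return name
    return None

# Demonstration with sample environments:
print(find_provider_key({"OPENAI_API_KEY": "sk-test"}))  # → OPENAI_API_KEY
print(find_provider_key({}))                             # → None
```

At server startup you could call `find_provider_key()` (which defaults to `os.environ`) and exit with an explanatory message when it returns `None`.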
Frontend Interface:
```shell
# 1. Navigate to the web directory
cd web/

# 2. Install dependencies and start the development server
pnpm install
pnpm dev
```

Open your browser to http://localhost:3600/chat.