
Getting Started

A quick guide to getting Web services working in your LangCrew application.

LangCrew’s Web module transforms your AI agents into production-ready web services with real-time streaming communication.

Get your agent running as a web service in just a few lines:

from langcrew import Agent, Crew
from langcrew.web import create_server
agent = Agent(role="Assistant", goal="Help users", backstory="Helpful AI")
server = create_server(Crew(agents=[agent]))
server.run(port=8000) # Visit http://localhost:8000/docs

That’s it! Your agent is now available as a web service with automatic API documentation.

Web services are included in langcrew - no additional installation required:

Terminal window
uv add langcrew --prerelease=allow

Here are more detailed examples for different use cases:

from langcrew import Agent, Crew
from langcrew.web import create_server

# Create agent and crew
agent = Agent(
    role="Web Assistant",
    goal="Help users through web interface",
    backstory="You are a helpful web-based AI assistant",
    verbose=True,
)
crew = Crew(agents=[agent])

# Create the server
server = create_server(crew)

# Start the server
if __name__ == "__main__":
    print("Starting web server at http://0.0.0.0:8000")
    server.run(host="0.0.0.0", port=8000)
from typing import TypedDict

from langcrew.web import create_langgraph_server
from langgraph.graph import StateGraph

# Define the state schema for the workflow
class State(TypedDict):
    messages: list

# Create your LangGraph workflow
def create_workflow():
    workflow = StateGraph(State)  # StateGraph requires a state schema
    # Add your nodes and edges here
    # (a compiled graph needs at least one node and an entrypoint)
    return workflow.compile()

# Create server with LangGraph
compiled_graph = create_workflow()
server = create_langgraph_server(compiled_graph)

# Start the server
if __name__ == "__main__":
    print("Starting LangGraph web server at http://0.0.0.0:8000")
    server.run(host="0.0.0.0", port=8000)

Once your server is running, you can interact with it through the bundled web UI, the auto-generated API documentation, or the REST API directly.

LangCrew provides a ready-to-use React-based web interface:

Terminal window
# Navigate to the web directory
cd web/
# Install dependencies
pnpm install
# Start the development server
pnpm dev
# Open your browser to http://localhost:3600/chat

The web UI includes:

  • Real-time Chat Interface: Stream responses with typing indicators
  • Tool Call Visualization: See agent tool usage in real-time
  • File Upload Support: Upload documents for analysis
  • Session Management: Maintain conversation history
  • Modern React Components: Built with Antd + Tailwind CSS

Visit http://localhost:8000/docs to see the auto-generated API documentation

Terminal window
# Start a new conversation (no session_id)
curl -X POST "http://localhost:8000/api/v1/chat" \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, how can you help me?"}'

# Continue an existing conversation
curl -X POST "http://localhost:8000/api/v1/chat" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Can you provide more details?",
    "session_id": "abc123def456789a"
  }'
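The two curl calls above differ only in the optional session_id field. A tiny helper (hypothetical, not part of langcrew) makes that payload rule explicit:

```python
import json

def build_chat_payload(message, session_id=None):
    """Build the JSON body for POST /api/v1/chat.

    session_id is omitted entirely for a new conversation,
    mirroring the curl examples above.
    """
    payload = {"message": message}
    if session_id is not None:
        payload["session_id"] = session_id
    return json.dumps(payload)

print(build_chat_payload("Hello, how can you help me?"))
# {"message": "Hello, how can you help me?"}
print(build_chat_payload("Can you provide more details?", "abc123def456789a"))
```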

If you need to handle streaming responses on the client side:

import requests
import json

BASE_URL = "http://localhost:8000"

# Start a new conversation
response = requests.post(f"{BASE_URL}/api/v1/chat", json={
    "message": "Hello, can you help me analyze this document?"
    # session_id is optional - omit it for a new conversation
}, stream=True)

session_id = None

# Handle the streaming response
for line in response.iter_lines():
    if line:
        message = json.loads(line)
        print(f"Received: {message['type']} - {message['content']}")
        # Extract session_id from the session_init message
        if message['type'] == 'session_init':
            session_id = message['detail']['session_id']
            print(f"New session created: {session_id}")

# Continue the conversation with the same session_id
if session_id:
    response = requests.post(f"{BASE_URL}/api/v1/chat", json={
        "message": "Can you provide more details?",
        "session_id": session_id
    }, stream=True)
    for line in response.iter_lines():
        if line:
            message = json.loads(line)
            print(f"Received: {message['type']} - {message['content']}")
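For testing, the stream-handling loop above can be factored into a helper that works on any iterable of NDJSON lines. The message shapes are assumed from the example (session_init carrying detail.session_id); verify them against your server's actual output:

```python
import json

def parse_stream(lines):
    """Parse newline-delimited JSON messages from the chat stream.

    Returns (session_id, messages). session_id is taken from the
    session_init message if one appears, else None.
    """
    session_id = None
    messages = []
    for line in lines:
        if not line:  # skip keep-alive blank lines
            continue
        message = json.loads(line)
        messages.append(message)
        if message.get("type") == "session_init":
            session_id = message["detail"]["session_id"]
    return session_id, messages

# Simulated stream, shaped like the messages in the example above
fake_stream = [
    b'{"type": "session_init", "content": "", "detail": {"session_id": "abc123def456789a"}}',
    b'{"type": "message", "content": "Hello!", "detail": {}}',
]
session_id, messages = parse_stream(fake_stream)
print(session_id)  # abc123def456789a
```

This keeps the network loop (`response.iter_lines()`) separate from the parsing logic, so the latter can be unit-tested without a running server.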

The easiest way to deploy LangCrew with both backend and frontend:

Terminal window
# From the repository root
export OPENAI_API_KEY=your-openai-key # or ANTHROPIC_API_KEY / DASHSCOPE_API_KEY
# Optional configuration
export LOG_LEVEL=info # debug|info|warning|error
# Start services
docker compose up --build

Once the containers are up, the backend API is available at http://localhost:8000 (docs at http://localhost:8000/docs) and the web UI at http://localhost:3600/chat.

Backend Server:

Terminal window
# 1. Configure API Key
export OPENAI_API_KEY=your-openai-key # or ANTHROPIC_API_KEY / DASHSCOPE_API_KEY
# 2. Run the Server
cd examples/components/web/web_chat
uv run run_server.py

The server will start at http://localhost:8000

Frontend Interface:

Terminal window
# 1. Navigate to web directory
cd web/
# 2. Install dependencies and start development server
pnpm install
pnpm dev

Open your browser to http://localhost:3600/chat