Memory
Memory is LangCrew’s high-level memory management component built on top of LangGraph’s Store and Checkpointer systems. It provides a unified, easy-to-use interface for persistent state management in AI agent workflows.
What is Memory?
Memory is a high-level abstraction that wraps LangGraph’s low-level persistence primitives:
- Short-term memory: Session-based conversation history using LangGraph’s Checkpointer
- Long-term memory: Persistent knowledge storage using LangGraph’s Store system
- Multi-provider support: In-memory, SQLite, PostgreSQL, MySQL (production-ready); Redis, MongoDB (experimental)
- Unified configuration: Single MemoryConfig class manages all memory types and LangGraph backends
Core Architecture
Memory provides a multi-layer architecture around LangGraph’s persistence layer with specialized memory types:
Memory Layers
LangCrew implements a sophisticated multi-layer memory system:
```
┌─────────────────────────────────────────┐
│ Short-Term Memory (STM)                 │ ← Session/Conversation State
│ (LangGraph Checkpointer)                │   - Thread-based
│                                         │   - Message history
├─────────────────────────────────────────┤
│ Long-Term Memory (LTM)                  │ ← Cross-Session Knowledge
│ (LangGraph Store)                       │   - User memory
│                                         │   - App memory (experimental)
│ ┌───────────────┬───────────────────┐   │
│ │ User Memory   │ App Memory        │   │
│ │ (Personal)    │ (Shared Insights) │   │
│ └───────────────┴───────────────────┘   │
├─────────────────────────────────────────┤
│ Vector Search Layer                     │ ← Semantic Retrieval
│ (Optional IndexConfig)                  │   - Embedding-based
│                                         │   - Similarity search
└─────────────────────────────────────────┘
```
Application Isolation with app_id
Critical for Multi-Tenant Systems: The app_id parameter provides namespace isolation when multiple applications share the same database:
```python
from langcrew import MemoryConfig
from langcrew.memory import LongTermMemoryConfig

# Application A
memory_a = MemoryConfig(
    provider="postgres",
    connection_string="postgresql://shared-db/memory",
    long_term=LongTermMemoryConfig(
        enabled=True,
        app_id="chatbot-prod-v1",  # Isolates all memories for App A
    ),
)

# Application B (same database, completely isolated)
memory_b = MemoryConfig(
    provider="postgres",
    connection_string="postgresql://shared-db/memory",
    long_term=LongTermMemoryConfig(
        enabled=True,
        app_id="assistant-prod-v2",  # Different namespace
    ),
)
```
Data Isolation Guarantee:
- With app_id: User “alice” in App A is completely separate from “alice” in App B
  - App A user memories: ("user_memories", "chatbot-prod-v1", "alice")
  - App B user memories: ("user_memories", "assistant-prod-v2", "alice")
- Without app_id: Memories lack an application-level namespace (not recommended for production)
Best Practice: Always use app_id in production, especially when:
- Multiple applications share a database
- You need data isolation between environments (dev/staging/prod)
- Compliance requires tenant separation
Vector Search Integration
Enable semantic memory retrieval with vector embeddings:
```python
from langcrew import MemoryConfig
from langcrew.memory import LongTermMemoryConfig

memory = MemoryConfig(
    provider="postgres",
    connection_string="postgresql://localhost/memory",
    long_term=LongTermMemoryConfig(
        enabled=True,
        app_id="my-app",
        index={
            "dims": 1536,  # OpenAI text-embedding-3-small dimension
            "embed": "openai:text-embedding-3-small",
        },
    ),
)
```
Supported Embedding Providers:
- openai:text-embedding-3-small (1536 dims)
- openai:text-embedding-3-large (3072 dims)
- openai:text-embedding-ada-002 (1536 dims)
- Custom embedding models
```python
from langcrew import Crew, MemoryConfig
from langcrew.memory import (
    ShortTermMemoryConfig,
    LongTermMemoryConfig,
    MemoryScopeConfig,
)

# Unified memory configuration
memory_config = MemoryConfig(
    provider="sqlite",
    connection_string="sqlite:///memory.db",
    short_term=ShortTermMemoryConfig(enabled=True),
    long_term=LongTermMemoryConfig(
        enabled=True,
        app_id="my-app",  # RECOMMENDED for production
        user_memory=MemoryScopeConfig(enabled=True),
        app_memory=MemoryScopeConfig(enabled=False),  # Experimental
    ),
)

crew = Crew(
    agents=[agent],
    memory=memory_config,
)
```
Key Components
MemoryConfig
Unified configuration class that manages all three memory types and storage providers.
Core Parameters
| Parameter | Type | Description | Default |
|---|---|---|---|
| enabled | bool | Enable/disable memory functionality | True |
| provider | str | Storage provider: "memory", "sqlite", "postgres", etc. | "memory" |
| connection_string | str \| None | Database connection string | None |
| short_term | ShortTermMemoryConfig | Short-term memory configuration | See below |
| long_term | LongTermMemoryConfig | Long-term memory configuration | See below |
Short-term Memory
Session-based conversation history using LangGraph’s Checkpointer system for immediate context retention.
Key Features:
- Thread-based session management
- Automatic context injection
- Storage provider override support
- Local caching for performance
Configuration Parameters:
| Parameter | Type | Description | Default |
|---|---|---|---|
| enabled | bool | Enable short-term memory | True |
| provider | str \| None | Storage provider override (inherits if None) | None |
| connection_string | str \| None | Connection string override (inherits if None) | None |
```python
from langcrew.memory import MemoryConfig, ShortTermMemoryConfig

# Short-term memory configuration
memory_config = MemoryConfig(
    provider="sqlite",
    connection_string="sqlite:///memory.db",
    short_term=ShortTermMemoryConfig(enabled=True),
)
```
Long-term Memory
Persistent knowledge storage using LangGraph’s Store system for cross-session information retention.
Key Features:
- User Memory: Personal preferences, habits, and context specific to each user
- App Memory: Shared insights across all users (experimental feature)
- Semantic Search: Vector-based retrieval for relevant memories
- Active Learning: Automatic memory triggers based on conversation patterns
Memory Scopes:
| Scope | Enabled by Default | Purpose | Isolation Level |
|---|---|---|---|
| User Memory | ✅ Yes | Personal user data and preferences | Per-user per-app |
| App Memory | ❌ No (Experimental) | Shared application insights | Per-app (all users) |
Configuration Parameters:
| Parameter | Type | Description | Default |
|---|---|---|---|
| enabled | bool | Enable long-term memory | False |
| provider | str \| None | Storage provider override (inherits if None) | None |
| connection_string | str \| None | Connection string override (inherits if None) | None |
| app_id | str \| None | Application identifier (RECOMMENDED for production) | None |
| index | IndexConfig \| None | Vector search configuration | None |
| user_memory | MemoryScopeConfig | User-specific memory configuration | enabled |
| app_memory | MemoryScopeConfig | Application-wide memory (⚠️ experimental) | disabled |
| search_response_format | str | Search result format ("content" or "content_and_artifact") | "content" |
MemoryScopeConfig Parameters:
| Parameter | Type | Description | Default |
|---|---|---|---|
| enabled | bool | Enable this memory scope | True (user), False (app) |
| manage_instructions | str | AI instructions for when to save memories | Built-in defaults |
| search_instructions | str | AI instructions for when to search memories | Built-in defaults |
| schema | type | Data schema for content validation | str |
| actions_permitted | tuple | Allowed actions: ("create", "update", "delete") | All actions |
```python
from langcrew.memory import MemoryConfig, LongTermMemoryConfig, MemoryScopeConfig
from langgraph.store.base import IndexConfig

# Production configuration with all parameters
memory_config = MemoryConfig(
    provider="postgres",
    connection_string="postgresql://localhost/memory",
    long_term=LongTermMemoryConfig(
        enabled=True,
        app_id="my-app-prod",  # RECOMMENDED: Prevents data mixing

        # Vector search configuration
        index=IndexConfig(
            dims=1536,
            embed="openai:text-embedding-3-small",
        ),

        # User memory (enabled by default)
        user_memory=MemoryScopeConfig(
            enabled=True,
            manage_instructions="""Call this tool when user:
            1. Expresses preferences (I like/love/prefer)
            2. Shares personal info (job, location, hobbies)
            3. Explicitly asks you to remember something
            """,
            search_instructions="""Call when you need to:
            1. Recall user preferences or context
            2. Provide personalized recommendations
            """,
            schema=str,
            actions_permitted=("create", "update", "delete"),
        ),

        # App memory (experimental, disabled by default)
        app_memory=MemoryScopeConfig(
            enabled=False,  # ⚠️ Experimental feature
            manage_instructions="Store application-wide insights...",
            search_instructions="Search for common patterns...",
        ),

        # Search result format
        search_response_format="content",  # or "content_and_artifact"
    ),
)
```
Active Memory Triggers:
The manage_instructions and search_instructions guide the agent on when to proactively save/retrieve memories:
- Manage (Save): Triggers when users express preferences, share info, or correct previous data
- Search (Retrieve): Triggers when users ask about themselves or need personalized responses
Learn more: Long-Term Memory Guide
Custom Embedding Models:
For custom embedding models beyond OpenAI, refer to LangGraph’s IndexConfig documentation for full configuration options.
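LangGraph's IndexConfig also accepts a callable in place of a provider string: a function that maps a batch of texts to one vector per text, with each vector matching the configured dims. The toy embedder below only illustrates that shape — a real implementation would call an actual embedding model:

```python
import hashlib

DIMS = 8  # toy dimension; a real model would use e.g. 1536

def my_embed(texts: list[str]) -> list[list[float]]:
    """Toy deterministic embedder: hashes each text into DIMS floats.

    Only the signature matters here (batch of strings in, one
    fixed-length vector per string out); a real implementation
    would call an embedding model instead of hashing.
    """
    vectors = []
    for text in texts:
        digest = hashlib.sha256(text.encode("utf-8")).digest()
        vectors.append([b / 255.0 for b in digest[:DIMS]])
    return vectors

# Would then be configured as: index={"dims": DIMS, "embed": my_embed}
vecs = my_embed(["hello", "world"])
print(len(vecs), len(vecs[0]))  # 2 8
```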
Storage Providers
✅ Production-Ready Providers:
- memory - In-memory storage (development/testing)
- sqlite - SQLite database (single-user/development)
- postgresql - PostgreSQL database (production)
- mysql - MySQL database (production)
⚠️ Experimental Providers:
- redis - Redis storage (experimental, not fully tested)
- mongodb - MongoDB storage (experimental, not fully tested)
```python
from langcrew.memory import MemoryConfig, ShortTermMemoryConfig, LongTermMemoryConfig

# Production configuration with PostgreSQL
memory_config = MemoryConfig(
    provider="postgresql",
    connection_string="postgresql://user:pass@localhost:5432/memory_db",
    short_term=ShortTermMemoryConfig(enabled=True),
    long_term=LongTermMemoryConfig(enabled=True, app_id="my-app"),
)

# Development configuration with SQLite
memory_config = MemoryConfig(
    provider="sqlite",
    connection_string="sqlite:///memory.db",
    short_term=ShortTermMemoryConfig(enabled=True),
    long_term=LongTermMemoryConfig(enabled=True, app_id="my-app"),
)
```
Integration with langcrew
Memory integrates seamlessly with all langcrew components:
- Agents: Agents get memory tools to save and retrieve information proactively
- Tasks: Tasks access memory through their assigned agent (task context is separate from memory)
- Crews: Crews manage shared memory resources (checkpointer and store) for all agents
```python
# Shared memory across crew
from langcrew import Agent, Crew
from langcrew.memory import MemoryConfig, ShortTermMemoryConfig, LongTermMemoryConfig

memory_config = MemoryConfig(
    provider="sqlite",
    connection_string="sqlite:///team_memory.db",
    short_term=ShortTermMemoryConfig(enabled=True),
    long_term=LongTermMemoryConfig(enabled=True, app_id="my-team"),
)

crew = Crew(
    agents=[agent],
    memory=memory_config,  # All agents share this memory
)
```
How it works:
- Crew: Manages checkpointer (short-term) and store (long-term) instances
- Agent: Gets memory tools (manage/search) automatically added to its toolset
- Task: Accesses memory indirectly through its agent, doesn’t manage memory itself
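Short-term memory follows LangGraph's checkpointer convention: conversation state is keyed by a thread_id carried in the run config, so reusing the same thread_id resumes the same session. Assuming LangCrew forwards this config to LangGraph at execution time (an assumption — check the crew execution API), the helper below sketches how session keys might be derived:

```python
# LangGraph convention: the checkpointer keys conversation state by
# config["configurable"]["thread_id"]. Reusing a thread_id resumes the
# same short-term memory; a new thread_id starts a fresh session.
def session_config(user_id: str, session: str) -> dict:
    # Hypothetical helper: how you derive thread IDs is up to you.
    return {"configurable": {"thread_id": f"{user_id}-{session}"}}

turn_1 = session_config("user-42", "session-1")
turn_2 = session_config("user-42", "session-1")   # same session: state resumes
new_session = session_config("user-42", "session-2")  # fresh history
print(turn_1["configurable"]["thread_id"])  # user-42-session-1
```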
When to Use Memory
- Conversational agents needing to remember user preferences and conversation history
- Multi-session applications requiring continuity across user interactions
- Knowledge-intensive workflows that need to accumulate and retrieve information
- Team collaboration scenarios with shared context and entity tracking
- Customer service applications requiring interaction history and customer profiles