Memory

Memory is LangCrew’s high-level memory management component built on top of LangGraph’s Store and Checkpointer systems. It provides a unified, easy-to-use interface for persistent state management in AI agent workflows.

Memory is a high-level abstraction that wraps LangGraph’s low-level persistence primitives:

  • Short-term memory: Session-based conversation history using LangGraph’s Checkpointer
  • Long-term memory: Persistent knowledge storage using LangGraph’s Store system
  • Multi-provider support: In-memory, SQLite, PostgreSQL, MySQL (production-ready); Redis, MongoDB (experimental)
  • Unified configuration: Single MemoryConfig class manages all memory types and LangGraph backends

Memory provides a multi-layer architecture around LangGraph’s persistence layer, with specialized memory types:

┌─────────────────────────────────────────┐
│ Short-Term Memory (STM)                 │ ← Session/Conversation State
│ (LangGraph Checkpointer)                │   - Thread-based
│                                         │   - Message history
├─────────────────────────────────────────┤
│ Long-Term Memory (LTM)                  │ ← Cross-Session Knowledge
│ (LangGraph Store)                       │   - User memory
│                                         │   - App memory (experimental)
│  ┌───────────────┬───────────────────┐  │
│  │ User Memory   │ App Memory        │  │
│  │ (Personal)    │ (Shared Insights) │  │
│  └───────────────┴───────────────────┘  │
├─────────────────────────────────────────┤
│ Vector Search Layer                     │ ← Semantic Retrieval
│ (Optional IndexConfig)                  │   - Embedding-based
│                                         │   - Similarity search
└─────────────────────────────────────────┘

Critical for Multi-Tenant Systems: The app_id parameter provides namespace isolation when multiple applications share the same database:

from langcrew import MemoryConfig
from langcrew.memory import LongTermMemoryConfig

# Application A
memory_a = MemoryConfig(
    provider="postgres",
    connection_string="postgresql://shared-db/memory",
    long_term=LongTermMemoryConfig(
        enabled=True,
        app_id="chatbot-prod-v1"  # Isolates all memories for App A
    )
)

# Application B (same database, completely isolated)
memory_b = MemoryConfig(
    provider="postgres",
    connection_string="postgresql://shared-db/memory",
    long_term=LongTermMemoryConfig(
        enabled=True,
        app_id="assistant-prod-v2"  # Different namespace
    )
)

Data Isolation Guarantee:

  • With app_id: User “alice” in App A is completely separate from “alice” in App B
    • App A user memories: ("user_memories", "chatbot-prod-v1", "alice")
    • App B user memories: ("user_memories", "assistant-prod-v2", "alice")
  • Without app_id: Memories lack application-level namespace (not recommended for production)
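
The isolation guarantee can be pictured with a plain dictionary keyed by (scope, app_id, user_id) tuples; this is a conceptual sketch of the namespacing, not LangCrew’s actual storage code:

```python
# Conceptual model of app_id namespace isolation (not LangCrew internals):
# memories are keyed by a (scope, app_id, user_id) tuple, so the same
# user_id under different app_ids never collides.
store: dict[tuple[str, str, str], dict[str, str]] = {}

def put(app_id: str, user_id: str, key: str, value: str) -> None:
    store.setdefault(("user_memories", app_id, user_id), {})[key] = value

def get_all(app_id: str, user_id: str) -> dict[str, str]:
    return store.get(("user_memories", app_id, user_id), {})

put("chatbot-prod-v1", "alice", "drink", "prefers tea")
put("assistant-prod-v2", "alice", "drink", "prefers coffee")

print(get_all("chatbot-prod-v1", "alice"))    # {'drink': 'prefers tea'}
print(get_all("assistant-prod-v2", "alice"))  # {'drink': 'prefers coffee'}
```

Because app_id is part of the key, "alice" in App A and "alice" in App B never read each other’s entries.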

Best Practice: Always use app_id in production, especially when:

  • Multiple applications share a database
  • You need data isolation between environments (dev/staging/prod)
  • Compliance requires tenant separation

Enable semantic memory retrieval with vector embeddings:

from langcrew import MemoryConfig
from langcrew.memory import LongTermMemoryConfig

memory = MemoryConfig(
    provider="postgres",
    connection_string="postgresql://localhost/memory",
    long_term=LongTermMemoryConfig(
        enabled=True,
        app_id="my-app",
        index={
            "dims": 1536,  # OpenAI text-embedding-3-small dimension
            "embed": "openai:text-embedding-3-small"
        }
    )
)
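
Under the hood, semantic retrieval ranks stored memories by vector similarity to the query embedding. A minimal cosine-similarity sketch with toy 3-dimensional vectors (real embeddings would come from the configured embed model and have the configured dims, e.g. 1536):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Toy embeddings; a real setup would call the configured "embed" model.
memories = {
    "user likes green tea": [0.9, 0.1, 0.0],
    "user works in Berlin": [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # toy embedding of "what drinks does the user like?"

best = max(memories, key=lambda m: cosine(memories[m], query))
print(best)  # user likes green tea
```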

Supported Embedding Providers:

  • openai:text-embedding-3-small (1536 dims)
  • openai:text-embedding-3-large (3072 dims)
  • openai:text-embedding-ada-002 (1536 dims)
  • Custom embedding models

from langcrew import Crew, MemoryConfig
from langcrew.memory import ShortTermMemoryConfig, LongTermMemoryConfig, MemoryScopeConfig

# Unified memory configuration
memory_config = MemoryConfig(
    provider="sqlite",
    connection_string="sqlite:///memory.db",
    short_term=ShortTermMemoryConfig(enabled=True),
    long_term=LongTermMemoryConfig(
        enabled=True,
        app_id="my-app",  # RECOMMENDED for production
        user_memory=MemoryScopeConfig(enabled=True),
        app_memory=MemoryScopeConfig(enabled=False)  # Experimental
    )
)

crew = Crew(
    agents=[agent],
    memory=memory_config
)

Unified configuration class that manages all three memory types and storage providers.

| Parameter | Type | Description | Default |
| --- | --- | --- | --- |
| enabled | bool | Enable/disable memory functionality | True |
| provider | str | Storage provider: "memory", "sqlite", "postgres", etc. | "memory" |
| connection_string | str or None | Database connection string | None |
| short_term | dict | Short-term memory configuration | See below |
| long_term | dict | Long-term memory configuration | See below |

Session-based conversation history using LangGraph’s Checkpointer system for immediate context retention.

Key Features:

  • Thread-based session management
  • Automatic context injection
  • Storage provider override support
  • Local caching for performance
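
Thread-based session management can be pictured as a checkpoint map keyed by thread ID; the following is a conceptual sketch, not LangGraph’s actual Checkpointer API:

```python
# Conceptual model of thread-scoped short-term memory: each thread_id
# accumulates its own message history, independent of other threads.
checkpoints: dict[str, list[dict[str, str]]] = {}

def append_message(thread_id: str, role: str, content: str) -> None:
    checkpoints.setdefault(thread_id, []).append({"role": role, "content": content})

def history(thread_id: str) -> list[dict[str, str]]:
    return checkpoints.get(thread_id, [])

append_message("session-1", "user", "My name is Alice.")
append_message("session-1", "assistant", "Nice to meet you, Alice!")
append_message("session-2", "user", "Hello?")

print(len(history("session-1")))  # 2
print(len(history("session-2")))  # 1
```

A new thread ID starts with an empty history, which is why separate sessions do not leak context into one another.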

Configuration Parameters:

| Parameter | Type | Description | Default |
| --- | --- | --- | --- |
| enabled | bool | Enable short-term memory | True |
| provider | str or None | Storage provider override (inherits if None) | None |
| connection_string | str or None | Connection string override (inherits if None) | None |

from langcrew.memory import MemoryConfig, ShortTermMemoryConfig

# Short-term memory configuration
memory_config = MemoryConfig(
    provider="sqlite",
    connection_string="sqlite:///memory.db",
    short_term=ShortTermMemoryConfig(enabled=True)
)

Persistent knowledge storage using LangGraph’s Store system for cross-session information retention.

Key Features:

  • User Memory: Personal preferences, habits, and context specific to each user
  • App Memory: Shared insights across all users (experimental feature)
  • Semantic Search: Vector-based retrieval for relevant memories
  • Active Learning: Automatic memory triggers based on conversation patterns

Memory Scopes:

| Scope | Enabled by Default | Purpose | Isolation Level |
| --- | --- | --- | --- |
| User Memory | ✅ Yes | Personal user data and preferences | Per-user, per-app |
| App Memory | ❌ No (Experimental) | Shared application insights | Per-app (all users) |

Configuration Parameters:

| Parameter | Type | Description | Default |
| --- | --- | --- | --- |
| enabled | bool | Enable long-term memory | False |
| provider | str or None | Storage provider override (inherits if None) | None |
| connection_string | str or None | Connection string override (inherits if None) | None |
| app_id | str or None | Application identifier (RECOMMENDED for production) | None |
| index | IndexConfig or None | Vector search configuration | None |
| user_memory | MemoryScopeConfig | User-specific memory configuration | enabled |
| app_memory | MemoryScopeConfig | Application-wide memory (⚠️ experimental) | disabled |
| search_response_format | str | Search result format: "content" or "content_and_artifact" | "content" |

MemoryScopeConfig Parameters:

| Parameter | Type | Description | Default |
| --- | --- | --- | --- |
| enabled | bool | Enable this memory scope | True (user), False (app) |
| manage_instructions | str | AI instructions for when to save memories | Built-in defaults |
| search_instructions | str | AI instructions for when to search memories | Built-in defaults |
| schema | type | Data schema for content validation | str |
| actions_permitted | tuple | Allowed actions: "create", "update", "delete" | All actions |

from langcrew.memory import MemoryConfig, LongTermMemoryConfig, MemoryScopeConfig
from langgraph.store.base import IndexConfig

# Production configuration with all parameters
memory_config = MemoryConfig(
    provider="postgres",
    connection_string="postgresql://localhost/memory",
    long_term=LongTermMemoryConfig(
        enabled=True,
        app_id="my-app-prod",  # RECOMMENDED: Prevents data mixing
        # Vector search configuration
        index=IndexConfig(
            dims=1536,
            embed="openai:text-embedding-3-small"
        ),
        # User memory (enabled by default)
        user_memory=MemoryScopeConfig(
            enabled=True,
            manage_instructions="""Call this tool when user:
            1. Expresses preferences (I like/love/prefer)
            2. Shares personal info (job, location, hobbies)
            3. Explicitly asks you to remember something
            """,
            search_instructions="""Call when you need to:
            1. Recall user preferences or context
            2. Provide personalized recommendations
            """,
            schema=str,
            actions_permitted=("create", "update", "delete")
        ),
        # App memory (experimental, disabled by default)
        app_memory=MemoryScopeConfig(
            enabled=False,  # ⚠️ Experimental feature
            manage_instructions="Store application-wide insights...",
            search_instructions="Search for common patterns..."
        ),
        # Search result format
        search_response_format="content"  # or "content_and_artifact"
    )
)

Active Memory Triggers:

The manage_instructions and search_instructions guide the agent on when to proactively save/retrieve memories:

  • Manage (Save): Triggers when users express preferences, share info, or correct previous data
  • Search (Retrieve): Triggers when users ask about themselves or need personalized responses
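
In LangCrew the save/search decision is made by the LLM reading these instructions, but the trigger logic they describe can be illustrated with a simple, purely hypothetical keyword heuristic:

```python
import re

# Hypothetical heuristic mirroring the manage_instructions examples:
# save when the user expresses a preference, shares personal info,
# or explicitly asks to be remembered. (Illustration only; the real
# trigger is the LLM interpreting the instructions.)
SAVE_PATTERNS = [
    r"\bI (like|love|prefer)\b",
    r"\bI (work|live) (as|in)\b",
    r"\bremember (that|this)\b",
]

def should_save_memory(message: str) -> bool:
    return any(re.search(p, message, re.IGNORECASE) for p in SAVE_PATTERNS)

print(should_save_memory("I love green tea"))                     # True
print(should_save_memory("Please remember that I'm vegetarian"))  # True
print(should_save_memory("What's the weather today?"))            # False
```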

Learn more: Long-Term Memory Guide

Custom Embedding Models:

For custom embedding models beyond OpenAI, refer to LangGraph’s IndexConfig documentation for full configuration options.

✅ Production-Ready Providers:

  • memory - In-memory storage (development/testing)
  • sqlite - SQLite database (single-user/development)
  • postgresql - PostgreSQL database (production)
  • mysql - MySQL database (production)

⚠️ Experimental Providers:

  • redis - Redis storage (experimental, not fully tested)
  • mongodb - MongoDB storage (experimental, not fully tested)

from langcrew.memory import MemoryConfig, ShortTermMemoryConfig, LongTermMemoryConfig

# Production configuration with PostgreSQL
memory_config = MemoryConfig(
    provider="postgresql",
    connection_string="postgresql://user:pass@localhost:5432/memory_db",
    short_term=ShortTermMemoryConfig(enabled=True),
    long_term=LongTermMemoryConfig(enabled=True, app_id="my-app")
)

# Development configuration with SQLite
memory_config = MemoryConfig(
    provider="sqlite",
    connection_string="sqlite:///memory.db",
    short_term=ShortTermMemoryConfig(enabled=True),
    long_term=LongTermMemoryConfig(enabled=True, app_id="my-app")
)

Memory integrates with all LangCrew components:

  • Agents: Agents get memory tools to save and retrieve information proactively
  • Tasks: Tasks access memory through their assigned agent (task context is separate from memory)
  • Crews: Crews manage shared memory resources (checkpointer and store) for all agents

# Shared memory across crew
from langcrew import Agent, Crew
from langcrew.memory import MemoryConfig, ShortTermMemoryConfig, LongTermMemoryConfig

memory_config = MemoryConfig(
    provider="sqlite",
    connection_string="sqlite:///team_memory.db",
    short_term=ShortTermMemoryConfig(enabled=True),
    long_term=LongTermMemoryConfig(enabled=True, app_id="my-team")
)

crew = Crew(
    agents=[agent],
    memory=memory_config  # All agents share this memory
)

How it works:

  • Crew: Manages checkpointer (short-term) and store (long-term) instances
  • Agent: Gets memory tools (manage/search) automatically added to its toolset
  • Task: Accesses memory indirectly through its agent, doesn’t manage memory itself

Typical use cases:

  • Conversational agents needing to remember user preferences and conversation history
  • Multi-session applications requiring continuity across user interactions
  • Knowledge-intensive workflows that need to accumulate and retrieve information
  • Team collaboration scenarios with shared context and entity tracking
  • Customer service applications requiring interaction history and customer profiles