Norns Memory System
The Raven Model cognitive architecture for long-term memory, learning, and RAG.
Overview
The Norns agent implements a three-plane cognitive architecture inspired by Odin's ravens (Huginn and Muninn), providing real-time state management, contextual awareness, long-term learning, and document-based RAG.
┌─────────────────────────────────────────────────────────────┐
│ HUGINN (State Plane) - Real-time Session State │
│ - Conversation flow and immediate context │
│ - Active messages and task state │
│ - Redis-backed (session_id keyed, fast access) │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ CONTEXT (Identity Plane) - Assembled User Context │
│ - UserIdentity, DomainAccess permissions │
│ - Active projects, calendar, voice mode │
│ - Applicable rules and defaults │
└─────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────┐
│ MUNINN (Memory Plane) - Long-term Memory & Learning │
│ - Episodic: Raw experiences (interactions, observations) │
│ - Semantic: Learned patterns and correlations │
│ - Knowledge: Entity graph (people, places, concepts) │
│ - Documents: RAG chunks for grounded responses │
│ - Vector embeddings for unified semantic search │
└─────────────────────────────────────────────────────────────┘
Software Stack
FastAPI Server (norns-agent:8000)
├── LangGraph Graph Processing
├── Database Services (PostgreSQL + pgvector)
│ ├── ContextService (identity, domains)
│ ├── MuninnContextService (memory assembly)
│ ├── EpisodicMemoryService (experience storage)
│ ├── SemanticMemoryService (pattern learning)
│ └── DocumentService (RAG ingestion & retrieval)
├── Cache Layer (Redis)
├── Embedding Service
│ ├── Ollama (primary - nomic-embed-text, 768 dims)
│ └── OpenAI (fallback - ada-002, 1536 dims)
├── LLM (Claude via Anthropic API)
└── Integrations (Slack, Calendar, Home Assistant)
Unified Memory Search
The memory system provides unified search across 4 memory types:
| Type | Table | Purpose |
|---|---|---|
| episodic | episodic_memories | Past interactions and observations |
| semantic | semantic_patterns | Learned behavioral patterns |
| knowledge | knowledge_entities | Named entities and relationships |
| documents | document_chunks | RAG document chunks |
Endpoint: POST /api/memories/search
class MemorySearchRequest(BaseModel):
    query: str               # Text to search for
    user_id: str             # User UUID
    memory_types: list[str]  # ["episodic", "semantic", "knowledge", "documents"]
    limit: int = 10          # Max results per type
All memory types use the same embedding model and vector similarity search, enabling seamless cross-memory retrieval.
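For example, a minimal client call might look like this; the agent address comes from the stack diagram above, and the response shape follows the [{type, id, content, similarity}, ...] format shown under Memory Recall below:
# Sketch of a unified memory search call (host and response shape per
# the rest of this document; adjust for your deployment).
import requests

resp = requests.post(
    "http://norns-agent:8000/api/memories/search",
    json={
        "query": "spanish lessons",
        "user_id": "00000000-0000-0000-0000-000000000000",
        "memory_types": ["episodic", "semantic", "knowledge", "documents"],
        "limit": 10,
    },
    timeout=10,
)
resp.raise_for_status()
for hit in resp.json():
    print(hit["type"], round(hit["similarity"], 3), hit["content"][:80])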
Memory Layers
Episodic Memory
Stores raw experiences as searchable vector-embedded memories.
Table: episodic_memories
| Column | Type | Purpose |
|---|---|---|
| memory_id | UUID | Primary key |
| user_id | UUID | Memory owner |
| episode_type | varchar | interaction, observation |
| source_channel | varchar | slack, voice, api |
| raw_content | text | Full message + response |
| content_embedding | vector[768] | Semantic search vector |
| context_snapshot | jsonb | Full context at interaction time |
| importance_score | float | 0.0-1.0, heuristic-based |
| extracted_entities | jsonb | NER results |
| extracted_intents | jsonb | Intent classification |
| access_count | int | Usage tracking |
| consolidated_to_semantic | bool | Marked for pattern extraction |
Key Operations:
- record_interaction() - Store chat with embeddings
- record_observation() - Store system observations
- search_memories() - Vector similarity search
- mark_for_consolidation() - Flag for pattern extraction
Importance Scoring:
- Base score: 0.5
- Length factor: +0.1 each for >500 and >1000 chars
- Entity factor: +0.05 per entity (max 0.2)
- Action intent: +0.1 (create, update, complete, delete)
- Question factor: +0.05 (learning opportunity)
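A minimal sketch of this heuristic, assuming entities and intents are the extracted lists stored with the memory and that questions are detected by a simple "?" check (the production detector may differ):
def score_importance(content: str, entities: list, intents: list) -> float:
    """Heuristic importance score per the factors above (sketch only)."""
    score = 0.5                              # base score
    if len(content) > 500:
        score += 0.1                         # length factor
    if len(content) > 1000:
        score += 0.1
    score += min(0.05 * len(entities), 0.2)  # entity factor, capped at 0.2
    if {"create", "update", "complete", "delete"} & set(intents):
        score += 0.1                         # action intent
    if "?" in content:
        score += 0.05                        # question factor (assumed check)
    return min(score, 1.0)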
Semantic Memory
Extracts patterns and learns from episodic experiences.
Table: semantic_patterns
| Column | Type | Purpose |
|---|---|---|
| pattern_id | UUID | Primary key |
| user_id | UUID | Pattern owner |
| pattern_type | varchar | behavioral, temporal, causal, preference |
| pattern_category | varchar | action_frequency, activity_time, domain_focus |
| pattern_name | varchar | Human-readable name |
| pattern_embedding | vector[768] | For pattern search |
| confidence_score | float | 0.0-1.0 |
| evidence_count | int | Supporting episode count |
| status | varchar | emerging → active → deprecated |
| derived_from_episodes | uuid[] | Source episodic memories |
Table: semantic_correlations
| Column | Type | Purpose |
|---|---|---|
| concept_a_type | varchar | domain, time, entity |
| concept_a_value | varchar | Specific value |
| concept_b_type | varchar | Correlation target type |
| concept_b_value | varchar | Correlation target value |
| correlation_strength | float | 0.0-1.0 |
Table: action_outcome_mappings
| Column | Type | Purpose |
|---|---|---|
| action_type | varchar | task_scheduling, priority_assignment |
| action_context | jsonb | Context of action |
| outcome_type | varchar | success, failure, partial |
| effectiveness_score | float | 0.0-1.0 |
Pattern Types:
- Temporal: Peak activity hours, days of week
- Behavioral: Repeated actions, frequent task creation
- Domain: Which domains user focuses on
- Preference: Consistent choices over time
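For illustration, a stored temporal pattern might carry values like the following; this is hypothetical data, expressed as a Python dict mirroring the semantic_patterns columns:
# Hypothetical example of a stored temporal pattern (illustrative values only).
peak_hours_pattern = {
    "pattern_type": "temporal",
    "pattern_category": "activity_time",
    "pattern_name": "Peak activity 09:00-11:00 on weekdays",
    "confidence_score": 0.72,
    "evidence_count": 34,
    "status": "active",
}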
Knowledge Entities
Entity graph for semantic understanding and disambiguation.
Table: knowledge_entities
| Column | Type | Purpose |
|---|---|---|
| entity_id | UUID | Primary key |
| entity_type | varchar | person, place, project, concept |
| entity_name | varchar | Primary name |
| entity_aliases | varchar[] | Alternative names |
| entity_embedding | vector[768] | For disambiguation |
| properties | jsonb | Entity-specific data |
| confidence | float | 0.0-1.0 |
Admin UI: norns.ravenhelm.dev/knowledge
- Create/edit entities with type, description, aliases
- Set confidence scores
- Add custom properties (JSON)
Document RAG
Upload and retrieve document chunks for grounded responses.
Table: document_chunks
| Column | Type | Purpose |
|---|---|---|
| chunk_id | UUID | Primary key |
| document_id | UUID | Parent document |
| user_id | UUID | Document owner |
| document_name | varchar | Original filename |
| document_type | varchar | general, technical, personal, reference |
| chunk_index | int | 0-based position in document |
| chunk_content | text | Raw text of chunk |
| chunk_embedding | vector[768] | Semantic search vector |
| metadata | jsonb | {original_size, total_chunks} |
| created_at | timestamptz | Upload timestamp |
Admin UI: norns.ravenhelm.dev/documents
- Drag-and-drop upload (.txt, .md, .pdf)
- Select document type
- Configure chunk size (100-5000 chars, default 1000)
- View documents grouped by type
- Delete documents with confirmation
RAG Pipeline
Document Ingestion Flow
┌─────────────────────────────────────────────────────────────────┐
│ Documents Admin UI (norns.ravenhelm.dev/documents) │
│ - Drag-and-drop or file browser │
│ - Document type: general, technical, personal, reference │
│ - Chunk size: 100-5000 chars (default 1000) │
└─────────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────────┐
│ Upload Endpoint: POST /api/documents/upload │
│ - Validate UTF-8 encoding │
│ - Generate document_id (UUID) │
└─────────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────────┐
│ Chunking: Word-based, character-bounded │
│ - Split text by whitespace │
│ - Accumulate words until chunk_size threshold │
│ - Respects word boundaries (never splits mid-word) │
│ - Each chunk gets sequential chunk_index │
└─────────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────────┐
│ Embedding: Ollama (nomic-embed-text, 768 dims) │
│ - Each chunk embedded individually │
│ - Fallback: OpenAI ada-002 (1536 dims) │
└─────────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────────┐
│ Storage: PostgreSQL + pgvector │
│ - INSERT into document_chunks │
│ - chunk_embedding as vector(768) │
│ - metadata: {original_size, total_chunks} │
└─────────────────────────────────────────────────────────────────┘
Document Retrieval at Runtime
User Question → Agent
↓
┌─────────────────────────────────────────────────────────────────┐
│ Embed Query │
│ - Same embedding model as ingestion (Ollama/OpenAI) │
└─────────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────────┐
│ Vector Similarity Search │
│ SELECT chunk_id, document_name, chunk_content, │
│ 1 - (chunk_embedding <=> $query_vector) as similarity │
│ FROM document_chunks │
│ WHERE user_id = $user_id │
│ AND chunk_embedding IS NOT NULL │
│ ORDER BY similarity DESC │
│ LIMIT $limit │
└─────────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────────┐
│ Combine with Other Memory Types │
│ - Episodic memories (past interactions) │
│ - Semantic patterns (learned behaviors) │
│ - Knowledge entities (named entities) │
│ - Document chunks (RAG) │
└─────────────────────────────────────────────────────────────────┘
↓
┌─────────────────────────────────────────────────────────────────┐
│ Agent Context Assembly │
│ - MemoryWorkers integrate all memory types │
│ - Documents included as "RELEVANT DOCUMENTS" in prompt │
│ - Ranked by similarity score (0-1) │
└─────────────────────────────────────────────────────────────────┘
↓
Agent Response (grounded in retrieved documents)
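The similarity-search step can be sketched with asyncpg against the schema above; the pool wiring and helper name are assumptions, but the SQL mirrors the query in the diagram:
# Sketch of the document retrieval step using asyncpg + pgvector.
# The search_document_chunks() helper name is hypothetical.
import asyncpg

async def search_document_chunks(pool: asyncpg.Pool, user_id: str,
                                 query_vector: list[float], limit: int = 5):
    # pgvector accepts a text literal like '[0.1,0.2,...]' cast to vector
    vec = "[" + ",".join(f"{x:.6f}" for x in query_vector) + "]"
    rows = await pool.fetch(
        """
        SELECT chunk_id, document_name, chunk_content,
               1 - (chunk_embedding <=> $1::vector) AS similarity
        FROM document_chunks
        WHERE user_id = $2::uuid
          AND chunk_embedding IS NOT NULL
        ORDER BY similarity DESC
        LIMIT $3
        """,
        vec, user_id, limit,
    )
    return [dict(r) for r in rows]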
Chunking Algorithm
def chunk_document(text: str, chunk_size: int = 1000) -> list[str]:
    chunks = []
    words = text.split()
    current_chunk = []
    current_size = 0
    for word in words:
        current_chunk.append(word)
        current_size += len(word) + 1  # +1 for space
        if current_size >= chunk_size:
            chunks.append(" ".join(current_chunk))
            current_chunk = []
            current_size = 0
    if current_chunk:
        chunks.append(" ".join(current_chunk))
    return chunks
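A quick check of the algorithm's behavior; note that the size test runs after appending a word, so chunks can overshoot chunk_size by up to one word:
# 600 five-char words at the default 1000-char target -> four chunks,
# the first three slightly over target (the check runs post-append).
parts = chunk_document(" ".join(["lorem"] * 600), chunk_size=1000)
print([len(p) for p in parts])  # e.g. [1001, 1001, 1001, 593]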
Muninn Context Service
The main memory interface that assembles context for agent processing.
ContextBundle Output:
@dataclass
class ContextBundle:
    episodes: list[dict]     # Recent relevant memories
    patterns: list[dict]     # Applicable semantic patterns
    entities: list[dict]     # Known entities
    documents: list[dict]    # Relevant document chunks
    suggestions: list[str]   # Memory-based recommendations
    built_at: datetime
    query_used: Optional[str]
Key Methods:
- build_context() - Assemble full context bundle (all 4 memory types)
- record_interaction() - Store interaction and trigger memory recording
- recall_similar() - Explicit memory recall across all types
- get_entity() / record_entity() - Entity management
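A hedged usage sketch, assuming an initialized service instance named muninn and the async method signatures implied by the list above:
# Sketch: assembling and consuming a ContextBundle. The muninn instance
# and exact signatures are assumptions based on the method list above.
async def show_context(muninn, user_uuid: str) -> None:
    bundle = await muninn.build_context(user_uuid, query="quarterly planning")
    print(f"{len(bundle.episodes)} episodes, {len(bundle.documents)} doc chunks")
    for doc in bundle.documents:
        print(doc.get("document_name"), doc.get("similarity"))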
Bifrost MCP Integration
Bifrost exposes Norns capabilities as MCP (Model Context Protocol) tools.
Claude Code / External Service
↓
POST /mcp/tools/call (X-API-Key auth)
↓
Bifrost API (tool registry, auth, audit)
↓
Norns Agent execution
↓
Response + tool_executions audit log
Key Tables:
- tool_definitions - MCP tool registry
- tool_executions - Audit log of tool calls
- api_keys - Scoped authentication
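A hypothetical invocation sketch; the Bifrost URL and request payload shape are assumptions (consult tool_definitions for the actual tool names and input schemas):
# Hypothetical MCP tool call via Bifrost. URL, tool name, and payload
# shape are placeholders, not the confirmed API contract.
import requests

resp = requests.post(
    "https://bifrost.example/mcp/tools/call",   # placeholder URL
    headers={"X-API-Key": "<key>"},
    json={
        "tool": "memories_search",              # hypothetical tool name
        "arguments": {"query": "spanish", "user_id": "<uuid>"},
    },
    timeout=15,
)
print(resp.status_code, resp.json())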
Dataflows
Chat Interaction (with RAG)
USER MESSAGE (Slack)
↓
POST /slack/events
↓
┌─ CONTEXT ASSEMBLY (Huginn) ─────────────────────────────────┐
│ • Resolve user identity │
│ • Load domain permissions │
│ • Get active projects, calendar, work hours │
│ • Cache in Redis (5 min TTL) │
└─────────────────────────────────────────────────────────────┘
↓
┌─ MEMORY CONTEXT (Muninn) ───────────────────────────────────┐
│ • Generate embedding: ollama.embed(message) │
│ • Search ALL memory types: │
│ - episodic_memories (past interactions) │
│ - semantic_patterns (learned behaviors) │
│ - knowledge_entities (named entities) │
│ - document_chunks (RAG documents) │
│ • Build unified ContextBundle │
└─────────────────────────────────────────────────────────────┘
↓
┌─ LANGGRAPH EXECUTION ───────────────────────────────────────┐
│ • Initialize AgentState with messages + context │
│ • Include relevant documents in system prompt │
│ • Call Claude with grounded context │
│ • Execute tools if needed │
│ • Generate response citing document sources │
└─────────────────────────────────────────────────────────────┘
↓
┌─ MEMORY RECORDING ──────────────────────────────────────────┐
│ • Generate embedding for (message + response) │
│ • Calculate importance_score │
│ • INSERT into episodic_memories │
│ • Add to episode_sequence (conversation grouping) │
│ • Record observation for pattern detection │
└─────────────────────────────────────────────────────────────┘
↓
SLACK RESPONSE
Memory Recall
User: "Remember when I said I wanted to learn Spanish?"
recall_memories("spanish", user_id)
↓
muninn.recall_similar(user_uuid, "spanish")
↓
Parallel search across all memory types:
• episodic_memories → past conversations
• semantic_patterns → learning preferences
• knowledge_entities → "Spanish" as concept
• document_chunks → Spanish learning materials
↓
Combine and rank by similarity
↓
Return: [{type, id, content, similarity}, ...]
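The combine-and-rank step can be sketched as a plain merge across the per-type result lists; tie-breaking and any per-type weighting are assumptions:
# Sketch of combine-and-rank: flatten per-type hits, tag each with its
# memory type, and sort by similarity descending.
def combine_results(results_by_type: dict[str, list[dict]], limit: int = 10):
    merged = [
        {"type": mtype, **hit}
        for mtype, hits in results_by_type.items()
        for hit in hits
    ]
    merged.sort(key=lambda h: h["similarity"], reverse=True)
    return merged[:limit]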
Pattern Consolidation (Background)
Runs periodically to extract patterns from aged memories.
semantic.consolidate_episodes(user_id, min_age_days=7)
↓
SELECT unconsolidated episodic_memories from 7+ days ago
↓
extract_patterns():
• _analyze_temporal_patterns() → Peak hours, weekday patterns
• _analyze_intent_patterns() → Frequent actions
• _analyze_domain_patterns() → Domain focus areas
↓
For each pattern:
• If similar exists → update confidence (max 0.95)
• If new → generate embedding, INSERT
↓
detect_correlations():
• Find co-occurring concepts (domain+time)
• INSERT into semantic_correlations
↓
Mark episodes as consolidated
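A minimal sketch for triggering one consolidation pass manually; the method name comes from the flow above, while the return shape is an assumption:
# Sketch: run one consolidation pass for a user. consolidate_episodes()
# is named in the flow above; the returned value's shape is assumed.
async def run_consolidation(semantic, user_id: str) -> None:
    patterns = await semantic.consolidate_episodes(user_id, min_age_days=7)
    print(f"consolidated into {len(patterns)} patterns")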
Bifrost Tool Invocation
External Service → POST /mcp/tools/call
↓
Authenticate with X-API-Key
↓
Look up tool_definitions row
├── Get http_config (endpoint, method, headers)
├── Get auth_config_ref
└── Validate input against schema
↓
Transform MCP request → HTTP request
↓
Call Norns endpoint
↓
INSERT into tool_executions (audit log)
↓
Return response to caller
Domain Model
Users interact with 8 life domains:
| Domain | Icon | Description |
|---|---|---|
| hrafnhoard | 💰 | Personal Finance |
| ravenhelm | ⚔️ | Work & Business |
| idunns_garden | 🌸 | Family & Relationships |
| eirs_vitality | ❤️ | Health & Fitness |
| bragis_quill | 🖋️ | Writing & Creative |
| midgard | 🏠 | Home & Property |
| friggs_hearth | 🔥 | Household Operations |
| mimirs_legacy | 📚 | Digital Legacy |
Memory is scoped to domains where applicable, allowing domain-specific pattern learning.
Key Files
| File | Purpose |
|---|---|
| agent/main.py | FastAPI endpoints (upload, list, delete, search) |
| agent/memory/episodic.py | Episodic memory CRUD |
| agent/memory/semantic.py | Pattern extraction & learning |
| agent/memory/muninn_context.py | Memory assembly interface |
| agent/memory/embeddings.py | Embedding provider abstraction |
| agent/agents/workers/memory_workers.py | RAG context integration |
| agent/graph.py | LangGraph state machine |
| agent/context_service.py | Context loading & caching |
| admin/app/(authenticated)/documents/page.tsx | Document admin UI |
| admin/app/(authenticated)/knowledge/page.tsx | Knowledge entity admin UI |
| bifrost/api/main.py | MCP gateway server |
Configuration
Embedding Settings
environment:
OLLAMA_URL: http://ollama:11434
OLLAMA_EMBED_MODEL: nomic-embed-text
EMBEDDING_PROVIDER: auto # auto|ollama|openai|mock
Supported Embedding Models
| Provider | Model | Dimensions |
|---|---|---|
| Ollama | nomic-embed-text | 768 (default) |
| Ollama | mxbai-embed-large | 1024 |
| Ollama | all-minilm | 384 |
| OpenAI | text-embedding-ada-002 | 1536 |
Memory Thresholds
| Setting | Value | Purpose |
|---|---|---|
| Similarity threshold | 0.5 | Minimum for semantic match |
| Default chunk size | 1000 | Characters per document chunk |
| Consolidation age | 7 days | Min age before pattern extraction |
| Pattern deprecation | 90 days | Archive unreinforced patterns |
| Redis context TTL | 5 min | Context cache expiration |
| Redis identity TTL | 1 hour | Identity cache expiration |
Quick Commands
# View memory-related logs
docker logs norns-agent 2>&1 | grep -i "memory\|muninn\|episodic\|document"
# Check embedding service
docker exec ollama ollama list
# Query all memory types
docker exec -i postgres psql -U ravenhelm -d ravenmaskos -c "
SELECT 'episodic' as type, COUNT(*) FROM episodic_memories
UNION ALL
SELECT 'semantic', COUNT(*) FROM semantic_patterns
UNION ALL
SELECT 'knowledge', COUNT(*) FROM knowledge_entities
UNION ALL
SELECT 'documents', COUNT(*) FROM document_chunks;
"
# Query document stats
docker exec -i postgres psql -U ravenhelm -d ravenmaskos -c "
SELECT document_type, COUNT(DISTINCT document_id) as docs, COUNT(*) as chunks
FROM document_chunks
GROUP BY document_type;
"
# Check consolidation status
docker exec -i postgres psql -U ravenhelm -d ravenmaskos -c \
"SELECT consolidated_to_semantic, COUNT(*) FROM episodic_memories GROUP BY 1;"
Troubleshooting
Memory Search Returns No Results
Symptoms: Agent doesn't recall relevant past interactions or documents
Diagnosis:
# Check embedding service
curl -s http://ollama:11434/api/tags | jq .
# Verify memories exist with embeddings
docker exec -i postgres psql -U ravenhelm -d ravenmaskos -c "
SELECT 'episodic' as type,
COUNT(*) as total,
COUNT(*) FILTER (WHERE content_embedding IS NOT NULL) as with_embedding
FROM episodic_memories
UNION ALL
SELECT 'documents',
COUNT(*),
COUNT(*) FILTER (WHERE chunk_embedding IS NOT NULL)
FROM document_chunks;
"
Solutions:
- Verify Ollama is running with nomic-embed-text model
- Check pgvector extension is installed
- Confirm memories/documents have embeddings (not NULL)
- Lower similarity threshold if too restrictive
Document Upload Fails
Symptoms: Error when uploading documents
Diagnosis:
docker logs norns-agent 2>&1 | grep -i "upload\|document\|error"
Solutions:
- Verify file is UTF-8 encoded text
- Check file size limits
- Ensure Ollama is responding for embeddings
- Check database connectivity
Pattern Consolidation Not Running
Symptoms: No semantic patterns being created
Diagnosis:
# Check for unconsolidated old memories
docker exec -i postgres psql -U ravenhelm -d ravenmaskos -c \
"SELECT COUNT(*) FROM episodic_memories
WHERE consolidated_to_semantic = FALSE
AND occurred_at < NOW() - INTERVAL '7 days';"
Solutions:
- Verify consolidation background job is scheduled
- Check for errors in agent logs
- Manually trigger consolidation via API
High Memory Latency
Symptoms: Slow responses when context building
Diagnosis:
# Check Redis connectivity
docker exec redis redis-cli PING
# Check vector indexes exist
docker exec -i postgres psql -U ravenhelm -d ravenmaskos -c "
SELECT tablename, indexname FROM pg_indexes
WHERE tablename IN ('episodic_memories', 'document_chunks', 'semantic_patterns');
"
Solutions:
- Verify Redis is responding quickly
- Add IVFFlat or HNSW index to embedding columns:
CREATE INDEX idx_doc_chunks_embedding ON document_chunks
USING ivfflat (chunk_embedding vector_cosine_ops) WITH (lists = 100);
- Reduce similarity search limit
PDF Support & Adaptive Chunking
Added: 2026-01-03
Enhanced Document Upload
The document upload endpoint now supports:
- PDF files via pdfplumber extraction
- Adaptive chunking based on model context window
- Page-aware chunking for PDFs with page number tracking
- Document metadata extraction (title, author, subject)
Upload Endpoint Parameters
Endpoint: POST /api/documents/upload
| Parameter | Type | Default | Description |
|---|---|---|---|
| file | File | required | PDF or text file |
| user_id | UUID | required | Document owner |
| document_type | string | "general" | Category: general, technical, personal, reference |
| chunk_strategy | string | "adaptive" | See chunking strategies below |
| chunk_size | int | auto | Override automatic chunk sizing (tokens) |
| model_context | string | "default" | Model name for adaptive sizing |
Chunking Strategies
| Strategy | Description | Best For |
|---|---|---|
| adaptive | 5% of model context window (256-4096 tokens) | General use, balances retrieval granularity |
| page_based | Respects PDF page boundaries, tracks page numbers | PDFs where page context matters |
| semantic | Paragraph-based, 512 tokens | Narrative documents |
| fixed | Fixed 1000 tokens | Consistent chunk sizes |
Context Window Sizing
When using adaptive strategy, chunk size is calculated based on the model:
| Model | Context Window | Chunk Size (5%) |
|---|---|---|
| claude-3-5-sonnet | 200K | 4096 (capped) |
| claude-3-opus | 200K | 4096 (capped) |
| gpt-4-turbo | 128K | 4096 (capped) |
| llama-3.1-70b | 128K | 4096 (capped) |
| gpt-4 | 8K | 409 |
| default | 8K | 409 |
Min: 256 tokens, Max: 4096 tokens, with 10% overlap between chunks.
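A sketch of the sizing rule, assuming a lookup table of context windows (illustrative subset only):
# Adaptive chunk sizing per the rules above: 5% of the model's context
# window, clamped to [256, 4096] tokens, with 10% overlap between chunks.
CONTEXT_WINDOWS = {              # illustrative subset
    "claude-3-5-sonnet": 200_000,
    "gpt-4-turbo": 128_000,
    "gpt-4": 8_192,
}

def adaptive_chunk_size(model: str) -> tuple[int, int]:
    window = CONTEXT_WINDOWS.get(model, 8_192)        # "default" -> 8K
    size = max(256, min(4096, int(window * 0.05)))    # e.g. gpt-4 -> 409
    overlap = int(size * 0.10)
    return size, overlap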
Document Metadata Table
Table: documents
| Column | Type | Purpose |
|---|---|---|
| document_id | UUID | Primary key |
| user_id | UUID | Document owner |
| filename | varchar | Original filename |
| content_type | varchar | MIME type (application/pdf, text/plain) |
| file_size_bytes | int | File size |
| total_pages | int | PDF page count |
| title | text | PDF metadata: title |
| author | text | PDF metadata: author |
| subject | text | PDF metadata: subject |
| chunk_strategy | varchar | Strategy used |
| chunk_size_tokens | int | Actual chunk size |
| total_chunks | int | Number of chunks created |
| embedding_model | varchar | Model used for embeddings |
| uploaded_at | timestamptz | Upload timestamp |
| processed_at | timestamptz | Processing completion |
| metadata | jsonb | Additional metadata |
Page Number Tracking
Enhanced document_chunks columns:
| Column | Type | Purpose |
|---|---|---|
| page_number | int | Primary page for this chunk (PDF only) |
| total_pages | int | Total document pages |
| content_type | varchar | MIME type of source |
When using page_based strategy, chunks maintain page boundaries. For other strategies, page numbers are approximated based on content position.
Example Usage
# Upload PDF with adaptive chunking for Claude
curl -X POST 'https://norns-pm.ravenhelm.dev/api/documents/upload' \
-H 'X-API-Key: <key>' \
-F 'file=@technical-spec.pdf' \
-F 'user_id=<uuid>' \
-F 'document_type=technical' \
-F 'chunk_strategy=adaptive' \
-F 'model_context=claude-3-5-sonnet'
# Upload PDF preserving page boundaries
curl -X POST 'https://norns-pm.ravenhelm.dev/api/documents/upload' \
-H 'X-API-Key: <key>' \
-F 'file=@manual.pdf' \
-F 'user_id=<uuid>' \
-F 'document_type=reference' \
-F 'chunk_strategy=page_based'
# Upload with custom chunk size
curl -X POST 'https://norns-pm.ravenhelm.dev/api/documents/upload' \
-H 'X-API-Key: <key>' \
-F 'file=@notes.txt' \
-F 'user_id=<uuid>' \
-F 'chunk_size=500'
Response Format
{
"document_id": "fc505fe0-c9c2-423c-a087-26e6fabacfdb",
"filename": "technical-spec.pdf",
"content_type": "application/pdf",
"total_pages": 42,
"chunks_created": 156,
"chunk_strategy": "adaptive",
"chunk_size_tokens": 4096,
"document_type": "technical",
"metadata": {
"title": "System Architecture Specification",
"author": "Engineering Team"
}
}
Key Files (PDF Support)
| File | Purpose |
|---|---|
| agent/memory/document_processor.py | PDF extraction, adaptive chunking, file detection |
| agent/requirements.txt | Added: pdfplumber, python-magic, tiktoken |
| migrations/006_rag_pdf_support.sql | Schema updates for PDF metadata |
Troubleshooting PDF Upload
PDF extraction fails:
# Check pdfplumber is installed
docker exec norns-agent pip list | grep pdfplumber
# Test PDF directly
docker exec norns-agent python3 -c "import pdfplumber; print('OK')"
Content type detection fails:
# python-magic requires libmagic
docker exec norns-agent python3 -c "import magic; print('OK')"
If libmagic is missing, content-type detection falls back to the file extension.
Vector dimension mismatch:
-- Verify 768-dimensional vectors
SELECT pg_typeof(chunk_embedding),
vector_dims(chunk_embedding)
FROM document_chunks LIMIT 1;
Should show vector type with 768 dimensions for Ollama.
Debugging RAG Issues
If documents are being found but not reflected in responses:
- Check the similarity threshold: the default is 0.3 and may need adjustment
- Check content preview length: Currently 800 chars per document chunk in prompt
- Verify documents in prompt: Check logs for "Memory prompt generated with X episodes, Y documents"
Spouse/family info needs the full 800-char preview because those personal details appear later in the Identity chunk (around character position 446+).