LLM Observability
Langfuse-based tracing and metrics for LLM workflows.
Overview
RavenmaskOS uses Langfuse to capture LLM traces, latency, token usage, and cost across agents and tools.
- Langfuse UI: https://langfuse.ravenhelm.dev
- Default project: LangGraph Agents
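For reference, traces like these are produced by instrumenting agent code with the Langfuse SDK. The sketch below is illustrative only, assuming the Langfuse Python SDK's v2-style `@observe` decorator; `summarize` is a hypothetical function, not actual Norns code.

```python
# Illustrative only: not Norns code. Assumes the Langfuse Python SDK
# (v2-style decorator import) and the env vars listed under Data Sources.
from langfuse.decorators import observe

@observe()  # records this call as a trace in Langfuse
def summarize(text: str) -> str:
    # A real agent would call a model here; instrumented model clients
    # show up as nested generations/spans under this trace.
    return text[:100]

summarize("hello from RavenmaskOS")
```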
Data Sources
LLM telemetry is sent by the Norns agent using the following environment variables:
- LANGFUSE_PUBLIC_KEY - project public API key
- LANGFUSE_SECRET_KEY - project secret API key
- LANGFUSE_HOST=https://langfuse.ravenhelm.dev - external host used by the SDK
- LANGFUSE_URL=http://langfuse-web:3000 - internal Docker-network address of the Langfuse web container
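With these variables set, the Langfuse client needs no explicit configuration. A minimal sketch (assuming the Langfuse Python SDK) to verify the key pair against the host:

```python
# Minimal sketch, assuming the Langfuse Python SDK. The client reads
# LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, and LANGFUSE_HOST from the
# environment when no arguments are passed.
from langfuse import Langfuse

langfuse = Langfuse()
print(langfuse.auth_check())  # True if the key pair is valid for the host
```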
Key Views
- Traces: End-to-end request paths
- Spans: Tool calls and model invocations
- Usage: Token counts and cost per model
- Errors: Failed calls and retries
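How each view is populated can be sketched with the v2-style low-level client; the names, model, and token counts below are illustrative, not taken from the Norns agent.

```python
# Illustrative mapping from SDK objects to Langfuse views
# (v2-style low-level client; all names and numbers are made up).
from langfuse import Langfuse

langfuse = Langfuse()
trace = langfuse.trace(name="agent-request")      # -> Traces view
span = trace.span(name="tool-call")               # -> Spans view
span.end()
trace.generation(                                 # -> Usage view
    name="model-call",
    model="gpt-4o",
    usage={"input": 512, "output": 128},          # token counts
)
langfuse.flush()                                  # send buffered events
```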
Common Checks
- Confirm Langfuse containers are healthy:
  docker ps | grep langfuse
- Check agent logs for telemetry errors:
  docker logs norns-agent --tail 200
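Langfuse also exposes a public health endpoint. A small probe like the following, using the internal LANGFUSE_URL from Data Sources, can complement the container checks:

```python
# Minimal health probe against the Langfuse web container. Uses the
# internal Docker-network URL from Data Sources; run it from a container
# on the same network (or swap in the external host).
import requests

resp = requests.get("http://langfuse-web:3000/api/public/health", timeout=5)
resp.raise_for_status()
print(resp.json())  # expect a status of "OK"
```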
Related
- AI/ML Stack - Langfuse deployment
- Observability/Langfuse