Overview
Dakora provides first-class observability for AI agents and LLM calls. You can:

- Ingest OpenTelemetry spans (OTLP/HTTP)
- Log executions directly via the Dakora API
- List and filter executions by provider/model/agent/time/cost
- Inspect execution detail, hierarchy, and a normalized chat/tools timeline
- Analyze per‑template cost and usage
`project_id` resolves automatically from your API key; when using the raw REST API, call `/api/me/context` to look it up.
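For example, a minimal sketch of resolving the project context over raw REST. The base URL, the bearer-token auth header, and the `project_id` response field name are assumptions; adapt them to your deployment.

```python
import requests

BASE_URL = "https://your-dakora-host"  # placeholder: your Dakora deployment
API_KEY = "dk_..."                     # placeholder: your Dakora API key

# Assumption: the API accepts a bearer token; adjust if your setup differs.
headers = {"Authorization": f"Bearer {API_KEY}"}

resp = requests.get(f"{BASE_URL}/api/me/context", headers=headers)
resp.raise_for_status()
context = resp.json()
project_id = context["project_id"]  # assumed field name; inspect the response to confirm
print(project_id)
```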
Ingest OpenTelemetry Spans (OTLP/HTTP)
Endpoint: `POST /api/v1/traces`
Content types:

- `application/x-protobuf` (OTLP protobuf)
- `application/json` (Dakora OTLP‑compatible JSON)
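A sketch of pushing one span as JSON to the ingest endpoint. The payload follows the standard OTLP/JSON layout, which Dakora states it is compatible with; the auth header, base URL, and attribute keys are assumptions.

```python
import time
import uuid
import requests

BASE_URL = "https://your-dakora-host"          # placeholder
headers = {"Authorization": "Bearer dk_..."}   # assumption: bearer-token auth

now_ns = time.time_ns()
payload = {
    # Standard OTLP/JSON layout: resourceSpans -> scopeSpans -> spans
    "resourceSpans": [{
        "resource": {"attributes": [
            {"key": "service.name", "value": {"stringValue": "my-agent"}},
        ]},
        "scopeSpans": [{
            "spans": [{
                "traceId": uuid.uuid4().hex,      # 32 hex chars
                "spanId": uuid.uuid4().hex[:16],  # 16 hex chars
                "name": "llm.chat",
                "startTimeUnixNano": str(now_ns - 1_200_000_000),
                "endTimeUnixNano": str(now_ns),
                "attributes": [
                    {"key": "gen_ai.system", "value": {"stringValue": "openai"}},
                    {"key": "gen_ai.request.model", "value": {"stringValue": "gpt-4o-mini"}},
                ],
            }],
        }],
    }],
}

resp = requests.post(f"{BASE_URL}/api/v1/traces", json=payload, headers=headers)
resp.raise_for_status()
```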
Create Executions via API
Endpoint: `POST /api/projects/{project_id}/executions`
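A hedged sketch of logging an execution directly. The field names are taken from attributes mentioned elsewhere on this page (`provider`, `model`, `tokens_in`/`tokens_out`, `latency_ms`, `template_usages`); treat them as illustrative rather than the exact request schema.

```python
import requests

BASE_URL = "https://your-dakora-host"          # placeholder
PROJECT_ID = "prj_..."                         # placeholder: from /api/me/context
headers = {"Authorization": "Bearer dk_..."}   # assumption: bearer-token auth

execution = {
    "trace_id": "8f5b...",           # idempotent id you generate (see Best Practices)
    "provider": "openai",
    "model": "gpt-4o-mini",
    "agent_id": "support-agent",
    "tokens_in": 512,
    "tokens_out": 128,
    "latency_ms": 950,
    "template_usages": [{"prompt_id": "welcome-email", "version": "1.2.0"}],
}

resp = requests.post(
    f"{BASE_URL}/api/projects/{PROJECT_ID}/executions",
    json=execution,
    headers=headers,
)
resp.raise_for_status()
```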
List Executions
Endpoint: `GET /api/projects/{project_id}/executions`

Query params: `provider`, `model`, `agent_id`, `has_templates`, `min_cost`, `start`, `end`, `limit`, `offset`
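For example, filtering by provider, model, minimum cost, and a time window with pagination. The parameter names come from the list above; the ISO 8601 timestamp format and the response envelope are assumptions.

```python
import requests

BASE_URL = "https://your-dakora-host"          # placeholder
PROJECT_ID = "prj_..."                         # placeholder
headers = {"Authorization": "Bearer dk_..."}   # assumption: bearer-token auth

params = {
    "provider": "openai",
    "model": "gpt-4o-mini",
    "min_cost": 0.01,
    "start": "2024-06-01T00:00:00Z",  # assumption: ISO 8601 timestamps
    "end": "2024-06-30T23:59:59Z",
    "limit": 50,
    "offset": 0,
}

resp = requests.get(
    f"{BASE_URL}/api/projects/{PROJECT_ID}/executions",
    params=params,
    headers=headers,
)
resp.raise_for_status()
executions = resp.json()  # response shape not shown here; inspect and adapt
print(executions)
```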
Execution Detail & Hierarchy
Endpoints:

- `GET /api/projects/{project_id}/executions/{trace_id}`
- `GET /api/projects/{project_id}/executions/{trace_id}/hierarchy`
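A small sketch fetching both views for one trace, with the same placeholder base URL and auth as above.

```python
import requests

BASE_URL = "https://your-dakora-host"          # placeholder
PROJECT_ID = "prj_..."                         # placeholder
TRACE_ID = "8f5b..."                           # placeholder: trace id from a listing call
headers = {"Authorization": "Bearer dk_..."}   # assumption: bearer-token auth

base = f"{BASE_URL}/api/projects/{PROJECT_ID}/executions/{TRACE_ID}"

detail = requests.get(base, headers=headers)
detail.raise_for_status()
print(detail.json())       # the execution record itself

hierarchy = requests.get(f"{base}/hierarchy", headers=headers)
hierarchy.raise_for_status()
print(hierarchy.json())    # parent/child span structure for the trace
```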
Normalized Timeline (Chat + Tools)
Endpoint: `GET /api/projects/{project_id}/executions/{trace_id}/timeline`

Pass `?compact_tools=true` to collapse tool call/result pairs into a single event.
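For example, requesting the compacted timeline (tool call/result pairs merged). The assumption that the response is an ordered list of events is illustrative.

```python
import requests

BASE_URL = "https://your-dakora-host"          # placeholder
PROJECT_ID = "prj_..."                         # placeholder
TRACE_ID = "8f5b..."                           # placeholder
headers = {"Authorization": "Bearer dk_..."}   # assumption: bearer-token auth

resp = requests.get(
    f"{BASE_URL}/api/projects/{PROJECT_ID}/executions/{TRACE_ID}/timeline",
    params={"compact_tools": "true"},  # collapse tool call/result pairs
    headers=headers,
)
resp.raise_for_status()
for event in resp.json():  # assumed: chronologically ordered chat/tool events
    print(event)
```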
Template Analytics
Endpoint: `GET /api/projects/{project_id}/prompts/{prompt_id}/analytics`
Returns total executions, cost, latency, and tokens aggregated for the template.
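A sketch of pulling the aggregates for one template; the `prompt_id` value is a placeholder.

```python
import requests

BASE_URL = "https://your-dakora-host"          # placeholder
PROJECT_ID = "prj_..."                         # placeholder
PROMPT_ID = "welcome-email"                    # placeholder: your template id
headers = {"Authorization": "Bearer dk_..."}   # assumption: bearer-token auth

resp = requests.get(
    f"{BASE_URL}/api/projects/{PROJECT_ID}/prompts/{PROMPT_ID}/analytics",
    headers=headers,
)
resp.raise_for_status()
stats = resp.json()  # totals for executions, cost, latency, and tokens
print(stats)
```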
Best Practices
- Use idempotent `trace_id` generation to avoid duplicates on retries (see the sketch after this list).
- Include `provider`, `model`, `tokens_in`/`tokens_out`, and `latency_ms` to enable accurate cost and usage reporting.
- Link templates by embedding metadata or passing `template_usages` for precise attribution.
- Use pagination (`limit`, `offset`) and time windows for efficient listing.
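One way to keep `trace_id` idempotent across retries (a general sketch, not a Dakora-specific requirement): derive it deterministically from attributes that stay stable across retries, so a retried call reuses the same id and the backend can deduplicate.

```python
import hashlib

def idempotent_trace_id(request_id: str, agent_id: str) -> str:
    """Derive a stable 32-hex-char trace id from values that do not change
    across retries (here: a client-side request id plus the agent id)."""
    digest = hashlib.sha256(f"{request_id}:{agent_id}".encode()).hexdigest()
    return digest[:32]

# A retried request produces the identical trace_id, so a duplicate submission
# maps onto the same execution instead of creating a second record.
assert idempotent_trace_id("req-123", "support-agent") == \
       idempotent_trace_id("req-123", "support-agent")
```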