The Dakora Python SDK provides a simple, async-first interface to manage prompts and analyze executions.

Installation

pip install dakora

Quick Start

import asyncio
from dakora import Dakora

async def main():
    # Initialize client (reads DAKORA_API_KEY from environment)
    client = Dakora()

    # List templates
    templates = await client.prompts.list()

    # Render a template
    result = await client.prompts.render("greeting", {"name": "Alice"})
    print(result.text)

asyncio.run(main())

Configuration

The client reads configuration from environment variables:
export DAKORA_API_KEY="dkr_your_api_key"
export DAKORA_BASE_URL="https://api.dakora.io"  # optional
Or pass them directly:
client = Dakora(
    api_key="dkr_your_api_key",
    base_url="https://api.dakora.io",  # optional
)

Dakora Client

Dakora()

Initialize the Dakora client.

Parameters:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `api_key` | `str \| None` | `None` | API key. Falls back to the `DAKORA_API_KEY` env var |
| `base_url` | `str \| None` | `None` | API base URL. Falls back to `DAKORA_BASE_URL` or `https://api.dakora.io` |
| `project_id` | `str \| None` | `None` | Project ID. Auto-resolved from the API key if not provided |

Example:
# Using environment variables (recommended)
client = Dakora()

# Explicit configuration
client = Dakora(api_key="dkr_xxx", base_url="https://api.dakora.io")

client.close()

Close the HTTP client connection. Optional; usually not needed for long-lived clients.
await client.close()

Prompts API

Access via client.prompts.

prompts.list()

List all prompt template IDs in your project.

Returns: list[str] — List of template IDs

Example:
templates = await client.prompts.list()
# ["greeting", "faq_responder", "email_composer"]

prompts.get(prompt_id)

Get a prompt template by ID.

Parameters:

| Parameter | Type | Description |
| --- | --- | --- |
| `prompt_id` | `str` | Template ID |

Returns: dict — Template data including id, template, version, inputs, metadata

Example:
prompt = await client.prompts.get("greeting")
print(prompt["template"])  # "Hello {{name}}!"
print(prompt["version"])   # "1.0.0"

prompts.create(...)

Create a new prompt template.

Parameters:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `prompt_id` | `str` | required | Unique template ID |
| `template` | `str` | required | Template text (Jinja2 syntax) |
| `version` | `str` | `"1.0.0"` | Semantic version |
| `description` | `str \| None` | `None` | Human-readable description |
| `inputs` | `dict` | `None` | Input schema definition |
| `metadata` | `dict` | `None` | Additional metadata |

Returns: dict — Created template data

Example:
await client.prompts.create(
    prompt_id="welcome_email",
    template="Dear {{name}},\n\nWelcome to {{company}}!",
    description="Welcome email template",
    inputs={
        "name": {"type": "string", "required": True},
        "company": {"type": "string", "required": True},
    },
)

prompts.render(...)

Render a template with input values.

Parameters:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `template_id` | `str` | required | Template ID to render |
| `inputs` | `dict` | required | Variables to substitute |
| `version` | `str \| None` | `None` | Specific version (defaults to latest) |
| `embed_metadata` | `bool` | `True` | Embed tracking metadata in output |

Returns: RenderResult — Rendered template with metadata

Example:
result = await client.prompts.render(
    "greeting",
    {"name": "Alice", "role": "Developer"},
)
print(result.text)      # "Hello Alice! Welcome, Developer."
print(result.version)   # "1.2.0"
print(result.prompt_id) # "greeting"

RenderResult

The result of rendering a template.

Attributes:

| Attribute | Type | Description |
| --- | --- | --- |
| `text` | `str` | Rendered prompt text |
| `prompt_id` | `str` | Template ID |
| `version` | `str` | Template version used |
| `version_number` | `int \| None` | Numeric version for ordering |
| `inputs` | `dict` | Input values used |
| `metadata` | `dict` | Additional metadata |

Example:
result = await client.prompts.render("greeting", {"name": "Alice"})

# Use the rendered text with your LLM
response = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": result.text}],
)

Executions API

Access via client.executions. Query execution history for analytics and debugging.

executions.list(...)

List executions with optional filters.

Parameters:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `project_id` | `str` | required | Project ID |
| `prompt_id` | `str \| None` | `None` | Filter by template ID |
| `agent_id` | `str \| None` | `None` | Filter by agent ID |
| `provider` | `str \| None` | `None` | Filter by provider (openai, anthropic) |
| `model` | `str \| None` | `None` | Filter by model |
| `has_templates` | `bool \| None` | `None` | Filter by template linkage |
| `min_cost` | `float \| None` | `None` | Filter by minimum cost (USD) |
| `start` | `str \| None` | `None` | Start date (ISO format) |
| `end` | `str \| None` | `None` | End date (ISO format) |
| `limit` | `int` | `100` | Max results |
| `offset` | `int` | `0` | Pagination offset |

Returns: list[dict] — List of executions

Example:
# Get recent executions
executions = await client.executions.list(
    project_id="proj-123",
    limit=25,
)

# Filter by cost
expensive = await client.executions.list(
    project_id="proj-123",
    min_cost=0.10,  # $0.10 minimum
)

# Filter by template
template_execs = await client.executions.list(
    project_id="proj-123",
    prompt_id="greeting",
)
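
The limit/offset parameters support straightforward pagination. Below is a minimal sketch of paging through every execution; the `fetch` callable and the fake backend stand in for `client.executions.list(project_id=..., limit=..., offset=...)`, which returns a plain list per the table above:

```python
import asyncio

async def list_all_executions(fetch, page_size=100):
    """Page through executions with limit/offset until a short page signals the end.

    `fetch` stands in for client.executions.list(project_id=..., limit=..., offset=...).
    """
    results = []
    offset = 0
    while True:
        page = await fetch(limit=page_size, offset=offset)
        results.extend(page)
        if len(page) < page_size:  # a short page means we reached the end
            break
        offset += page_size
    return results

# Demo with a fake backend holding 250 records
records = [{"trace_id": f"trace-{i}"} for i in range(250)]

async def fake_fetch(limit, offset):
    return records[offset:offset + limit]

all_execs = asyncio.run(list_all_executions(fake_fetch, page_size=100))
print(len(all_execs))  # 250
```

With the real client, replace `fake_fetch` with a small wrapper around `client.executions.list` that forwards `limit` and `offset`.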

executions.get(...)

Get detailed execution data.

Parameters:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `project_id` | `str` | required | Project ID |
| `trace_id` | `str` | required | Execution trace ID |
| `span_id` | `str \| None` | `None` | Specific span ID |
| `include_messages` | `bool` | `False` | Include full messages |

Returns: dict — Execution details

Example:
execution = await client.executions.get(
    project_id="proj-123",
    trace_id="trace-456",
    include_messages=True,
)
print(execution["tokens_in"], execution["tokens_out"])
print(execution["cost_usd"])
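
Because executions are plain dicts, summarizing a batch is a simple fold. A sketch that totals tokens and cost, assuming the `tokens_in` / `tokens_out` / `cost_usd` fields shown above (treating missing fields as zero):

```python
def summarize_executions(executions):
    """Aggregate token and cost totals across execution dicts.

    Field names (tokens_in, tokens_out, cost_usd) follow the example above;
    missing or null fields count as zero.
    """
    totals = {"count": 0, "tokens_in": 0, "tokens_out": 0, "cost_usd": 0.0}
    for ex in executions:
        totals["count"] += 1
        totals["tokens_in"] += ex.get("tokens_in") or 0
        totals["tokens_out"] += ex.get("tokens_out") or 0
        totals["cost_usd"] += ex.get("cost_usd") or 0.0
    return totals

sample = [
    {"tokens_in": 120, "tokens_out": 340, "cost_usd": 0.004},
    {"tokens_in": 80, "tokens_out": 150, "cost_usd": 0.002},
]
print(summarize_executions(sample))
```

Feed it the list returned by `client.executions.list(...)` to get per-project or per-template totals.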

executions.get_timeline(...)

Get a normalized timeline view of an execution.

Parameters:

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `project_id` | `str` | required | Project ID |
| `trace_id` | `str` | required | Execution trace ID |
| `compact_tools` | `bool` | `True` | Collapse tool call/result pairs |

Returns: dict — Timeline with events list

Example:
timeline = await client.executions.get_timeline(
    project_id="proj-123",
    trace_id="trace-456",
)

for event in timeline["events"]:
    print(f"{event['type']}: {event.get('content', '')[:50]}")

executions.get_hierarchy(...)

Get the span hierarchy tree for an execution.

Parameters:

| Parameter | Type | Description |
| --- | --- | --- |
| `project_id` | `str` | Project ID |
| `trace_id` | `str` | Execution trace ID |

Returns: dict — Hierarchical span tree

Related traces

Get related traces (same session or parent/child).

Parameters:

| Parameter | Type | Description |
| --- | --- | --- |
| `project_id` | `str` | Project ID |
| `trace_id` | `str` | Execution trace ID |

Returns: dict — Related traces information
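
A hierarchical span tree is naturally processed with a depth-first walk. The sketch below assumes each node carries a `span_id` and a `children` list; those key names are an illustration, not the documented response shape, so inspect the actual `get_hierarchy()` result before relying on them:

```python
def flatten_spans(node, depth=0):
    """Depth-first walk of a span tree, yielding (depth, span_id) rows.

    The node shape (span_id plus a children list) is an assumption made
    for this example; check the real get_hierarchy() response keys.
    """
    rows = [(depth, node.get("span_id"))]
    for child in node.get("children", []):
        rows.extend(flatten_spans(child, depth + 1))
    return rows

# Illustrative tree in place of client.executions.get_hierarchy(...)
tree = {
    "span_id": "root",
    "children": [
        {"span_id": "llm-call", "children": []},
        {"span_id": "tool-call", "children": [
            {"span_id": "tool-result", "children": []},
        ]},
    ],
}

for depth, span_id in flatten_spans(tree):
    print("  " * depth + span_id)  # indented tree view
```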

Error Handling

The SDK raises httpx.HTTPStatusError for API errors:
import httpx

try:
    result = await client.prompts.render("nonexistent", {})
except httpx.HTTPStatusError as e:
    print(f"Error {e.response.status_code}: {e.response.text}")
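
For transient failures (429 rate limits, 5xx server errors) a small retry wrapper can help. This is not part of the SDK; the sketch below uses a stand-in error class so it runs without httpx, but the same shape works with `httpx.HTTPStatusError` by reading `e.response.status_code`:

```python
import asyncio

class FakeHTTPStatusError(Exception):
    """Stand-in for httpx.HTTPStatusError so this sketch runs without httpx."""
    def __init__(self, status_code):
        super().__init__(f"HTTP {status_code}")
        self.status_code = status_code

async def call_with_retry(coro_factory, attempts=3, base_delay=0.01):
    """Retry 429 and 5xx responses with exponential backoff; re-raise everything else."""
    for attempt in range(attempts):
        try:
            return await coro_factory()
        except FakeHTTPStatusError as e:
            transient = e.status_code == 429 or e.status_code >= 500
            if not transient or attempt == attempts - 1:
                raise
            await asyncio.sleep(base_delay * 2 ** attempt)

# Demo: fail twice with 503, then succeed on the third attempt
calls = {"n": 0}

async def flaky_render():
    calls["n"] += 1
    if calls["n"] < 3:
        raise FakeHTTPStatusError(503)
    return "Hello Alice!"

text = asyncio.run(call_with_retry(flaky_render))
print(text)  # Hello Alice!
```

With the real client, pass `lambda: client.prompts.render("greeting", {"name": "Alice"})` as the factory and catch `httpx.HTTPStatusError` instead of the stand-in.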

Full Example

import asyncio
from dakora import Dakora

async def main():
    client = Dakora()

    # Create a template
    await client.prompts.create(
        prompt_id="code_review",
        template="""Review this code for:
- Security issues
- Performance problems  
- Best practices

Code:
```{{language}}
{{code}}
```""",
        inputs={
            "language": {"type": "string", "required": True},
            "code": {"type": "string", "required": True},
        },
    )

    # Render it
    result = await client.prompts.render(
        "code_review",
        {
            "language": "python",
            "code": "def add(a, b): return a + b",
        },
    )

    print(result.text)
    print(f"Using template v{result.version}")

asyncio.run(main())