CLI Observability with Langfuse

Track LLM operations during the pull command with Langfuse

The Inkeep CLI includes built-in observability for tracking LLM operations during the pull command. This allows you to monitor costs, latency, and quality of AI-generated code across different LLM providers.

Overview

When you run inkeep pull, the CLI uses LLMs to generate TypeScript files for your agents, tools, and components. With Langfuse integration enabled, you can:

  • Track token usage and costs across Anthropic, OpenAI, and Google models
  • Monitor generation latency to identify slow operations
  • View complete traces of multi-file code generation
  • Analyze the impact of placeholder optimization on token savings
  • Debug failed generations with full context

Setup

1. Create a Langfuse Account

Sign up for a free account at cloud.langfuse.com (EU region) or us.cloud.langfuse.com (US region).

2. Get API Keys

From your Langfuse dashboard:

  1. Navigate to Settings → API Keys
  2. Create a new API key pair
  3. Copy both the Secret Key (sk-lf-...) and Public Key (pk-lf-...)

3. Configure Environment Variables

Add these variables to your .env file in your project root:

# Enable Langfuse tracing
LANGFUSE_ENABLED=true

# Your Langfuse credentials
LANGFUSE_SECRET_KEY=sk-lf-your-secret-key-here
LANGFUSE_PUBLIC_KEY=pk-lf-your-public-key-here

# Langfuse API URL (defaults to EU cloud)
LANGFUSE_BASEURL=https://cloud.langfuse.com  # or https://us.cloud.langfuse.com for US

# Your LLM provider API keys (at least one required)
ANTHROPIC_API_KEY=your-anthropic-key
OPENAI_API_KEY=your-openai-key
GOOGLE_API_KEY=your-google-key
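
If you want to fail fast on missing configuration, a small pre-flight script can check these variables before running a pull. A minimal TypeScript sketch, assuming the dotenv package; this helper is illustrative and not part of the CLI:

import "dotenv/config";

// Hypothetical pre-flight check for the variables documented above
const required = ["LANGFUSE_SECRET_KEY", "LANGFUSE_PUBLIC_KEY"];
const missing = required.filter((key) => !process.env[key]);
if (missing.length > 0) {
  throw new Error(`Missing in .env: ${missing.join(", ")}`);
}
console.log(
  "Langfuse tracing:",
  process.env.LANGFUSE_ENABLED === "true" ? "enabled" : "disabled"
);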

4. Run Pull Command

Now when you run inkeep pull, all LLM operations will be traced to Langfuse:

inkeep pull --project my-agent-project

Viewing Traces

In Langfuse Dashboard

  1. Go to your Langfuse dashboard
  2. Navigate to Traces
  3. You'll see traces for each file generation operation

Trace Metadata

Each trace includes rich metadata:

Field             Description                    Example
fileType          Type of file being generated   agent, tool, data_component
placeholderCount  Number of placeholders used    5
promptSize        Size of prompt in characters   15234
model             LLM model used                 claude-sonnet-4-5

Example Trace Structure

Service: inkeep-agents-cli
├── generate-agent-file
│   ├── Model: claude-sonnet-4-5
│   ├── Tokens: 12,543 input / 3,421 output
│   ├── Duration: 8.3s
│   └── Metadata:
│       ├── fileType: agent
│       ├── placeholderCount: 12
│       └── promptSize: 25,891 chars
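
The CLI's instrumentation is internal, but you can reproduce a trace of this shape with the Langfuse TypeScript SDK. A minimal sketch using the langfuse package's v3-style API, with names and numbers mirroring the example above (the CLI's actual instrumentation may differ):

import Langfuse from "langfuse";

// Reads LANGFUSE_SECRET_KEY, LANGFUSE_PUBLIC_KEY, and LANGFUSE_BASEURL from the environment
const langfuse = new Langfuse();

// One trace per generated file
const trace = langfuse.trace({
  name: "generate-agent-file",
  metadata: { fileType: "agent", placeholderCount: 12, promptSize: 25891 },
});

// Record the LLM call as a generation nested under the trace
const generation = trace.generation({
  name: "llm-call",
  model: "claude-sonnet-4-5",
  input: "...prompt...",
});

// ...invoke the model here, then close the generation with its result...
generation.end({
  output: "...generated TypeScript...",
  usage: { input: 12543, output: 3421 },
});

// Ensure buffered events are sent before the process exits
await langfuse.flushAsync();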

Monitoring Strategies

Track Costs by Provider

Compare costs across different LLM providers:

  1. Filter traces by model in Langfuse
  2. View cumulative costs in the Usage dashboard
  3. Identify cost-saving opportunities
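
Beyond the dashboard, the same data is available from the Langfuse public API, so you can script your own cost reports. A hedged TypeScript sketch: the /api/public/traces endpoint and Basic auth (public key as username, secret key as password) follow Langfuse's API docs, but response field names such as totalCost are assumptions that may vary by Langfuse version:

// List recent traces and print per-trace latency and cost
const baseUrl = process.env.LANGFUSE_BASEURL ?? "https://cloud.langfuse.com";
const auth = Buffer.from(
  `${process.env.LANGFUSE_PUBLIC_KEY}:${process.env.LANGFUSE_SECRET_KEY}`
).toString("base64");

const res = await fetch(`${baseUrl}/api/public/traces?limit=50`, {
  headers: { Authorization: `Basic ${auth}` },
});
if (!res.ok) throw new Error(`Langfuse API error: ${res.status}`);

const { data } = await res.json();
for (const t of data) {
  // Field names here are assumptions based on the current API shape
  console.log(t.name, t.latency, t.totalCost);
}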

Optimize Generation Time

Find slow generation steps:

  1. Sort traces by duration
  2. Check if complex agents need longer timeouts
  3. Consider using faster models for simpler files

Analyze Token Savings

Monitor placeholder optimization impact:

  1. Look at the placeholderCount metadata on each trace
  2. Higher counts mean more content was replaced by placeholders, so fewer tokens were sent to the model
  3. Compare this with promptSize to quantify efficiency gains

Troubleshooting

Traces Not Appearing

Check if Langfuse is enabled:

# Should output: true
echo $LANGFUSE_ENABLED

Verify API keys are set:

# List which Langfuse variables are set (names only, so values stay hidden)
env | grep LANGFUSE | cut -d= -f1
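
If the variables are set but traces still do not appear, verify that the key pair actually authenticates against your Langfuse instance. A minimal TypeScript sketch; /api/public/projects is one authenticated endpoint to probe:

// Probe an authenticated endpoint to confirm the key pair is valid
const baseUrl = process.env.LANGFUSE_BASEURL ?? "https://cloud.langfuse.com";
const auth = Buffer.from(
  `${process.env.LANGFUSE_PUBLIC_KEY}:${process.env.LANGFUSE_SECRET_KEY}`
).toString("base64");

const res = await fetch(`${baseUrl}/api/public/projects`, {
  headers: { Authorization: `Basic ${auth}` },
});
console.log(res.ok ? "Langfuse credentials OK" : `Auth failed: HTTP ${res.status}`);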

Check for errors:

# Run with debug logging
DEBUG=* inkeep pull --project my-project

Missing Metadata

If traces appear but lack metadata:

  1. Ensure you're using the latest CLI version
  2. Check that file type context is being passed correctly
  3. Report issues on GitHub

Privacy Considerations

What Data is Sent to Langfuse

  • Prompt content: The full prompts sent to LLMs (includes your project data)
  • Generated code: The TypeScript code generated by LLMs
  • Model metadata: Model names, token counts, timings
  • File metadata: File types, sizes, placeholder counts

What is NOT Sent

  • Your API keys: LLM provider keys are never sent to Langfuse
  • Other environment variables: Only Langfuse-specific vars are used

Self-Hosted Option

For complete control over your data, you can self-host Langfuse:

# Use your self-hosted instance
LANGFUSE_BASEURL=https://langfuse.your-domain.com

See the Langfuse self-hosting docs for details.

Best Practices

  1. Enable for development: Keep tracing on during development to catch issues early
  2. Disable in CI/CD: Turn off for automated builds to avoid unnecessary traces (see the snippet after this list)
  3. Review weekly: Check Langfuse dashboard weekly to monitor costs and performance
  4. Set budgets: Configure spending alerts in your LLM provider dashboards
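
For point 2, the simplest approach is to override the flag in your CI environment so a checked-in .env cannot turn tracing on (the exact mechanism depends on your CI system):

# Set in your CI environment to force tracing off
LANGFUSE_ENABLED=false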