Parameters:
- The OpenAI SDK client instance to wrap with tracing (required).
- langfuseConfig: LangfuseConfig (optional) — configuration for tracing behavior.

Returns:
- A proxied version of the OpenAI SDK with automatic tracing.
import OpenAI from 'openai';
import { observeOpenAI } from '@langfuse/openai';
const openai = observeOpenAI(new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
}));

// All OpenAI calls are now automatically traced
const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello!' }],
  max_tokens: 100,
  temperature: 0.7,
});
// With custom tracing configuration (separate client to avoid redeclaring `openai`)
const tracedOpenai = observeOpenAI(new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
}), {
  traceName: 'AI-Assistant-Chat',
  userId: 'user-123',
  sessionId: 'session-456',
  tags: ['production', 'chat-feature'],
  generationName: 'gpt-4-chat-completion',
});

const completion = await tracedOpenai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Explain quantum computing' }],
});
// Streaming responses are also automatically traced
const stream = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Write a story' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
// Final usage details and complete output are captured automatically
Wraps an OpenAI SDK client with automatic Langfuse tracing.
This function creates a proxy around the OpenAI SDK that automatically traces all method calls, capturing detailed information about requests, responses, token usage, costs, and performance metrics. It works with both streaming and non-streaming OpenAI API calls.
The wrapper recursively traces nested objects in the OpenAI SDK, ensuring that all API calls (chat completions, embeddings, fine-tuning, etc.) are automatically captured as Langfuse generations.
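To make the recursive-proxy idea concrete, here is a minimal, self-contained sketch of the pattern — not the library's actual implementation. It wraps an object in a `Proxy` so that method calls on arbitrarily nested namespaces (mirroring `openai.chat.completions.create`) are intercepted and recorded, which is the mechanism the description above refers to. The `traceProxy` helper and the fake SDK object are illustrative inventions:

```typescript
// A sketch of recursive proxy tracing (hypothetical helper, not @langfuse/openai internals).
type TraceEntry = { path: string; args: unknown[] };

function traceProxy<T extends object>(target: T, path: string[], log: TraceEntry[]): T {
  return new Proxy(target, {
    get(obj, prop, receiver) {
      const value = Reflect.get(obj, prop, receiver);
      const nextPath = [...path, String(prop)];
      if (typeof value === 'function') {
        // Intercept method calls: record the dotted path and arguments, then delegate.
        return (...args: unknown[]) => {
          log.push({ path: nextPath.join('.'), args });
          return value.apply(obj, args);
        };
      }
      if (value !== null && typeof value === 'object') {
        // Recurse into nested namespaces so calls like chat.completions.create are captured.
        return traceProxy(value as object, nextPath, log);
      }
      return value;
    },
  });
}

// Fake "SDK" with nested namespaces, standing in for the real OpenAI client.
const fakeSdk = {
  chat: {
    completions: {
      create(params: { model: string }) {
        return { model: params.model, ok: true };
      },
    },
  },
};

const log: TraceEntry[] = [];
const traced = traceProxy(fakeSdk, [], log);
const result = traced.chat.completions.create({ model: 'gpt-4' });
console.log(log[0].path); // chat.completions.create
console.log(result.ok);   // true
```

The real wrapper does considerably more at the interception point — capturing request parameters, response payloads, token usage, and timing, and handling streamed responses — but the recursive `get` trap is what lets a single `observeOpenAI` call cover every nested API surface without enumerating them.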