# OpenAI Integration

Protect your OpenAI API calls with TalonAI security.
## Proxy Mode (Recommended)
Route your OpenAI requests through TalonAI for automatic protection:
```typescript
import OpenAI from 'openai';

// Simply change the base URL to TalonAI
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: 'https://api.talonai.io/v1/proxy/openai',
  defaultHeaders: {
    'X-TalonAI-Key': process.env.TALONAI_API_KEY,
  },
});

// Use OpenAI as normal - TalonAI automatically protects all requests
const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: userInput }],
});
```

## SDK Mode
Use the TalonAI SDK to analyze content before and after sending it to OpenAI:
```typescript
import { TalonAI } from '@talonai/sdk';
import OpenAI from 'openai';

const talon = new TalonAI();
const openai = new OpenAI();

async function chat(userMessage: string) {
  // Analyze input before it reaches the model
  const inputAnalysis = await talon.analyze({
    content: userMessage,
    direction: 'input',
  });

  if (!inputAnalysis.isSafe) {
    throw new Error(`Blocked: ${inputAnalysis.blockReason}`);
  }

  // Call OpenAI
  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: userMessage }],
  });

  // content can be null (e.g. for refusals), so default to an empty string
  const output = response.choices[0].message.content ?? '';

  // Analyze the model's output
  const outputAnalysis = await talon.analyze({
    content: output,
    direction: 'output',
  });

  // Redact PII if detected
  if (outputAnalysis.pii.detected) {
    return talon.redact(output);
  }

  return output;
}
```

## Streaming Support
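In SDK mode, a streamed completion can only be fully analyzed once the stream has ended. One approach is to buffer the chunks and run a single output scan on the complete text. The sketch below is illustrative, not part of the TalonAI SDK: `scan` stands in for a call like `talon.analyze({ content, direction: 'output' })`.

```typescript
// Sketch: buffer an async stream of text chunks so the complete
// output can be scanned in one pass before it is returned.
// `scan` is an illustrative callback (true = safe), not a TalonAI API.
async function collectAndScan(
  chunks: AsyncIterable<string>,
  scan: (text: string) => Promise<boolean>,
): Promise<string> {
  let buffered = '';
  for await (const chunk of chunks) {
    buffered += chunk;
  }
  if (!(await scan(buffered))) {
    throw new Error('Blocked by output scan');
  }
  return buffered;
}
```

The trade-off is latency: the caller sees nothing until the stream completes and the scan passes. The proxy, by contrast, handles this buffering for you: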
```typescript
const stream = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: userInput }],
  stream: true,
});

// TalonAI proxy automatically buffers and scans streaming responses
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
```

## Configuration
Configure OpenAI-specific settings in your TalonAI dashboard or via the API:
- Model allowlist (restrict which models can be used)
- Token limits per request
- System prompt protection
- Function calling restrictions
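As an illustration, a policy covering these settings might look like the following. The field names here are hypothetical, not the actual TalonAI configuration schema:

```json
{
  "openai": {
    "modelAllowlist": ["gpt-4", "gpt-4-turbo"],
    "maxTokensPerRequest": 4096,
    "systemPromptProtection": true,
    "functionCalling": { "allowed": false }
  }
}
```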