Quickstart Guide
Get TalonAI integrated into your application in under 5 minutes.
Prerequisites
- Node.js 18+ or Python 3.8+
- A TalonAI API key (get one at app.talonai.io)
- An existing LLM integration (OpenAI, Anthropic, etc.)
Installation
Install the TalonAI SDK for your language. The examples in this guide use the Node.js SDK:

```bash
npm install @talonai/sdk
```
Basic Usage
The simplest way to use TalonAI is to analyze content before sending it to your LLM:
app.ts
```ts
import TalonAI from '@talonai/sdk';

// Initialize with your API key
const talon = new TalonAI('sk_live_your_api_key');

// Analyze user input
async function processUserMessage(message: string) {
  const analysis = await talon.analyze(message);

  if (!analysis.allowed) {
    console.log('Blocked:', analysis.threats);
    return { error: 'Message blocked for security reasons' };
  }

  // Safe to proceed with your LLM call
  // ... your OpenAI/Anthropic code here
}
```
API Key Security
Never expose your API key in client-side code. Always make TalonAI calls from your backend server.
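For example, load the key from your server's environment rather than hard-coding it. (TALONAI_API_KEY is an illustrative variable name, not something the SDK requires.)

```ts
import TalonAI from '@talonai/sdk';

// Read the key from the server environment instead of hard-coding it.
// TALONAI_API_KEY is an illustrative name, not an SDK requirement.
const apiKey = process.env.TALONAI_API_KEY;
if (!apiKey) {
  throw new Error('TALONAI_API_KEY is not set');
}

const talon = new TalonAI(apiKey);
```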
Integration with OpenAI
Here's a complete example with OpenAI:
chat-with-openai.ts
```ts
import TalonAI from '@talonai/sdk';
import OpenAI from 'openai';

const talon = new TalonAI('sk_live_...');
const openai = new OpenAI();

async function chat(userMessage: string) {
  // 1. Analyze the input
  const inputAnalysis = await talon.analyze(userMessage);

  if (!inputAnalysis.allowed) {
    return {
      error: 'Your message was blocked for security reasons',
      riskScore: inputAnalysis.riskScore,
    };
  }

  // 2. Make the LLM call
  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: userMessage },
    ],
  });

  // content can be null (e.g. for refusals), so default to an empty string
  const assistantMessage = response.choices[0].message.content ?? '';

  // 3. Optionally analyze the output
  const outputAnalysis = await talon.analyze(assistantMessage);

  if (!outputAnalysis.allowed) {
    return {
      error: 'Response blocked due to policy violation',
      riskScore: outputAnalysis.riskScore,
    };
  }

  return { message: assistantMessage };
}
```
Using the Protect Endpoint
The protect endpoint combines analysis with automatic content sanitization:
protect-example.ts
```ts
// `talon` and `openai` are initialized as in the previous examples
const result = await talon.protect(userMessage);

// result.allowed - whether the content passed security checks
// result.sanitizedContent - cleaned content with PII redacted
// result.analysis - full security analysis

if (result.allowed) {
  // Use result.sanitizedContent instead of the original message
  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: result.sanitizedContent }],
  });
}
```
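To see how this fits into a real backend, here is a sketch of a minimal Express route that runs protect before the LLM call. The route path, request shape, and error response are illustrative choices, not part of the TalonAI API; `talon` and `openai` are assumed to be initialized as above.

```ts
import express from 'express';

const app = express();
app.use(express.json());

// Illustrative endpoint: run TalonAI's protect step before calling the LLM
app.post('/chat', async (req, res) => {
  const result = await talon.protect(req.body.message);

  if (!result.allowed) {
    // Surface the analysis so the client can explain the block
    return res.status(400).json({ error: 'Message blocked', analysis: result.analysis });
  }

  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: result.sanitizedContent }],
  });

  res.json({ message: response.choices[0].message.content });
});

app.listen(3000);
```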
Configuration Options
Customize TalonAI behavior with configuration options:
config.ts
```ts
const talon = new TalonAI('sk_live_...', {
  // Risk threshold (0-100). Higher = more permissive
  riskThreshold: 70,

  // Categories to detect
  categories: [
    'PROMPT_INJECTION',
    'JAILBREAK',
    'PII_EXPOSURE',
    'DATA_LEAKAGE',
    'TOXIC_CONTENT',
  ],

  // Enable detailed explanations
  detailed: true,

  // Custom timeout (ms)
  timeout: 5000,
});
```
Error Handling
Always handle potential errors gracefully:
error-handling.ts
```ts
import TalonAI, { TalonAIError, RateLimitError } from '@talonai/sdk';

// Simple sleep helper for retry backoff
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Whether to block requests when TalonAI itself fails (see below)
const failClosed = true;

try {
  const analysis = await talon.analyze(message);
  // ...
} catch (error) {
  if (error instanceof RateLimitError) {
    // Wait and retry
    await sleep(error.retryAfter * 1000);
    return processUserMessage(message);
  }

  if (error instanceof TalonAIError) {
    console.error('TalonAI error:', error.message);
    // Decide: fail open or fail closed
    return failClosed ? { error: 'Security check failed' } : { message };
  }

  throw error;
}
```
Fail Open vs Fail Closed
Decide your error handling strategy carefully. "Fail open" allows requests through when TalonAI is unavailable, while "fail closed" blocks them. For high-security applications, prefer fail closed.
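One way to make the choice explicit is to wrap the analysis call in a small helper that takes the strategy as a parameter. This is a sketch, not part of the SDK; safeAnalyze and the FailMode type are illustrative names.

```ts
import { TalonAIError } from '@talonai/sdk';

type FailMode = 'open' | 'closed';

// Illustrative helper: centralizes the fail-open/fail-closed decision.
// Returns whether the message should be allowed through to the LLM.
async function safeAnalyze(message: string, failMode: FailMode): Promise<boolean> {
  try {
    const analysis = await talon.analyze(message);
    return analysis.allowed;
  } catch (error) {
    if (error instanceof TalonAIError) {
      // TalonAI is unavailable or errored:
      // fail open lets the request through; fail closed blocks it.
      return failMode === 'open';
    }
    throw error;
  }
}

// High-security applications should prefer fail closed:
const allowed = await safeAnalyze(userMessage, 'closed');
```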