Quickstart Guide

Get TalonAI integrated into your application in under 5 minutes.

Prerequisites

  • Node.js 18+ or Python 3.8+
  • A TalonAI API key (get one at app.talonai.io)
  • An existing LLM integration (OpenAI, Anthropic, etc.)

Installation

Install the TalonAI SDK for your language (the Node.js package is shown below):

npm install @talonai/sdk

Basic Usage

The simplest way to use TalonAI is to analyze content before sending it to your LLM:

app.ts
import TalonAI from '@talonai/sdk';

// Initialize with your API key
const talon = new TalonAI('sk_live_your_api_key');

// Analyze user input
async function processUserMessage(message: string) {
  const analysis = await talon.analyze(message);

  if (!analysis.allowed) {
    console.log('Blocked:', analysis.threats);
    return { error: 'Message blocked for security reasons' };
  }

  // Safe to proceed with your LLM call
  // ... your OpenAI/Anthropic code here
}

API Key Security

Never expose your API key in client-side code. Always make TalonAI calls from your backend server.
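One way to keep the key server-side is to load it from an environment variable and fail fast when it is missing. This is a minimal sketch; the variable name `TALONAI_API_KEY` and the helper `requireApiKey` are illustrative conventions, not part of the SDK:

```typescript
// Resolve the API key from the server environment; never hard-code it
// or ship it in a client bundle. TALONAI_API_KEY is an assumed name.
function requireApiKey(): string {
  const apiKey = process.env.TALONAI_API_KEY;
  if (!apiKey) {
    throw new Error('TALONAI_API_KEY is not set');
  }
  return apiKey;
}

// Then construct the client on your backend:
// const talon = new TalonAI(requireApiKey());
```

Throwing at startup when the variable is absent surfaces a misconfigured deployment immediately, rather than failing on the first user request.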

Integration with OpenAI

Here's a complete example with OpenAI:

chat-with-openai.ts
import TalonAI from '@talonai/sdk';
import OpenAI from 'openai';

const talon = new TalonAI('sk_live_...');
const openai = new OpenAI();

async function chat(userMessage: string) {
  // 1. Analyze the input
  const inputAnalysis = await talon.analyze(userMessage);

  if (!inputAnalysis.allowed) {
    return {
      error: 'Your message was blocked for security reasons',
      riskScore: inputAnalysis.riskScore,
    };
  }

  // 2. Make the LLM call
  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: userMessage },
    ],
  });

  // The OpenAI SDK types content as string | null, so fall back to ''
  const assistantMessage = response.choices[0].message.content ?? '';

  // 3. Optionally analyze the output
  const outputAnalysis = await talon.analyze(assistantMessage);

  if (!outputAnalysis.allowed) {
    return {
      error: 'Response blocked due to policy violation',
      riskScore: outputAnalysis.riskScore,
    };
  }

  return { message: assistantMessage };
}

Using the Protect Endpoint

The protect endpoint combines analysis with automatic content sanitization:

protect-example.ts
const result = await talon.protect(userMessage);

// result.allowed - whether the content passed security checks
// result.sanitizedContent - cleaned content with PII redacted
// result.analysis - full security analysis

if (result.allowed) {
  // Use result.sanitizedContent instead of the original message
  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [{ role: 'user', content: result.sanitizedContent }],
  });
}

Configuration Options

Customize TalonAI behavior with configuration options:

config.ts
const talon = new TalonAI('sk_live_...', {
  // Risk threshold (0-100). Higher = more permissive
  riskThreshold: 70,

  // Categories to detect
  categories: [
    'PROMPT_INJECTION',
    'JAILBREAK',
    'PII_EXPOSURE',
    'DATA_LEAKAGE',
    'TOXIC_CONTENT',
  ],

  // Enable detailed explanations
  detailed: true,

  // Custom timeout (ms)
  timeout: 5000,
});

Error Handling

Always handle potential errors gracefully:

error-handling.ts
import TalonAI, { TalonAIError, RateLimitError } from '@talonai/sdk';

// Small helper used below for retry backoff
const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

try {
  const analysis = await talon.analyze(message);
  // ...
} catch (error) {
  if (error instanceof RateLimitError) {
    // Wait and retry
    await sleep(error.retryAfter * 1000);
    return processUserMessage(message);
  }

  if (error instanceof TalonAIError) {
    console.error('TalonAI error:', error.message);
    // Decide: fail open or fail closed
    return failClosed ? { error: 'Security check failed' } : { message };
  }

  throw error;
}

Fail Open vs Fail Closed

Decide your error handling strategy carefully. "Fail open" allows requests through when TalonAI is unavailable, while "fail closed" blocks them. For high-security applications, prefer fail closed.
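A fail-closed policy can be isolated in a small wrapper so the decision is made in one place. This is a sketch under assumed names: `checkFailClosed` and the `Verdict` shape are illustrative, and `analyze` stands in for whatever check you run (e.g. `talon.analyze`):

```typescript
// Minimal shape of a security verdict (assumed for this sketch)
type Verdict = { allowed: boolean; reason?: string };

// Fail-closed wrapper: if the analyzer itself errors (timeout, outage),
// treat the input as blocked instead of letting it through unchecked.
async function checkFailClosed(
  analyze: (input: string) => Promise<Verdict>,
  input: string,
): Promise<Verdict> {
  try {
    return await analyze(input);
  } catch {
    return { allowed: false, reason: 'security check unavailable' };
  }
}
```

A fail-open variant would return `{ allowed: true }` in the catch branch instead; centralizing the choice here keeps the rest of your request path identical either way.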

Next Steps