Complete guide to token optimization with the TOON format.
Count tokens in a plain string:

```ts
import { countTokensInText } from '@programsmagic/toon-tokenizer';

const result = countTokensInText('Hello, world!', 'gpt-4');
console.log(result.tokens);        // token count for the chosen model
console.log(result.estimatedCost); // estimated cost of those tokens
```
Count tokens in structured data:

```ts
import { countTokensInData } from '@programsmagic/toon-tokenizer';

const data = { user: { id: 123, name: 'Ada' } };
const result = countTokensInData(data, 'gpt-4');
console.log(result.tokens);
```
Compare all supported formats to find the cheapest representation of your data:

```ts
import { compareAllFormats } from '@programsmagic/toon-converter';

const data = { users: [{ id: 1, name: 'Alice' }, { id: 2, name: 'Bob' }] };
const comparison = compareAllFormats(data);
console.log(comparison.best);    // Recommended format
console.log(comparison.savings); // Savings by format
```
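To act on the comparison, you can pick the recommended format and report per-format savings. The loop below is a sketch that assumes `comparison.savings` is a plain object keyed by format name; check the actual result shape in your installed version.

```ts
import { compareAllFormats } from '@programsmagic/toon-converter';

const data = { users: [{ id: 1, name: 'Alice' }, { id: 2, name: 'Bob' }] };
const comparison = compareAllFormats(data);

console.log(`Best format: ${comparison.best}`);
// Assumption: savings is an object mapping format name -> savings figure.
for (const [format, saved] of Object.entries(comparison.savings)) {
  console.log(`${format}: ${saved}`);
}
```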
Arrays of objects are perfect for TOON:

```ts
const data = [
  { id: 1, name: 'Alice' },
  { id: 2, name: 'Bob' }
];
// TOON: 30-60% token reduction
```
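The reduction comes from TOON's tabular layout for uniform arrays: field names are declared once in a header and each element becomes a compact row of values. The sketch below assumes `encodeToon` is exported by `@programsmagic/toon-converter`, and the commented output is illustrative rather than verbatim package output.

```ts
// Sketch only: the exact TOON syntax may vary by package version and options.
import { encodeToon } from '@programsmagic/toon-converter';

const data = [
  { id: 1, name: 'Alice' },
  { id: 2, name: 'Bob' }
];

console.log(encodeToon(data));
// Roughly:
// [2]{id,name}:
//   1,Alice
//   2,Bob
```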
Use ultra-minimization for LLM prompts:

```ts
import { encodeToon } from '@programsmagic/toon-converter'; // assumed export location

const toon = encodeToon(data, { minimize: true });
// Strips all non-essential whitespace
```
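To verify the benefit on your own payloads, compare token counts for the default and minimized encodings with `countTokensInText`. This is a sketch under the same assumption as above about where `encodeToon` is exported from.

```ts
import { countTokensInText } from '@programsmagic/toon-tokenizer';
import { encodeToon } from '@programsmagic/toon-converter'; // assumed export location

const data = [
  { id: 1, name: 'Alice' },
  { id: 2, name: 'Bob' }
];

const pretty = encodeToon(data);
const minimal = encodeToon(data, { minimize: true });

// The minimized encoding should not cost more tokens than the readable one.
console.log(countTokensInText(pretty, 'gpt-4').tokens);
console.log(countTokensInText(minimal, 'gpt-4').tokens);
```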
Let the system choose the best format:

```ts
import { selectOptimalFormat } from '@programsmagic/toon-converter';

const selection = selectOptimalFormat(data);
// Returns recommended format with alternatives
```
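Only the general shape of the result is described here (a recommendation plus alternatives), so a practical first step is to log the whole object and adapt to the fields your installed version exposes.

```ts
import { selectOptimalFormat } from '@programsmagic/toon-converter';

const data = { users: [{ id: 1, name: 'Alice' }, { id: 2, name: 'Bob' }] };
const selection = selectOptimalFormat(data);

// Inspect the recommendation and the listed alternatives.
console.log(JSON.stringify(selection, null, 2));
```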
Analyze which fields consume the most tokens:

```ts
import { analyzeTokensPerField } from '@programsmagic/toon-tokenizer';

const analysis = analyzeTokensPerField(data, 'gpt-4');
console.log(analysis.topFields); // Top N most expensive fields
```
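A natural follow-up is to trim or summarize the most expensive fields before sending the payload to a model. The sketch below assumes `topFields` is an array; each entry's exact shape is not documented here, so it only logs the entries for inspection.

```ts
import { analyzeTokensPerField } from '@programsmagic/toon-tokenizer';

const data = { users: [{ id: 1, name: 'Alice', bio: 'A long biography field' }] };
const analysis = analyzeTokensPerField(data, 'gpt-4');

// Assumption: topFields is iterable; inspect entries before relying on their fields.
for (const entry of analysis.topFields) {
  console.log(entry);
}
```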
License: MIT