AI-powered fact-checking and bullshit detection for Node.js applications. A generic package for any project requiring LLM-based fact verification.
This package detects misinformation in text using OpenAI's language models. It extracts every factual claim from the input and evaluates them all in a single LLM call, keeping latency and cost low. Well suited to real-time pipelines, content moderation, or any application that needs automated fact-checking.
- Multi-Statement Detection: Extracts and evaluates ALL factual claims in text, not just one
- Single Function API: Simple `detectBullshit(input, config?)` function
- Dual Input Support: Accepts either plain strings or OpenAI-formatted message arrays
- Single LLM Call: Extracts facts and evaluates them in one efficient operation
- Configurable: Customize OpenAI model, temperature, and token limits
- Structured Output: Returns consistent JSON arrays with bullshit levels, confidence scores, and reasoning
- TypeScript Support: Full TypeScript types and interfaces
- Generic Design: Works with any Node.js project, not tied to specific use cases
```bash
npm install @josheverett/bullshit-detector
```
Set your OpenAI API key as an environment variable:
```bash
export OPENAI_API_KEY="your-openai-api-key"
```
```typescript
import { detectBullshit } from '@josheverett/bullshit-detector';

const result = await detectBullshit("The Earth has 27 billion people and the moon is made of cheese.");

console.log(result);
// [
//   {
//     transcript: "The Earth has 27 billion people and the moon is made of cheese.",
//     claim: "The Earth has 27 billion people",
//     summary: "Claims about Earth's population and moon's composition",
//     bullshitLevel: 5,
//     confidence: 5,
//     reasoning: "The actual population is approximately 8 billion, not 27 billion",
//     truth: "The Earth has approximately 8 billion people"
//   },
//   {
//     transcript: "The Earth has 27 billion people and the moon is made of cheese.",
//     claim: "The moon is made of cheese",
//     summary: "Claims about Earth's population and moon's composition",
//     bullshitLevel: 5,
//     confidence: 5,
//     reasoning: "The moon is composed primarily of rock and dust, not cheese",
//     truth: "The moon is a rocky celestial body composed mainly of silicate minerals"
//   }
// ]
```
```typescript
import { detectBullshit, BullshitDetectionConfig } from '@josheverett/bullshit-detector';

const config: BullshitDetectionConfig = {
  model: 'gpt-4.1-2025-04-14', // Use the latest model
  temperature: 1,              // OpenAI's model default (the package default is 0)
  maxTokens: 2000              // Allow longer responses
};

const result = await detectBullshit("Some complex text to analyze", config);
```
```typescript
import { detectBullshit, OpenAIMessage } from '@josheverett/bullshit-detector';

const messages: OpenAIMessage[] = [
  { role: 'user', content: 'I read that vaccines contain microchips for tracking' },
  { role: 'assistant', content: 'That is not accurate. Can you tell me more about where you heard this?' },
  { role: 'user', content: 'Well, I saw it on social media. Also, did you know the moon landing was faked?' }
];

const results = await detectBullshit(messages);
// Analyzes the most recent user message for all factual claims
```
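Because the input already uses the standard OpenAI message shape, detection results can be folded straight back into the conversation. A minimal sketch continuing the example above (the reply-building logic is illustrative, not part of the package):

```typescript
// Turn confidently-flagged claims into a corrective assistant reply (illustrative only).
const corrections = results
  .filter(r => r.bullshitLevel >= 4 && r.confidence >= 4)
  .map(r => `"${r.claim}": ${r.truth}`);

if (corrections.length > 0) {
  messages.push({
    role: 'assistant',
    content: `A few claims worth double-checking:\n${corrections.join('\n')}`
  });
}
```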
`detectBullshit(input, config?)` is the main function for bullshit detection.
Parameters:
- `input: string | OpenAIMessage[]`: Either a text string or an array of OpenAI-formatted messages
- `config?: BullshitDetectionConfig`: Optional configuration object

Returns `Promise<BullshitDetectionResult[]>`, an array of detection results, one per factual claim.
```typescript
interface BullshitDetectionResult {
  transcript: string;    // The input text that was analyzed
  claim: string;         // The specific factual statement evaluated
  summary: string;       // A concise summary of the input
  bullshitLevel: number; // 0-5 scale (0 = no bullshit, 5 = maximum bullshit)
  confidence: number;    // 0-5 scale confidence in the evaluation
  reasoning: string;     // Explanation of why this level was assigned
  truth: string;         // The accurate facts or corrected information
}
```
```typescript
interface BullshitDetectionConfig {
  model?: string;       // OpenAI model to use (default: 'gpt-4.1-2025-04-14')
  temperature?: number; // Temperature for LLM calls (default: 0)
  maxTokens?: number;   // Maximum tokens in response (default: 1500)
}
```
```typescript
interface OpenAIMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}
```
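The two 0-5 scales combine naturally into simple policies: `bullshitLevel` says how wrong a claim is, `confidence` says how sure the model is about that judgment. A minimal sketch of threshold-based triage, assuming `BullshitDetectionResult` is exported alongside the function (the `actionFor` helper and its cutoffs are illustrative, not part of the package):

```typescript
import { detectBullshit, BullshitDetectionResult } from '@josheverett/bullshit-detector';

// Illustrative cutoffs only; tune them for your application.
function actionFor(result: BullshitDetectionResult): 'block' | 'warn' | 'pass' {
  if (result.bullshitLevel >= 4 && result.confidence >= 4) return 'block'; // confidently false
  if (result.bullshitLevel >= 2) return 'warn';                           // dubious or uncertain
  return 'pass';
}

const results = await detectBullshit("The Great Wall of China is visible from the Moon.");
for (const r of results) {
  console.log(`${actionFor(r)}: "${r.claim}" (level ${r.bullshitLevel}, confidence ${r.confidence})`);
}
```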
For applications that prefer a class-based approach, the original interface is still available:
```typescript
import { BullshitDetector } from '@josheverett/bullshit-detector';

const detector = new BullshitDetector();
const evaluations = await detector.analyzeTranscript("Some text to analyze");
// Returns StatementEvaluation[] (similar to BullshitDetectionResult but without transcript/summary)
```
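The evaluations can be consumed much like `detectBullshit` results. A minimal sketch, assuming each `StatementEvaluation` carries the per-claim fields shown above (`claim`, `bullshitLevel`, `confidence`, `reasoning`, `truth`):

```typescript
for (const evaluation of evaluations) {
  if (evaluation.bullshitLevel >= 4) {
    console.warn(`Dubious claim: ${evaluation.claim}`);
    console.warn(`Why: ${evaluation.reasoning}`);
    console.warn(`Correction: ${evaluation.truth}`);
  }
}
```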
The function throws descriptive errors for common issues:
```typescript
try {
  const results = await detectBullshit("Some text");
} catch (error) {
  // In strict TypeScript, `error` is `unknown`, so narrow it first.
  const message = error instanceof Error ? error.message : String(error);
  if (message.includes('OPENAI_API_KEY')) {
    console.error('Please set your OpenAI API key');
  } else if (message.includes('No factual claims found')) {
    console.log('Input contains no verifiable factual statements');
  } else {
    console.error('Detection failed:', message);
  }
}
```
Perfect for real-time fact-checking in various applications:
```typescript
// In a real-time pipeline
const transcriptResults = await detectBullshit(userInput);

if (transcriptResults.some(r => r.bullshitLevel > 3)) {
  // Handle misinformation appropriately
}
```
Analyze user-generated content for factual accuracy:
```typescript
const contentResults = await detectBullshit(userPost);

contentResults.forEach(result => {
  if (result.bullshitLevel >= 4 && result.confidence >= 4) {
    flagForReview(result); // flagForReview: your application's review-queue handler
  }
});
```
Help students identify misinformation in texts:
```typescript
const analysisResults = await detectBullshit(studentEssay);
// Show corrections and reasoning to help learning
```
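For example, the `reasoning` and `truth` fields can be turned directly into feedback for the student. A minimal sketch (the formatting and the `bullshitLevel >= 3` cutoff are illustrative):

```typescript
// Build human-readable feedback from each flagged claim (illustrative only).
const feedback = analysisResults
  .filter(r => r.bullshitLevel >= 3)
  .map((r, i) => [
    `${i + 1}. Claim: ${r.claim}`,
    `   Why it's questionable: ${r.reasoning}`,
    `   The facts: ${r.truth}`
  ].join('\n'));

console.log(feedback.join('\n\n'));
```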
License: MIT