Show detailed request/response data for debugging API calls and LLM processing
When enabled, debug mode displays:
- Request JSON sent to backend
- Raw LLM response before processing
- Processed response after parsing
- API endpoint and timing information
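The debug data listed above can be pictured as a single record collected around each backend call. A minimal sketch, assuming illustrative field and function names (not the app's actual schema):

```python
import time
from dataclasses import dataclass

@dataclass
class DebugRecord:
    """Illustrative container for the debug data listed above."""
    request_json: dict          # request JSON sent to the backend
    raw_response: str           # raw LLM output before processing
    processed_response: dict    # processed response after parsing
    endpoint: str               # API endpoint that was called
    elapsed_ms: float           # request timing

def timed_call(endpoint: str, request_json: dict, send) -> DebugRecord:
    """Wrap a backend call and collect debug info. `send` is any callable
    that takes the request dict and returns (raw_text, parsed_dict)."""
    start = time.perf_counter()
    raw, parsed = send(request_json)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return DebugRecord(request_json, raw, parsed, endpoint, elapsed_ms)
```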
Configure API keys, models, and system prompts
When enabled, the analysis pauses after the Detect, Debate, and Design phases, letting you chat with the output before proceeding to the next phase. All conversation context is passed to subsequent phases.
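One way to picture the pause-and-chat flow is a loop that accumulates context across phases. A sketch only; the function names are hypothetical, not TruthForge's API:

```python
def run_phases(phases, run_phase, chat_with_user):
    """Run phases in order, pausing after each for user chat; everything
    accumulated so far is passed into the next phase. (Sketch only --
    `run_phase` and `chat_with_user` are hypothetical callables.)"""
    context = []
    for phase in phases:
        output = run_phase(phase, context)       # phase sees all prior context
        context.append({"phase": phase, "output": output})
        # user chat turns are also carried forward to later phases
        context.extend(chat_with_user(phase, output))
    return context
```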
Choose how the debate arena is displayed
The intelligent proxy automatically selects the optimal model based on task complexity
Falls back to cheaper models if primary model fails
Provides access to multiple LLM providers
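The fallback behavior described above amounts to trying models in order of preference. A minimal sketch, assuming a generic `call_model` callable (not the proxy's real logic):

```python
def call_with_fallback(prompt, models, call_model):
    """Try models in preference order, falling back to cheaper ones when
    a call fails. `models` might be e.g. ["primary-large", "cheap-small"];
    the names here are illustrative, not real provider model IDs."""
    last_error = None
    for model in models:
        try:
            return model, call_model(model, prompt)
        except Exception as err:
            last_error = err  # remember the failure and try the next model
    raise RuntimeError(f"all models failed: {last_error}")
```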
Your personal API key for accessing TruthForge programmatically or via email. Keep this key secure and do not share it publicly.
Send narratives to your unique TruthForge email address for analysis. Simply email your content and receive analysis results back.
OpenAI-compatible streaming endpoint for chat
Leave empty to use TruthForge API
Model used for interactive chat
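An OpenAI-compatible streaming endpoint emits server-sent events whose `data:` payloads carry incremental deltas, terminated by `data: [DONE]`. A minimal parser sketch for assembling the streamed text:

```python
import json

def extract_stream_text(sse_lines):
    """Collect assistant text from OpenAI-style streaming chunks.
    Each event line looks like 'data: {...}'; the stream ends with
    'data: [DONE]'."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip keep-alives and blank lines
        payload = line[len("data: "):]
        if payload.strip() == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)
```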
Maximum width of chat interface (600-3000px, 0 for full width)
Remove saved analysis results and chat history from browser storage
Used for all analysis steps unless overridden
Upload markdown playbooks to guide design pillar response generation
No playbooks uploaded yet
Automatically applied to new analysis sessions