Settings

Configure API keys, models, and system prompts

Debug Mode

Show detailed request/response data for debugging API calls and LLM processing

When enabled, debug mode displays:

  • Request JSON sent to backend
  • Raw LLM response before processing
  • Processed response after parsing
  • API endpoint and timing information
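
As a rough illustration, a single debug entry might look like the sketch below; the field names are hypothetical, not the exact keys TruthForge uses.

```python
# Hypothetical shape of one debug-mode entry; actual field names may differ.
debug_entry = {
    "endpoint": "https://api.example.com/v1/analyze",       # API endpoint called
    "request": {"narrative": "...", "phase": "detect"},      # request JSON sent to backend
    "raw_response": "<unparsed LLM output>",                 # raw LLM response before processing
    "processed_response": {"claims": [], "verdict": None},   # processed response after parsing
    "timing_ms": 2480,                                       # request timing
}
```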

Phase Checkpoints

When enabled, the analysis will pause after Detect, Debate, and Design phases, allowing you to chat with the output before proceeding to the next phase. All conversation context is passed to subsequent phases.
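
A minimal sketch of how such a checkpointed run could work, assuming a simple loop over the three phases; run_phase and chat_with_user are hypothetical helpers, not TruthForge APIs.

```python
# Illustrative checkpoint loop: pause after each phase, let the user chat with
# the output, and carry the full conversation context into the next phase.
def run_with_checkpoints(narrative, run_phase, chat_with_user):
    context = [{"role": "user", "content": narrative}]
    for phase in ("detect", "debate", "design"):
        output = run_phase(phase, context)              # run one analysis phase
        context.append({"role": "assistant", "content": output})
        context.extend(chat_with_user(output))          # checkpoint: chat before proceeding
    return context                                      # full history is passed to every phase
```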

Debate Display Mode

Choose how the debate arena is displayed

Model Proxy

Intelligent proxy automatically selects optimal model based on task complexity

Falls back to cheaper models if primary model fails

Balanced Mode: GPT-4 (~2.5s, $0.015/request)
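
A rough sketch of the selection-and-fallback idea, assuming a simple complexity score and an ordered list of candidate models; the scoring rule and model names are illustrative only.

```python
# Illustrative proxy logic: pick a model by task complexity, then fall back
# to cheaper candidates if the call fails.
def proxy_complete(prompt, call_model, estimate_complexity):
    candidates = ["gpt-4", "gpt-4o-mini", "gpt-3.5-turbo"]   # most to least capable (placeholders)
    start = 0 if estimate_complexity(prompt) > 0.7 else 1    # simple tasks skip the top model
    last_error = None
    for model in candidates[start:]:
        try:
            return call_model(model, prompt)
        except Exception as err:                             # model failed: try a cheaper one
            last_error = err
    raise RuntimeError("all candidate models failed") from last_error
```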

API Keys

Provides access to multiple LLM providers

Your TruthForge API Key

Your personal API key for accessing TruthForge programmatically or via email. Keep this key secure and do not share it publicly.

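For programmatic access, a request might look like the sketch below; the endpoint URL, payload fields, and Bearer-style auth header are assumptions rather than documented values.

```python
# Hypothetical example of calling TruthForge with your personal API key.
import requests

API_KEY = "tf-..."  # keep this out of source control

response = requests.post(
    "https://truthforge.example.com/api/analyze",    # placeholder endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},  # assumed auth scheme
    json={"narrative": "Text to analyze..."},
    timeout=60,
)
response.raise_for_status()
print(response.json())
```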

Email Integration

Send narratives to your unique TruthForge email address for analysis. Simply email your content and receive analysis results back.

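As a sketch, sending a narrative from a script could look like the following, using Python's standard smtplib; the addresses and SMTP server are placeholders for your own values.

```python
# Hypothetical example of emailing a narrative to TruthForge for analysis.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "you@example.com"
msg["To"] = "your-unique-id@truthforge.example.com"   # your unique TruthForge address
msg["Subject"] = "Narrative for analysis"
msg.set_content("Paste the narrative you want analyzed here.")

with smtplib.SMTP("smtp.example.com", 587) as server:  # your own outgoing mail server
    server.starttls()
    server.login("you@example.com", "app-password")
    server.send_message(msg)
```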

Chat Interface

  • OpenAI-compatible streaming endpoint for chat (leave empty to use the TruthForge API)
  • Model used for interactive chat
  • Maximum width of the chat interface (600-3000px, 0 for full width)
  • Remove saved analysis results and chat history from browser storage
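
Because the endpoint is OpenAI-compatible, any OpenAI-style client should be able to stream from it. Here is a sketch using the official openai Python package, with the base URL, API key, and model name as placeholders for whatever you configure here.

```python
# Streaming chat against an OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="https://your-endpoint.example.com/v1",  # the endpoint configured in Settings
    api_key="tf-...",                                  # your API key
)

stream = client.chat.completions.create(
    model="gpt-4o-mini",                               # the model configured for interactive chat
    messages=[{"role": "user", "content": "Summarize the last analysis."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```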

Default Provider & Model

Used for all analysis steps unless overridden

Role-Specific Models

Workflow Step Models
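
A sketch of how model selection could resolve, assuming the most specific setting wins (workflow step model, then role-specific model, then the default); this ordering is an assumption, not documented behavior.

```python
# Hypothetical resolution order for which model an analysis step uses.
def resolve_model(step, role, step_models, role_models, default_model):
    return (
        step_models.get(step)      # workflow step override, if configured
        or role_models.get(role)   # role-specific override, if configured
        or default_model           # otherwise the default provider & model
    )
```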

System Prompts

Strategy Playbooks

Upload markdown playbooks to guide design pillar response generation

Uploaded playbooks are automatically applied to new analysis sessions