Parameters Reference

Guide to AI model configuration parameters.


Current Implementation

The SF Explorer AI integration currently uses default parameters for all AI calls. Model selection is the primary way to control output behavior.
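
For illustration, this is roughly what a call with default parameters looks like against an OpenAI-style chat completions endpoint. The endpoint, key handling, and helper name are assumptions for this sketch, not SF Explorer's actual implementation; only the model and prompt are supplied, so every sampling parameter falls back to the provider default.

```typescript
// Minimal sketch (assumed OpenAI-compatible endpoint, not SF Explorer internals):
// only the model and prompt are set, so temperature, max tokens, and top_p
// stay at the provider defaults.
async function askModel(model: string, prompt: string): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`, // hypothetical key source
    },
    body: JSON.stringify({
      model, // the only behavior control available today
      messages: [{ role: "user", content: prompt }],
      // temperature, max_tokens, top_p intentionally omitted → provider defaults
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}

// Example: quick answer → small, fast model
// const answer = await askModel("gpt-4o-mini", "Summarize this record in one sentence.");
```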

Supported Parameters

| Parameter | Status | Notes |
| --- | --- | --- |
| Model Selection | ✅ Supported | Choose model via Settings or API |
| Prompt | ✅ Supported | Your input text |
| Temperature | 🔜 Planned | Not yet configurable |
| Max Tokens | 🔜 Planned | Not yet configurable |
| Top P | 🔜 Planned | Not yet configurable |

Model Selection (Primary Control)

Since advanced parameters aren't yet configurable, choose the right model for your use case:

| Use Case | Recommended Model | Why |
| --- | --- | --- |
| Quick answers | GPT-4o Mini, Claude 3 Haiku | Fast, cost-effective |
| Complex reasoning | GPT-4.1, Claude Sonnet 4 | Better accuracy |
| Code generation | GPT-4o, GPT-4.1 | Optimized for code |
| Creative content | GPT-5, Claude Sonnet 4.5 | Higher creativity |
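
As a sketch, the table above could be encoded as a simple lookup. The keys and values here are illustrative display names from this page; the model identifiers each provider's API actually accepts (for example, "gpt-4o-mini") are different strings.

```typescript
// Illustrative only: display names as placeholders, not real API model IDs.
const recommendedModel: Record<string, string> = {
  quickAnswers: "GPT-4o Mini",
  complexReasoning: "Claude Sonnet 4",
  codeGeneration: "GPT-4.1",
  creativeContent: "Claude Sonnet 4.5",
};

// Example: pick a fast model for a short summarization task.
// const model = recommendedModel.quickAnswers;
```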

Parameter Concepts (Reference)

Understanding these concepts now will help once the parameters become configurable:

Temperature (0.0 - 2.0)

Controls randomness/creativity:

0.0 ━━━━━━━━━━━ 0.7 ━━━━━━━━━━━ 2.0
Deterministic  Balanced       Creative
  • 0.0 - 0.3: Facts, data extraction
  • 0.4 - 0.9: General use
  • 1.0 - 2.0: Creative writing
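
Conceptually, temperature divides the model's raw scores (logits) before they are turned into probabilities: low values sharpen the distribution toward the single most likely token, high values flatten it. The sketch below assumes a plain softmax and only shows the effect; it is not how any particular provider implements sampling.

```typescript
// Conceptual sketch (not SF Explorer code): rescale logits by temperature,
// then softmax. Lower T → sharper (deterministic), higher T → flatter (creative).
function softmaxWithTemperature(logits: number[], temperature: number): number[] {
  const t = Math.max(temperature, 1e-6);        // avoid division by zero at T = 0
  const scaled = logits.map((l) => l / t);
  const maxLogit = Math.max(...scaled);         // subtract max for numerical stability
  const exps = scaled.map((l) => Math.exp(l - maxLogit));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

// const logits = [2.0, 1.0, 0.5];
// softmaxWithTemperature(logits, 0.2); // ≈ [0.99, 0.01, 0.00] → near-deterministic
// softmaxWithTemperature(logits, 2.0); // ≈ [0.48, 0.29, 0.23] → more varied
```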

Max Tokens

Limits response length:

  • 1 token ≈ 4 characters ≈ 0.75 words
  • 100 tokens ≈ 75 words

| Setting | Tokens | Use Case |
| --- | --- | --- |
| Short | 100-300 | Quick answers |
| Medium | 300-800 | Standard responses |
| Long | 800-2000 | Detailed explanations |
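
The character-per-token heuristic above is enough for rough budgeting. A minimal sketch, assuming the ≈4 characters per token and ≈0.75 words per token rules of thumb; a real tokenizer (such as tiktoken) gives exact counts.

```typescript
// Rough budgeting helpers based on the heuristics above; not exact tokenization.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4); // ≈ 4 characters per token
}

function estimateWordsFromTokens(maxTokens: number): number {
  return Math.round(maxTokens * 0.75); // ≈ 0.75 words per token
}

// estimateTokens("Explain this Apex trigger in plain English."); // ≈ 11 tokens
// estimateWordsFromTokens(300);                                  // ≈ 225 words → "Medium" budget
```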

Top P (Nucleus Sampling)

Controls token selection diversity (0.0 - 1.0):

  • 1.0: Consider all tokens (default)
  • 0.9: Top 90% probability mass
  • 0.5: More focused responses
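
To make the idea concrete: nucleus sampling keeps the smallest set of candidate tokens whose cumulative probability reaches Top P and samples only from that set. A conceptual sketch with made-up probabilities, not provider code:

```typescript
// Conceptual sketch of nucleus sampling: keep the most likely tokens until
// their cumulative probability reaches topP, then sample only from that set.
function nucleusFilter(
  probs: { token: string; p: number }[],
  topP: number,
): { token: string; p: number }[] {
  const sorted = [...probs].sort((a, b) => b.p - a.p); // most likely first
  const kept: { token: string; p: number }[] = [];
  let cumulative = 0;
  for (const entry of sorted) {
    kept.push(entry);
    cumulative += entry.p;
    if (cumulative >= topP) break; // enough probability mass covered
  }
  return kept;
}

// nucleusFilter(
//   [{ token: "deal", p: 0.6 }, { token: "lead", p: 0.3 },
//    { token: "case", p: 0.08 }, { token: "quote", p: 0.02 }],
//   0.9,
// ); // → keeps "deal" and "lead"; the long tail is dropped
```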


Last Updated: January 2026