Request Replay & Debugging Tools
Advanced debugging and testing features for analyzing and replaying GenAI requests.

The Problem
Debugging and optimizing AI responses requires the ability to replay requests, experiment with variations, and compare results.
Teams need to:
- 🔄 Replay Historical Requests: Re-run past conversations to verify fixes or test improvements
- 🔬 Experiment Safely: Test prompt variations without affecting production
- 📊 Compare Results: See how different prompts or models respond to the same input
- 🎯 Optimize Iteratively: Refine prompts based on actual results
- 🐛 Debug Systematically: Isolate which part of a request caused an issue
- ⚡ Speed Up Testing: Validate changes without waiting for real user traffic
In short: You need a testing lab to iterate on AI requests and validate improvements before deployment.
How GenAI Explorer Solves This
GenAI Explorer provides complete request replay and debugging with:
✅ Request Replay: Replay any historical GenAI request exactly as it was
- Same prompt, same model, same parameters
- Compare original vs new responses side-by-side
- Verify bug fixes instantly
✅ Prompt Editing: Edit prompts before replaying to test variations
- A/B test different approaches
- Optimize for clarity and efficiency
- Fix issues quickly without deployment
✅ AI-Powered Optimization: Use AI to suggest prompt improvements
- Einstein analyzes your prompt for clarity, specificity, structure, and token efficiency
- Get an improved version automatically
- Learn best practices through suggestions
✅ Debug Query Builder: See exact SQL queries being executed
- Copy queries to run in Data Cloud
- Understand data flow completely
- Troubleshoot data issues efficiently
✅ Side-by-Side Comparison: View original vs replay results with metrics
- Response quality differences
- Token usage changes
- Processing time comparison
Impact: Reduce debugging time by 80%, optimize prompts with confidence, and iterate 10x faster on AI improvements.
Overview
GenAI Explorer provides powerful debugging tools that let you inspect, replay, and optimize AI requests in real time. These features are essential for troubleshooting, performance optimization, and prompt engineering.
Key Features
🔁 Request Replay
Replay any historical GenAI request to:
- Test prompt changes without affecting production
- Compare model outputs over time
- Verify bug fixes and improvements
- Train and validate prompts
✏️ Prompt Editing
Edit prompts before replaying to:
- Fix issues quickly
- A/B test prompt variations
- Optimize for token efficiency
- Improve response quality
🤖 AI-Powered Prompt Optimization
Use AI to automatically suggest prompt improvements, considering:
- Clarity and specificity
- Structure and formatting
- Context and examples
- Token efficiency
🔍 Debug Query Builder
See the exact SQL queries being executed for:
- Conversations and interactions
- Messages and steps
- Session data
- Performance metrics
Request Replay Feature
How It Works
Accessing Historical Requests
Step 1: Navigate to Data Cloud Queries
- Open Atlas Reasoning Engine
- Click Query Lab tab
- Select "Recent Requests Tracking" query
- Execute to see recent GenAI requests
Step 2: Select a Request
- Click on any request from the results
- View complete details:
  - Original prompt (or masked prompt)
  - Model and parameters
  - Token usage
  - Safety scores
  - Timestamp
Step 3: Replay Controls
You'll see three action buttons:
┌─────────────────────────────────────────┐
│ [▶️ Replay] [✏️ Edit] [🤖 Suggest] │
└─────────────────────────────────────────┘
Button Actions
1. Replay Button (▶️)
Purpose: Replay the request exactly as it was
When to Use:
- Verify if a bug still exists
- Compare current model behavior vs historical
- Test infrastructure changes
- Validate fixes
What Happens:
- The original prompt is re-sent to the same model with the same parameters
- A fresh response is generated for comparison
Result:
- Original response (from history)
- New response (from replay)
- Comparison highlighting differences
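For intuition, an exact replay is just the stored request re-sent unchanged, with both responses kept for comparison. The sketch below is a hypothetical illustration — `call_model`, the record fields, and their names are assumptions, not GenAI Explorer's actual API:

```python
# Hypothetical sketch of an exact replay: reuse the stored prompt, model, and
# parameters verbatim; only the response (and its metrics) is new.
from dataclasses import dataclass

@dataclass
class ReplayResult:
    original_response: str   # response captured when the request first ran
    new_response: str        # response produced by the replay
    original_tokens: int
    new_tokens: int

def replay_request(record: dict, call_model) -> ReplayResult:
    """record: a historical GenAI request row; call_model: an assumed model
    client returning (response_text, token_count)."""
    new_response, new_tokens = call_model(
        prompt=record["prompt"],        # same prompt
        model=record["model"],          # same model
        **record["parameters"],         # same parameters (temperature, etc.)
    )
    return ReplayResult(
        original_response=record["response"],
        new_response=new_response,
        original_tokens=record["token_usage"],
        new_tokens=new_tokens,
    )
```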
2. Edit Button (✏️)
Purpose: Modify the prompt before replaying
When to Use:
- Test prompt variations
- Fix known issues
- Add context or constraints
- Optimize for better results
What Happens:
- Prompt becomes editable
- Make your changes
- Click Replay to test
- Click Undo to revert
Example Workflow:
Original Prompt:
"Tell me about this customer"
Edited Prompt:
"Provide a concise summary of this customer including:
- Account status
- Recent orders (last 30 days)
- Open cases
- Renewal date"
→ Click Replay to test improved prompt
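Using the same hypothetical `replay_request` helper sketched above, an edit-then-replay only swaps in the new prompt; the model and parameters stay untouched, so the comparison isolates the prompt change:

```python
# Hypothetical edit-and-replay: only the prompt changes, everything else is reused.
edited = dict(record)  # copy of the historical request
edited["prompt"] = (
    "Provide a concise summary of this customer including:\n"
    "- Account status\n"
    "- Recent orders (last 30 days)\n"
    "- Open cases\n"
    "- Renewal date"
)
result = replay_request(edited, call_model)
```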
3. Suggest Button (🤖)
Purpose: Use AI to suggest prompt improvements
When to Use:
- Prompt is unclear or vague
- Want to optimize token usage
- Need better structure
- Seeking best practices
What Happens:
- Your prompt is wrapped in a prompt-engineering meta-prompt and sent to the AI
- An improved version of the prompt is returned automatically
Meta-Prompt Used:
You are an AI prompt engineering expert. Analyze the following prompt
and suggest improvements to make it clearer, more specific, and more
effective. Consider:
- Clarity and specificity
- Structure and formatting
- Context and examples
- Token efficiency
Original prompt:
"""
[Your prompt here]
"""
Provide an improved version of the prompt. Only return the improved
prompt text, without explanations.
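In code, the Suggest flow boils down to wrapping your prompt in that meta-prompt and asking a model for the improved version. The snippet below is a hedged sketch reusing the assumed `call_model` client from the earlier replay example, not GenAI Explorer's actual implementation:

```python
# Hypothetical Suggest flow: wrap the original prompt in the meta-prompt above,
# ask a model for an improved version, and return the suggestion text.
META_PROMPT = '''You are an AI prompt engineering expert. Analyze the following prompt
and suggest improvements to make it clearer, more specific, and more effective. Consider:
- Clarity and specificity
- Structure and formatting
- Context and examples
- Token efficiency

Original prompt:
"""
{original_prompt}
"""

Provide an improved version of the prompt. Only return the improved
prompt text, without explanations.'''

def suggest_improvement(original_prompt: str, model: str, call_model) -> str:
    """call_model is the same assumed client returning (response_text, token_count)."""
    suggestion, _tokens = call_model(
        prompt=META_PROMPT.format(original_prompt=original_prompt),
        model=model,
    )
    return suggestion.strip()
```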
Example Transformation:
Before (Vague):
Help with order
After AI Suggestion (Specific):
Please provide the following information about the customer's order:
1. Order Status: Current state (e.g., Processing, Shipped, Delivered)
2. Tracking Number: If available and order has shipped
3. Estimated Delivery: Expected delivery date and time window
4. Items Ordered: List of products with quantities
5. Order Total: Final amount including tax and shipping
Format the response in a clear, customer-friendly manner.
Comparison View
After replaying, see a side-by-side comparison:
┌─────────────────────────────────────────────────────────────┐
│ Original Response │ New Response │
│ (Historical) │ (Replayed) │
├────────────────────────────┼────────────────────────────────┤
│ Your order is on the way │ Your order #12345 is currently │
│ │ in transit. Tracking: 1Z999AA │
│ │ Estimated delivery: Friday │
└────────────────────────────┴────────────────────────────────┘
📊 Metrics:
Original: 250 tokens | 1.2s
New: 280 tokens | 1.1s
🎯 Changes Detected:
✓ More specific information provided
✓ Tracking number included
✓ Delivery estimate added
⚠️ Token usage increased by 12%
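Those metrics are simple deltas. A minimal sketch of how the token-usage change shown above (250 → 280 tokens) works out to +12%:

```python
# Minimal sketch of the comparison metrics: relative token change and latency delta.
def compare_metrics(original_tokens, new_tokens, original_seconds, new_seconds):
    token_change_pct = (new_tokens - original_tokens) / original_tokens * 100
    return {
        "token_change_pct": round(token_change_pct),                # 250 -> 280 gives +12%
        "time_delta_s": round(new_seconds - original_seconds, 2),   # 1.2s -> 1.1s gives -0.1
    }

print(compare_metrics(250, 280, 1.2, 1.1))
# {'token_change_pct': 12, 'time_delta_s': -0.1}
```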
Debug Query Builder
Overview
See the exact SQL being executed behind the scenes for complete transparency and debugging.
Accessing Debug Queries
In Any Session View:
- Look for the "🔽 Hide Debug Queries" button at the top
- Click to toggle query visibility
- See queries for current tab (Overview, Interactions, Messages, Steps)
Debug Panel Components: