
AI Test Summary

Shiplight AI automatically generates AI-powered summaries of failed test results. This feature helps you quickly understand what went wrong, identify root causes, and get actionable recommendations for fixing test failures.

Table of Contents

  1. Overview
  2. Accessing AI Summaries
  3. What's Included in AI Summaries
  4. Visual Analysis
  5. Understanding the Summary
  6. Working with Summaries
  7. Best Practices
  8. Troubleshooting

Overview

AI Test Summary uses advanced AI to analyze failed test results and provide:

  • Instant Root Cause Identification - AI analyzes test steps, errors, and screenshots to pinpoint exactly what failed
  • Human-Readable Explanations - Technical details translated into clear, actionable descriptions
  • Visual Context Analysis - Screenshots are analyzed to identify UI issues, layout problems, and visual regressions
  • Smart Categorization - Automatic tagging helps you categorize and track different types of failures
  • Time Savings - Skip manual investigation and get straight to the root cause

Accessing AI Summaries

For failed test results, AI Summaries are automatically generated when you first view the test details:

  1. Navigate to Results in the main menu
  2. Click on a run with failed tests
  3. Click on a failed test to open the details modal
  4. The AI Summary panel appears in the Overview tab
  5. The summary generates automatically on first view

The summary is cached, so subsequent views load instantly without regeneration.

What's Included in AI Summaries

The AI Summary provides a comprehensive text analysis that includes:

Root Cause Analysis:

  • Primary reason for the failure
  • Contributing factors
  • Technical details about the error

Expected vs Actual Behavior:

  • What the test was trying to accomplish
  • What actually happened
  • The discrepancy between them

Relevant Context:

  • Previous steps that may have contributed
  • State of the application at failure point
  • Environment or timing considerations

Recommendations:

  • Suggested fixes for the issue
  • Potential test improvements
  • Follow-up actions to investigate

What You'll See:

  • Displayed in a scrollable text area within the AI Summary panel
  • Accessible immediately when viewing failed test results
  • Plain text format with preserved line breaks and structure

Visual Analysis

When a screenshot is available at the point of failure, AI performs multimodal analysis:

What AI Can Detect in Screenshots:

  • Missing UI Elements - Buttons, forms, or components that should be visible
  • Layout Problems - Misaligned elements, overlapping content, responsive issues
  • Visual Regressions - Color changes, style differences, visual bugs
  • Loading States - Spinners, skeleton screens, or incomplete page loads
  • Error Messages - Visible error dialogs or alert messages
  • State Indicators - Disabled buttons, selected tabs, active navigation

How It Enhances Analysis:

  • Confirms whether elements are actually visible vs just in the DOM
  • Identifies visual issues that logs alone can't reveal
  • Provides context about page state at failure
  • Helps differentiate between code errors and UI problems
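In Playwright terms (assuming a Playwright-based test, as referenced elsewhere in these docs), the visible-vs-just-in-the-DOM distinction can be checked directly; the selector below is hypothetical:

```typescript
import { expect, type Page } from '@playwright/test';

// A locator can be attached (present in the DOM) without being visible.
// Screenshot analysis helps tell these states apart.
async function checkSubmitButton(page: Page) {
  const submit = page.locator('.submit-btn'); // hypothetical selector
  await expect(submit).toBeAttached(); // exists in the DOM
  await expect(submit).toBeVisible();  // actually rendered on screen
  await expect(submit).toBeEnabled();  // not grayed out / disabled
}
```

Each assertion can fail independently, which is exactly the ambiguity the screenshot analysis resolves.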

Example Analysis:

```
Visual Analysis: The screenshot shows the "Submit Order" button is present
but appears disabled (grayed out). The shopping cart total displays $0.00,
suggesting the cart is empty, which would explain why the button is disabled.
```

Understanding the Summary

Root Cause Analysis

The most critical section identifying why the test failed:

Example:

```markdown
## Root Cause Analysis

The test failed at step 12 when attempting to click the "Submit Order" button.
The element could not be found on the page because:

1. The button's CSS selector changed from `.submit-btn` to `.order-submit-btn`
2. The previous step (adding item to cart) appears to have failed silently
3. The page redirected to an error state before the button could be clicked

The primary cause is a selector change in the application code.
```

Expected vs Actual Behavior

Clarifies what should have happened versus what did:

Example:

```markdown
## Expected vs Actual Behavior

**Expected:**

- Button with text "Submit Order" should be visible and clickable
- Clicking should proceed to payment page
- Cart should contain 1 item with total of $99.99

**Actual:**

- Button not found in DOM
- Page shows "Your cart is empty" message
- Cart total displays $0.00
```

Recommendations

Actionable suggestions for resolution:

Example:

```markdown
## Recommendations

1. **Immediate Fix:** Update the selector in step 12 to use `.order-submit-btn`
2. **Investigate:** Check why the "Add to Cart" action in step 10 didn't persist
3. **Enhance Test:** Add an assertion after step 10 to verify cart contains items
4. **Consider:** Use AI Mode for this step to automatically adapt to selector changes
```
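The "Enhance Test" recommendation above, sketched as a Playwright assertion; the selectors `.cart-item` and `.cart-total` are illustrative, not from a real application:

```typescript
import { test, expect } from '@playwright/test';

// Sketch of the suggested guard after the "Add to Cart" step.
// Selectors and URL are hypothetical; substitute your application's own.
test('checkout keeps the cart populated before submit', async ({ page }) => {
  await page.goto('https://example.com/cart'); // placeholder URL
  // Fail fast here instead of at the later "Submit Order" step:
  await expect(page.locator('.cart-item')).toHaveCount(1);
  await expect(page.locator('.cart-total')).toHaveText('$99.99');
});
```

Failing here surfaces the empty-cart problem at its source, so the failure (and its AI Summary) points at the step that actually went wrong.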

Working with Summaries

Viewing Summaries

AI Summaries appear in the test result details:

Overview Tab - AI Summary Panel

  • Appears as a collapsible panel in the test result modal
  • Shows detailed analysis of the test failure
  • Automatically expands when summary is available
  • Click the panel header to collapse/expand

Sharing Summaries

AI Summaries can be shared with your team:

  • Direct Link - Share the URL to the test result, which includes the AI summary
  • Copy Text - Select and copy the description text from the AI Summary panel
  • Issue Tracking - Reference the summary when creating issues for failed tests
  • Team Communication - Paste summary content into Slack, email, or other communication tools

Best Practices

1. Review Summaries Critically

  • AI provides intelligent analysis but isn't infallible
  • Verify recommendations before implementing
  • Use summaries as a starting point for investigation

2. Review Complete Context

  • Read the full description for comprehensive understanding
  • Don't rely solely on automated analysis
  • Look for patterns across multiple test failures

3. Combine with Other Tools

  • Review the AI Summary alongside:
    • Test execution video
    • Playwright trace viewer
    • Detailed logs and console output
  • AI Summary provides the "why", other tools show the "how"

4. Learn from Patterns

  • Notice recurring tags or root causes
  • Use insights to improve test stability
  • Identify application areas that need attention

5. Keep Test Context Rich

  • Clear test names help AI provide better analysis
  • Descriptive step descriptions improve summary quality
  • Well-structured tests yield more accurate summaries
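For Playwright-based tests, one way to keep that context rich is descriptive test titles and `test.step` blocks; the names here are purely illustrative:

```typescript
import { test } from '@playwright/test';

// Descriptive titles and step names become part of the context
// available when a failure is summarized.
test('checkout: submit an order with one item in the cart', async ({ page }) => {
  await test.step('add the $99.99 widget to the cart', async () => {
    // ... interactions for adding the item
  });
  await test.step('submit the order from the cart page', async () => {
    // ... interactions for submitting
  });
});
```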

Troubleshooting

Summary Not Generating

If the AI Summary doesn't appear:

  • Check that the test actually failed (summaries are generated only for failures)
  • Ensure you have a stable internet connection
  • Try refreshing the page
  • Contact support if the issue persists

Summary Seems Inaccurate

If the analysis doesn't match your understanding:

  • Review the screenshot and logs yourself
  • Cross-reference with test execution video and trace viewer
  • Provide feedback to your team admin
  • Use the summary as one data point, not the only source

Missing Visual Analysis

If no screenshot analysis is included:

  • Check if a screenshot was captured at failure point
  • Screenshots may not be available for all failure types
  • Verify test configuration includes screenshot capture
  • Some failures occur before a screenshot can be taken
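If your suite runs on Playwright, failure screenshots are controlled by the test configuration; a minimal sketch, assuming a standard Playwright setup:

```typescript
// playwright.config.ts — minimal sketch for failure artifacts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  use: {
    screenshot: 'only-on-failure', // capture a screenshot when a test fails
    trace: 'retain-on-failure',    // keep traces for the trace viewer too
  },
});
```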

Slow Generation

If summary generation is taking a long time:

  • Wait up to 30 seconds for complex analyses
  • Check your internet connection stability
  • Large screenshots may take longer to process
  • Refresh the page if it times out

Released under the MIT License.