Creating a Test with Shiplight AI
This comprehensive guide explains how to create and manage test cases using Shiplight AI's powerful test automation platform. Before proceeding, ensure you have configured your Environment and Test Accounts in Settings.
Table of Contents
- What is a Test Case
- Creating a Test Case
- Working with the Test Editor
- Managing Test Configurations
- Running and Debugging Tests
- Best Practices
1. What is a Test Case
A test case in Shiplight AI is an automated sequence of actions and verifications that validate your application's functionality. Each test case consists of:
- Test Flow: A structured series of steps that define what the test should do
- Configuration: Environment and test account settings
- Assertions: Verification points that ensure expected behavior
- Variables: Dynamic data that can be used throughout the test
Test cases can be created manually, generated by AI, or a combination of both approaches.
2. Creating a Test Case
Accessing Test Creation
There are multiple ways to create a new test case:
- Click the Create Test button in the top right of the Tests page
- Use the quick create icon (✏️) in the navigation bar
- Start from the Agent Tasks page when working with batch generation
Basic Configuration
When creating a test case, you'll need to configure:
Test Title
- Defaults to "Test Case - [Current Date]"
- Use descriptive names that clearly indicate the test's purpose
- Example: "User Login and Dashboard Navigation"
Starting URL
- Select your Environment from the dropdown
- Enter the URL path (e.g., /login, /dashboard)
- The system automatically constructs the full URL
⚠️ Note: If no environments appear, configure them in Settings first
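The URL-assembly behavior described above can be sketched as follows. This is an illustrative assumption of how a base URL and path are combined, not Shiplight's actual implementation; the function name `build_start_url` is hypothetical.

```python
# Hypothetical sketch: assembling the full starting URL from an
# environment's base URL plus the path entered in the test form.
# Not Shiplight code -- names here are illustrative only.
from urllib.parse import urljoin

def build_start_url(environment_base_url: str, path: str) -> str:
    """Join the environment base URL with the entered URL path."""
    # Normalize slashes so "/login" and "login" behave the same way.
    return urljoin(environment_base_url.rstrip("/") + "/", path.lstrip("/"))

print(build_start_url("https://staging.example.com", "/login"))
# → https://staging.example.com/login
```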
Test Account Selection
Choose how test accounts are assigned:
- None: No authentication required
- Any: System randomly selects from available accounts for this environment
- Specific: Manually select one or more test accounts
- If multiple accounts are selected, one will be randomly chosen at test run time
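The three assignment modes can be summarized in a short sketch. This is a simplified model of the behavior described above (the function and mode strings are assumptions for illustration, not the platform's API):

```python
# Illustrative model of test-account assignment, per the modes above:
# "none" skips authentication, "any" picks randomly from the environment's
# accounts, "specific" picks randomly from the manually selected accounts.
import random
from typing import Optional

def pick_test_account(mode: str,
                      selected: list,
                      available: list) -> Optional[str]:
    if mode == "none":
        return None                    # no authentication required
    pool = available if mode == "any" else selected
    return random.choice(pool)         # one account chosen at run time
```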
Disable Auto Login
When a test account is selected (Any or Specific), an optional Disable Auto Login checkbox appears:
- Unchecked (default): Test will automatically log in using the selected test account credentials
- Checked: Test starts without automatic login
- Useful when you want to manually control the login process
Platform Support
Shiplight AI currently supports:
Desktop Chrome: Full browser automation for web applications
- Default platform for all web testing
- Complete Playwright automation capabilities
Mobile Chromium on Android: Mobile web testing
- Tests mobile web applications on Android devices
- Uses Chromium browser on Android
- Mobile-specific viewport and touch interactions
Platform selection affects test execution and available interactions. Both the agent login flow and knowledge base are platform-specific since they are tied to the UI, helping the AI generate appropriate tests for each platform.
Test Creation Modes
Shiplight AI offers three creation modes:
1. Single Test Creation (AI-Generated)
Create a single test case using AI:
- Provide a Goal describing what you want to test
- AI automatically generates the test steps
- Best for specific test scenarios with clear objectives
- Quick way to create comprehensive tests
2. Batch Test Creation (AI-Generated)
Generate multiple related tests using a two-step process:
Step 1: Generate Test Descriptions
- Agent Explore: AI agent explores your application and generates test descriptions (title + goal pairs)
- CSV Upload: Upload a CSV file containing test titles and goals
- Review and adjust the generated test descriptions
- Add, remove, or modify test descriptions as needed
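For the CSV Upload option, a file of title/goal pairs might look like the following. The exact column headers required by the uploader are an assumption here; check the upload dialog for the expected format.

```csv
title,goal
"User Login","Log in with valid credentials and verify the dashboard loads"
"Password Reset","Request a password reset and verify the confirmation message appears"
"Cart Update","Change the quantity in the shopping cart and verify the total price updates"
```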
Step 2: Batch Test Generation
- Once descriptions are finalized, create all tests in parallel
- Each test uses the same AI agent as Single Test Creation
- All tests are generated simultaneously for efficiency
Best for:
- Comprehensive feature coverage
- Efficient test suite creation
- Exploring new application features
3. Manual Test Creation (Hand-Written)
Write test steps manually with AI assistance:
- Leave the Goal field empty when creating the test
- Manually add each test step in the Test Editor
- Full control over every action and assertion
- Useful when you have very specific test requirements
AI-Powered Generation
Goal Field
Enter a natural language description of what you want to test:
Examples:
- "Complete the checkout process and verify the order confirmation is displayed"
- "Create a new user account as admin, then delete it and verify it's removed from the user list"
- "Apply multiple search filters and verify the results update correctly"
- "Change the quantity in the shopping cart, verify the total price updates, and test that invalid email shows an error message"
Tips:
- Describe the actions to perform, and include verification as part of the flow
- Be specific about the actions and expected outcomes
- Include assertions and verification requirements within the goal description
- Include relevant business logic
- Leave blank to manually create all steps
3. Working with the Test Editor
After creating a test, you'll use the Test Editor to build and refine your test steps. The Test Editor provides a visual interface for creating test flows with support for natural language, code, and various action types.
For comprehensive information about using the Test Editor, see:
- Test Editor Overview - Interface, workflow, and key features
- Actions & Steps - All available action types and how to use them
- AI Features - AI-powered test creation and maintenance
- Debugging Tools - Interactive debugging and troubleshooting
- Variables - Using dynamic values in your tests
- Functions - Creating reusable test logic
4. Managing Test Configurations
Test Status
Control test execution:
- Draft: Development mode; won't run on schedules, but can still be run manually in the cloud
- Active: Ready for automated execution
Multiple Configurations
Run the same test across different setups:
- Navigate to the test's Settings tab
- Click Add Configuration
- Select environment and accounts
- Save configuration
Use Cases:
- Cross-browser testing
- Multi-environment validation
- Different user role testing
Labels and Organization
Organize tests effectively:
Adding Labels:
- Click + in the Labels section
- Enter label name
- Use for filtering and grouping
5. Running and Debugging Tests
After creating your test, you can run it manually for debugging or schedule it for automated execution.
Manual Execution:
- Interactive Debugging - Step through test one action at a time
- Full Test Run - Execute entire test from start to finish
Scheduled Execution:
- Add tests to Test Suites
- Create Test Plans with schedules
- Automatic execution at specified times
For comprehensive information on running tests, see:
- Debugging Tools - Interactive debugging features
- Test Plans & Schedules - Automated execution
- Results - Analyzing test outcomes
6. Best Practices
Test Design
Keep Tests Focused:
- One feature per test
- Clear success criteria
- Minimal dependencies
Use Descriptive Names:
- Action-based titles
- Include feature area
- Indicate expected outcome
Handle Dynamic Content:
- Use stable locators (e.g., test IDs or accessible labels) rather than brittle selectors
- Use aiAction for elements with dynamic IDs
- Add wait or waitUntil actions where timing matters
- Add verifications between steps
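The "prefer stable locators over dynamic IDs" advice can be sketched as a simple heuristic. This is a hedged illustration, not Shiplight's selector logic; the function name and the digit-based check for auto-generated IDs are assumptions.

```python
# Hedged sketch (not Shiplight code): preferring stable locators
# over dynamic, auto-generated IDs when targeting an element.
def choose_selector(attrs: dict) -> str:
    """Pick the most stable CSS selector available for an element's attributes."""
    if "data-testid" in attrs:                 # explicit test hook: most stable
        return '[data-testid="{}"]'.format(attrs["data-testid"])
    if "aria-label" in attrs:                  # accessible name: fairly stable
        return '[aria-label="{}"]'.format(attrs["aria-label"])
    if "id" in attrs and not any(c.isdigit() for c in attrs["id"]):
        return "#" + attrs["id"]               # distrust IDs with digits (likely generated)
    return ""                                  # fall back to aiAction / AI matching

print(choose_selector({"id": "btn-93817", "data-testid": "submit"}))
# → [data-testid="submit"]
```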