Before going live with your AI agent, we recommend the following preparation steps to ensure a smooth, high-quality test run.
1. Review Your Ticket Flow
When a new ticket is created in your CX platform, the AI follows a structured flow from classification → interception → resolution or escalation. Understanding this flow helps your team know exactly what to expect.
Step 1: Use Case Classification — Determine Whether to Skip or Intercept
When a ticket is created, our system classifies the conversation by use case to determine whether the AI should intercept or skip it.
When the use case is enabled → the AI intercepts
When the use case is disabled → the AI skips
The conversation will also be skipped if an email bypass is configured. A simplified sketch of this decision appears below.
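To make the decision concrete, here is a minimal sketch in TypeScript of the skip-or-intercept logic described above. The function and field names (decideInterception, classifyUseCase, bypassedEmails) are illustrative assumptions, not the actual platform code.

```typescript
// A minimal sketch of the classification step described above.
// All names here are illustrative assumptions, not the real platform API.
type Decision = "intercept" | "skip";

interface Ticket {
  id: string;
  subject: string;
  body: string;
  fromEmail: string;
}

function decideInterception(
  ticket: Ticket,
  enabledUseCases: Set<string>,
  bypassedEmails: Set<string>,
): Decision {
  // Skip entirely if an email bypass matches the sender.
  if (bypassedEmails.has(ticket.fromEmail)) return "skip";

  // Classify the conversation by use case (e.g. "order-status", "returns").
  const useCase = classifyUseCase(ticket);

  // Intercept only when the detected use case is enabled in your configuration.
  return enabledUseCases.has(useCase) ? "intercept" : "skip";
}

// Placeholder classifier; the real classification is performed by the AI platform.
function classifyUseCase(ticket: Ticket): string {
  return ticket.subject.toLowerCase().includes("return") ? "returns" : "order-status";
}
```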
Step 2: Interception — Actively Assisting the Customer
If intercepted, the AI begins reading and responding to the customer using your brand’s SOPs. The conversation is tagged with shipped-ai-intercepted.
The AI will continue handling the conversation until it reaches a resolution or requires escalation.
Step 3: Resolution or Escalation
Resolved
If the AI successfully assisted the customer or the conversation ended due to inactivity, the following tag is added:
shipped-ai-resolved
This indicates the conversation was fully resolved by the AI.
Escalated
If the AI determines it cannot assist the customer further or is instructed to escalate, it escalates the ticket to your team and applies:
shipped-ai-escalated
Upon escalation:
The AI agent will be unassigned from the conversation
The ticket will be moved to "Open" status
An escalation summary will be added to the conversation as an internal note
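For illustration, here is a hedged sketch of those escalation steps. The Helpdesk interface and its methods are hypothetical stand-ins for your CX platform's API, not real endpoints.

```typescript
// An illustrative sketch of the escalation steps listed above. The Helpdesk
// interface and its methods are hypothetical stand-ins for your CX platform's
// API (Gorgias, Zendesk, Freshdesk, or Dixa), not real endpoints.
interface Helpdesk {
  addTag(ticketId: string, tag: string): void;
  unassign(ticketId: string): void;
  setStatus(ticketId: string, status: "open" | "pending" | "closed"): void;
  addInternalNote(ticketId: string, note: string): void;
}

function escalateToTeam(helpdesk: Helpdesk, ticketId: string, summary: string): void {
  helpdesk.addTag(ticketId, "shipped-ai-escalated"); // mark the hand-off for reporting
  helpdesk.unassign(ticketId);                       // the AI agent steps back
  helpdesk.setStatus(ticketId, "open");              // return the ticket to the human queue
  helpdesk.addInternalNote(ticketId, `AI escalation summary:\n${summary}`);
}
```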
2. Understand How Use Case Changes Trigger Escalations
During a conversation, the customer's intent (use case) may shift.
If the new use case is not currently enabled in your AI configuration:
➡️ The system will automatically escalate the conversation to your team.
This prevents the AI from responding outside of its approved coverage and helps maintain quality and compliance. We recommend reviewing your enabled use cases prior to testing so your team knows which use cases will be handled by the AI.
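As a rough sketch of this behavior, the snippet below re-classifies each inbound customer message and escalates when the detected use case is not enabled; handleCustomerMessage, classifyMessageUseCase, and generateReply are hypothetical names used only for illustration.

```typescript
// A minimal sketch, assuming the AI re-classifies each inbound customer
// message; these helpers are illustrative, not the actual product API.
function handleCustomerMessage(
  message: string,
  enabledUseCases: Set<string>,
  escalate: (reason: string) => void,
  reply: (text: string) => void,
): void {
  const useCase = classifyMessageUseCase(message);

  if (!enabledUseCases.has(useCase)) {
    // The customer's intent shifted outside approved coverage: hand off to a human.
    escalate(`Use case "${useCase}" is not enabled for AI handling.`);
    return;
  }

  reply(generateReply(message, useCase));
}

// Placeholder helpers standing in for the AI platform's own classification and drafting.
function classifyMessageUseCase(message: string): string {
  return message.toLowerCase().includes("refund") ? "refunds" : "order-status";
}
function generateReply(message: string, useCase: string): string {
  return `Thanks for reaching out about your ${useCase} question. ...`;
}
```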
3. Become Familiar With AI Tags
Our AI automatically applies specific tags to help your team understand behavior and monitor performance. Common tags include:
shipped-ai-intercepted – AI is actively interacting with the customer.
shipped-ai-resolved – AI has resolved the conversation.
shipped-ai-escalated – AI has escalated the conversation to a human agent.
These tags make it easier to filter, measure, and QA conversations during the test run.
4. Create Monitoring Views in Your CX Platform
To streamline oversight during the test phase, we recommend setting up dedicated views such as:
All AI-Intercepted Conversations (tag = shipped-ai-intercepted). If you wish to include only intercepted tickets in this view, you can configure the filter to exclude conversations tagged with shipped-ai-resolved and shipped-ai-escalated.
AI-Resolutions Only (tag = shipped-ai-resolved)
AI-Escalations (tag = shipped-ai-escalated)
These views make it far easier for your CX team to monitor conversations touched by the AI and provide actionable feedback.
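As an illustration of how these views map to tag filters, the sketch below assumes each ticket exposes its tags as an array of strings; the TaggedTicket shape is illustrative, not a specific platform's data model.

```typescript
// A sketch of the view filters described above, assuming each ticket exposes
// its tags as an array of strings; this shape is illustrative only.
interface TaggedTicket {
  id: string;
  tags: string[];
}

// "All AI-Intercepted Conversations": intercepted, not yet resolved or escalated.
function isActiveAiConversation(ticket: TaggedTicket): boolean {
  return (
    ticket.tags.includes("shipped-ai-intercepted") &&
    !ticket.tags.includes("shipped-ai-resolved") &&
    !ticket.tags.includes("shipped-ai-escalated")
  );
}

// "AI-Resolutions Only" and "AI-Escalations" views filter on a single tag each.
const isAiResolved = (ticket: TaggedTicket) => ticket.tags.includes("shipped-ai-resolved");
const isAiEscalated = (ticket: TaggedTicket) => ticket.tags.includes("shipped-ai-escalated");
```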
If you need help building these views in Gorgias, Zendesk, Freshdesk, or Dixa, we can assist.
5. Provide Feedback During the Test Run
Importance of feedback
During the testing and onboarding process, your feedback is essential for fine-tuning performance:
Flag incorrect responses or missed context.
Share examples of high-quality or preferred responses for consistency.
Identify gaps in coverage (e.g., use cases you’d like to enable).
Note any tone or brand-voice adjustments.
This feedback helps us continuously train and optimize your AI agent.
Two methods to submit feedback
You can submit feedback in either of two ways:
If you need help preparing for your AI test run—whether reviewing ticket flow, configuring tags, or creating monitoring views—please contact support@cavalry.ai.