# Service-Disposition Workbook — Assistant Dispositions Dashboard

The Service-Disposition workbook is the second workbook in the Conversations gallery. It focuses exclusively on Assistant Disposition — the outcome classification that the AI assistant itself assigns to each conversation based on whether it achieved its designed goal. This is fundamentally different from Connection Disposition (Section 18), which reflects how the technical session ended.

The workbook sub-tabs are: Dashboard | Conversations by Assistant Dis... | Conversations by Date per Assi... | Conversations List | Details Page.

<figure><img src="/files/ROZYrCHD7MRI2G52m4E5" alt=""><figcaption><p align="center"><em>Figure 1: Assistant Dispositions Dashboard — Two-Section Overview (03/08/2026 – 04/09/2026)</em></p></figcaption></figure>

### 1. Assistant Disposition — Definition and Values

The Assistant Disposition reflects the AI assistant’s own classification of whether it accomplished its designed purpose for each conversation. This is configured by conversation designers in the assistant flow and is the primary measure of AI assistant quality and effectiveness.

<table data-header-hidden><thead><tr><th width="191.77777099609375" valign="top"></th><th valign="top"></th></tr></thead><tbody><tr><td valign="top">Assistant Disposition</td><td valign="top">Definition</td></tr><tr><td valign="top">Success</td><td valign="top">The assistant successfully completed its designed goal. The conversation followed the intended flow and reached a resolution point. This is the target outcome for all conversations.</td></tr><tr><td valign="top">Failure</td><td valign="top">The assistant explicitly classified the conversation as a failure — the user’s intent could not be resolved, a required goal was not achieved, or the conversation exited via a failure path defined in the flow design.</td></tr><tr><td valign="top">Null</td><td valign="top">No disposition was recorded or assigned. This typically occurs when a conversation ends before reaching a disposition-setting point in the flow — for example, very short conversations that drop immediately, or flows where disposition assignment is not yet configured.</td></tr><tr><td valign="top">AbandonSuccess</td><td valign="top">The user abandoned the conversation (disconnected, navigated away, or timed out) after the assistant had already achieved its primary goal. Technically a success from the AI perspective, but the session ended without a clean closure.</td></tr></tbody></table>
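The four disposition values above form a small, closed taxonomy, which can be modeled as an enum. The sketch below is illustrative only (the class and property names are not part of the product); the `goal_achieved` grouping follows the table's note that AbandonSuccess is "technically a success from the AI perspective":

```python
from enum import Enum


class AssistantDisposition(Enum):
    """The four Assistant Disposition values described in the table above."""
    SUCCESS = "Success"
    FAILURE = "Failure"
    NULL = "Null"
    ABANDON_SUCCESS = "AbandonSuccess"

    @property
    def goal_achieved(self) -> bool:
        # Success and AbandonSuccess both mean the assistant reached its
        # designed goal; AbandonSuccess merely lacks a clean session closure.
        return self in (AssistantDisposition.SUCCESS,
                        AssistantDisposition.ABANDON_SUCCESS)


print(AssistantDisposition("AbandonSuccess").goal_achieved)  # True
print(AssistantDisposition("Failure").goal_achieved)         # False
```

Grouping by `goal_achieved` rather than raw disposition is one way to report an overall "goal met" rate that credits abandoned-but-successful conversations.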

### 2. Section 1 — Conversations by Assistant (Summary Horizontal Bar)

The upper section displays a horizontal bar chart showing the total count of conversations per Assistant Disposition across the entire selected period. From the visible data:

<table data-header-hidden><thead><tr><th valign="top"></th><th valign="top"></th></tr></thead><tbody><tr><td valign="top">Assistant Disposition</td><td valign="top">Conversation Count</td></tr><tr><td valign="top">Success (gold/olive)</td><td valign="top">11</td></tr><tr><td valign="top">Failure (light green)</td><td valign="top">9</td></tr><tr><td valign="top">Null (green)</td><td valign="top">6</td></tr><tr><td valign="top">AbandonSuccess (blue/gray)</td><td valign="top">1</td></tr></tbody></table>

Total conversations: 27 (consistent with the Conversations workbook). The Success rate is 11/27 = 40.7%. The Failure rate is 9/27 = 33.3%. Null accounts for 6/27 = 22.2%, and AbandonSuccess is 1/27 = 3.7%.

⚠ Note: In a staging environment with smoke test assistants and low volumes, a Failure rate of 33% is expected and does not reflect production assistant performance. In production environments with fully configured assistants, Success rates of 70–90%+ are typical targets.
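The rate arithmetic above can be reproduced with a quick calculation (counts taken from the summary table; percentages rounded to one decimal place):

```python
# Disposition counts from the summary bar chart for the selected period.
counts = {
    "Success": 11,
    "Failure": 9,
    "Null": 6,
    "AbandonSuccess": 1,
}

total = sum(counts.values())  # 27 conversations, matching the Conversations workbook

# Percentage share of each disposition, rounded to one decimal place.
rates = {name: round(100 * n / total, 1) for name, n in counts.items()}

print(total)   # 27
print(rates)   # {'Success': 40.7, 'Failure': 33.3, 'Null': 22.2, 'AbandonSuccess': 3.7}
```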

### 3. Section 2 — Conversations by Date per Assistant (Time-Series)

The lower section displays a multi-line time-series chart showing daily conversation volumes broken down by individual assistant name. Each line represents a single assistant and is color-coded to distinguish it from the others.

From the visible data, two assistant lines are prominent. The gold/olive line peaks sharply at 5 conversations on Wed 3/25/26, then drops to 2 by Thu 3/26/26 and stabilizes at 2 through 4/1/26. The green line starts at 2 on 3/11/26, remains low (around 1) through most of the period, then rises to 4 on Wed 4/1/26. The two lines show an inverse relationship in the latter part of the period, suggesting one assistant's traffic decreased as another's increased.

A dashed average reference line is visible at approximately 1.6 conversations per day, providing a visual baseline for identifying above-average days per assistant.

### 4. Business Value

The Service-Disposition workbook provides the most direct measure of AI assistant quality. While containment tells you whether users were transferred to a human, Assistant Disposition tells you whether the AI actually succeeded at its designed purpose — even before considering containment. An assistant can achieve 100% containment (no transfers) while having a high Failure rate if it simply ends conversations without resolving user intent. Monitoring the Success/Failure ratio over time, and comparing it before and after assistant updates, is the primary mechanism for measuring AI assistant improvement.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.ixhello.com/ix-hello-reporting/premium-reporting/total-conversations-with-contained-and-transferred-dedicated-sub-view/service-disposition-workbook-assistant-dispositions-dashboard.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
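A minimal sketch of constructing such a query URL in Python, using only the standard library (the question text is a hypothetical example; the base URL is the page URL shown above):

```python
from urllib.parse import quote

BASE_URL = (
    "https://docs.ixhello.com/ix-hello-reporting/premium-reporting/"
    "total-conversations-with-contained-and-transferred-dedicated-sub-view/"
    "service-disposition-workbook-assistant-dispositions-dashboard.md"
)


def build_ask_url(question: str) -> str:
    """Return the page URL with the question percent-encoded into the `ask` parameter."""
    return f"{BASE_URL}?ask={quote(question)}"


url = build_ask_url("What does the AbandonSuccess disposition mean?")
print(url)
```

Any HTTP client (for example, `urllib.request.urlopen(url)`) can then perform the GET; the question should be self-contained, since only the encoded string reaches the documentation service.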
