# Locations-Disposition-Duration Workbook — Geographic & Duration Analytics

The Locations-Disposition-Duration workbook is the fourth workbook in the Conversations gallery. It provides multi-dimensional analysis that combines three key analytical dimensions — where conversations originate (Location and DNIS), how they end (Connection Disposition and Assistant Disposition), and how long they last (Max, Average, and Min Duration). Together, these dimensions allow teams to answer not just “how many conversations?” but “where are they coming from, how are they ending, and how efficient are they?”

The workbook contains eight sub-views:

- Conversations by Assistant (Dashboard)
- Conversations by Location
- Conversations by DNIS
- Conversations by Date per Disposition
- Max, Avg and Min Durations
- Conversations by Date, Location and DNIS List
- Conversations by Assistant and Connection Disposition List
- Details Page

⚠ Note: All sub-views in this workbook are currently displaying empty charts in the STG environment because Channel and Assistant filters are set to (None). These views are fully operational in production or when filters are explicitly set. The documentation below describes the full capability of each sub-view based on its filter configuration and analytical design.

### 1. Sub-View Reference — Full Capability Documentation

The following sections document all eight sub-views, their available filters, the analytical question each answers, and the business value each delivers.

### 1.1 Conversations by Assistant (Dashboard)

This is the default landing view for the workbook. It displays conversation volume broken down by the AI assistant that handled each conversation, providing an immediate view of traffic distribution across the assistant portfolio.

Available filters: Date Selection Type, Start Date, End Date, Channel, Stage, Assistant, Business Day, and a secondary Location dimension filter.

What this view answers: Which assistants are handling the most conversations? Is volume distributed evenly across assistants, or is traffic concentrated on a small number? How does assistant-level volume trend over the selected period? By filtering to a single assistant, teams can isolate its traffic pattern and compare against the overall portfolio baseline.

Business value: Assistant traffic distribution is foundational for capacity planning and prioritization. High-volume assistants represent the highest leverage for optimization — even a small improvement in their performance (e.g., reducing handle time by 10%) has the largest operational impact. Low-volume assistants may indicate underutilization, misconfiguration, or a use case that has not been communicated to users.

### 1.2 Conversations by Location

This sub-view breaks down conversation volume by the geographic or infrastructure Location dimension. In the IX Hello platform, Location typically refers to the AWS region or data center location through which the conversation was processed (e.g., us-west-2, eu-central-1), but may also reflect business-defined location attributes depending on platform configuration.

Available filters: Date Selection Type, Start Date, End Date, Channel, Stage, Assistant, Location.

What this view answers: Are conversations distributed evenly across infrastructure regions? Is there an unexpected concentration of traffic in a specific location that may indicate regional routing issues or a data center affinity? Do specific locations show different volume patterns (e.g., is eu-central-1 traffic growing while us-west-2 stays flat)?

Business value: Location-level analytics support infrastructure capacity planning and regional performance monitoring. If a specific location shows a sudden volume spike or drop, it may indicate a routing change, infrastructure event, or regional service disruption. For global enterprises, location data also helps understand geographic demand distribution and informs decisions about where to deploy or scale AI assistant infrastructure.

### 1.3 Conversations by DNIS

DNIS (Dialed Number Identification Service) identifies the specific number, endpoint, or entry point that the user dialed or connected to in order to initiate the conversation. In chat channels, DNIS identifies the specific chat endpoint or service URL. In voice channels, it is the actual phone number dialed. Each DNIS entry represents a distinct service entry point — a specific campaign line, a dedicated support number, or a particular chat widget.

Available filters: Date Selection Type, Start Date, End Date, Channel, Stage, Assistant, DNIS.

What this view answers: Which entry points are generating the most conversation traffic? Is a specific phone number or chat endpoint receiving disproportionate volume? Are newly launched service entry points gaining adoption? Which DNIS values are associated with the highest transfer rates or longest durations?

Business value: DNIS analytics are particularly valuable for organizations running multiple service lines or campaigns. By correlating DNIS with disposition and duration (available in the list views), teams can evaluate the performance of each entry point independently. A DNIS with high abandon rates may indicate the wrong assistant is answering those calls. A DNIS with consistently high volume but low success rates may need a dedicated bot improvement project.

### 1.4 Conversations by Date per Disposition

This sub-view plots daily conversation volume as a multi-line or stacked time-series chart, with each line or segment representing a distinct Connection Disposition. It combines the time-series view of Section 20 with the disposition breakdown of Section 18, creating a unified view of how conversation endings trend over time.

Available filters: Date Selection Type, Start Date, End Date, Channel, Stage, Assistant, Business Day, Connection Disposition.

What this view answers: On which specific days did AppDisconnect, Timeout, UserDisconnect, or Transfer volumes spike or decline? Is there a correlation between the day of week and the prevalence of specific disposition types? Did a bot update on a particular date shift the distribution of disposition outcomes?

Business value: Trending disposition by date is the primary tool for detecting quality regressions tied to specific events. If a deployment on Thursday caused a spike in Timeout dispositions starting Friday, this chart will show it immediately. The Connection Disposition filter allows analysts to focus exclusively on the disposition type of interest — for example, filtering to Transfer only to see whether escalations to human agents are increasing or decreasing over time.

### 1.5 Max, Avg and Min Durations

This sub-view provides statistical duration analysis beyond the simple average shown in the AHT sub-view (Section 21). It surfaces three duration statistics simultaneously: the maximum conversation duration (longest single conversation), the average duration (mean across all conversations), and the minimum duration (shortest single conversation) — all trended over time.

Available filters: Date Selection Type, Start Date, End Date, Channel, Stage, Assistant, Business Day.

What this view answers: How wide is the range of conversation durations? Are there outlier conversations inflating the average? What is the floor (minimum) duration — are very short conversations (e.g., 2–3 seconds) appearing, which may indicate immediate disconnects or technical errors? On days with high average duration, is the maximum also extreme, or are durations uniformly distributed?

Business value: The spread between Max and Min duration is a powerful quality signal. A narrow spread (e.g., Max 90 secs, Min 45 secs, Avg 70 secs) indicates consistent conversation experiences — users are having similar interactions. A wide spread (e.g., Max 1,800 secs, Min 3 secs, Avg 200 secs) indicates high variability — some conversations are extremely long (loops, complex flows) and some extremely short (immediate failures). The minimum duration is particularly revealing: a consistent floor of 2–5 seconds suggests a population of conversations that are failing immediately at session start, likely due to authentication errors, flow misconfigurations, or connectivity issues.
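The spread analysis above can be sketched in a few lines. This is an illustrative example only — the durations, the `duration_stats` helper, and the 5-second "failure floor" threshold are assumptions for demonstration, not part of the IX Hello platform:

```python
def duration_stats(durations, failure_floor=5):
    """Compute the Max/Avg/Min spread for a list of conversation durations
    (in seconds) and the share of conversations at or below failure_floor,
    a possible signal of sessions failing immediately at start."""
    if not durations:
        return None
    avg = sum(durations) / len(durations)
    short = sum(1 for d in durations if d <= failure_floor)
    return {
        "max": max(durations),
        "avg": round(avg, 1),
        "min": min(durations),
        "short_share": round(short / len(durations), 3),
    }

# A wide Max/Min spread plus a very low floor suggests a mix of long,
# looping conversations and immediate failures (hypothetical sample data).
stats = duration_stats([1800, 3, 200, 180, 2, 240, 4])
```

A high `short_share` alongside a wide spread is the pattern the sub-view's Min line makes visible at a glance.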

### 1.6 Conversations by Date, Location and DNIS List

This sub-view provides a structured tabular breakdown combining all three geographic and routing dimensions — date, Location, and DNIS — into a single list view. Each row represents a unique combination of date, location, and DNIS endpoint, with the corresponding conversation count.

Available filters: Date Selection Type, Start Date, End Date, Channel, Stage, Assistant, Business Day, DNIS.

What this view answers: For each entry point (DNIS), on each day, and from each location, how many conversations occurred? Did a specific DNIS in a specific region spike on a particular date? Are certain location–DNIS combinations consistently the highest volume intersections?

Business value: This is the most granular routing-level view in the workbook. It enables pinpoint identification of traffic anomalies by triangulating three dimensions simultaneously. For incident response, when a volume spike is detected, this view allows analysts to answer: was it across all DNIS values (platform-wide), or specific to one entry point? Was it across all locations (global), or isolated to one region? This triangulation narrows the scope of investigation dramatically.
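The triangulation described above amounts to a three-key rollup plus single-dimension rollups for comparison. The following sketch uses hypothetical records and illustrative field ordering — it is not the platform's actual export schema:

```python
from collections import Counter

# Hypothetical (date, location, DNIS) tuples for a handful of conversations.
records = [
    ("2024-05-01", "us-west-2", "800-555-0101"),
    ("2024-05-01", "us-west-2", "800-555-0101"),
    ("2024-05-01", "eu-central-1", "800-555-0102"),
    ("2024-05-02", "us-west-2", "800-555-0101"),
]

# Each row of the list view: one unique (date, location, DNIS) combination
# with its conversation count.
counts = Counter(records)

# Triangulating a spike: roll up by DNIS and by location separately to see
# whether the anomaly is isolated to one entry point or one region.
by_dnis = Counter(dnis for _, _, dnis in records)
by_location = Counter(loc for _, loc, _ in records)
```

If `by_dnis` shows the spike concentrated in one entry point while `by_location` is flat, the investigation narrows to that DNIS rather than to infrastructure.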

### 1.7 Conversations by Assistant and Connection Disposition List

This sub-view is the most comprehensive list view in the workbook. It combines the assistant dimension with full Connection Disposition and location data in a single tabular report, enabling cross-dimensional performance analysis at the assistant level.

Available filters: Date Selection Type, Start Date, End Date, Stage, Channel, Assistant, Business Day, Assistant Disposition, Connection Disposition, Location.

This is the only sub-view in the workbook that exposes both Assistant Disposition and Connection Disposition as simultaneous filters. This enables a uniquely powerful query: for example, filter to Assistant Disposition = Failure AND Connection Disposition = AppDisconnect to find conversations where the bot determined it failed AND the platform force-ended the session — likely the worst possible user experience.

What this view answers: Which assistants generate the highest proportion of specific connection dispositions? Do certain assistants consistently end in Timeout while others end in UserDisconnect? When an assistant marks a conversation as a Failure, what connection disposition is most common? Is there a location that shows a specific assistant performing worse than in other locations?

Business value: The dual-disposition filter makes this the most powerful diagnostic view in the entire Conversations tab for root-cause analysis. By combining Assistant Disposition (the AI’s self-assessment) with Connection Disposition (the technical session ending), analysts can distinguish between: (a) conversations the AI failed but the user stayed engaged (Failure + UserDisconnect — user left after AI gave up), (b) conversations the AI succeeded but the platform crashed (Success + AppDisconnect — technical issue overriding a good experience), and (c) conversations that timed out regardless of AI performance (any disposition + Timeout — user became unresponsive). Each combination suggests a different remediation action.
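The dual-disposition query can be expressed as a simple conjunctive filter. The record shape and field names below are assumptions for illustration, not the platform's actual schema:

```python
# Hypothetical exported conversation records.
records = [
    {"assistant": "BillingBot", "assistant_disposition": "Failure",
     "connection_disposition": "AppDisconnect"},
    {"assistant": "BillingBot", "assistant_disposition": "Success",
     "connection_disposition": "UserDisconnect"},
    {"assistant": "SupportBot", "assistant_disposition": "Failure",
     "connection_disposition": "UserDisconnect"},
]

# Worst-case experience: the bot judged itself a failure AND the platform
# force-ended the session (Failure + AppDisconnect).
worst = [
    r for r in records
    if r["assistant_disposition"] == "Failure"
    and r["connection_disposition"] == "AppDisconnect"
]
```

Swapping the two predicates reproduces the other combinations described above (Failure + UserDisconnect, Success + AppDisconnect, any disposition + Timeout), each pointing to a different remediation path.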

### 1.8 Details Page

The Details Page is the record-level drill-down for the Locations-Disposition-Duration workbook. It presents individual conversation records with full metadata, enabling analysts to move from aggregate patterns to specific conversations for case-by-case investigation.

Available columns: Organization (UUID), ConversationId (UUID), Conversation Start Date, Conversation Start Time, Conversation End Date, Conversation End Time, ANI, DNIS, Channel, Stage, Assistant, Location, Connection Disposition, Duration (secs).

What this view answers: Given an anomaly detected in any of the sub-views above — for example, a spike in Timeout dispositions from a specific DNIS on a specific date — the Details Page allows the analyst to retrieve the specific ConversationIds involved. Those IDs can then be used in the Conversation Viewer (Basic Reporting) to pull and inspect the full conversation transcript, identifying exactly what went wrong at the message level.

Business value: The Details Page is the bridge between aggregate analytics and individual conversation forensics. It closes the loop between “something is wrong at the dashboard level” and “here is exactly which conversation to investigate in the Viewer.” The ConversationId is the key that connects every dashboard in the platform to the underlying transcript data.

### 2. Summary — What This Workbook Is For

The Locations-Disposition-Duration workbook serves teams that need to understand the operational and geographic dimensions of conversation performance — not just how many conversations happened, but where they came from, through which entry point, how they ended, and how long they took. It is the workbook to reach for during incident response (isolate by location and DNIS), quality audits (combine assistant and connection disposition), and capacity planning (identify high-volume locations and entry points). Its eight sub-views form a progressive analytical workflow: start with the dashboard for the overview, narrow by location or DNIS to isolate a pattern, cross-reference with disposition data to understand quality, validate duration statistics to assess efficiency, and drill to the Details Page to retrieve specific records for deeper investigation.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.ixhello.com/ix-hello-reporting/premium-reporting/total-conversations-with-contained-and-transferred-dedicated-sub-view/locations-disposition-duration-workbook-geographic-and-duration-analytics.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
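As a minimal sketch, the request URL can be built with the standard library's `urlencode`, which handles percent-encoding of the question. The helper name `ask_url` is illustrative:

```python
from urllib.parse import urlencode

# Page URL from the GET template above.
BASE = ("https://docs.ixhello.com/ix-hello-reporting/premium-reporting/"
        "total-conversations-with-contained-and-transferred-dedicated-sub-view/"
        "locations-disposition-duration-workbook-geographic-and-duration-analytics.md")

def ask_url(question):
    """Build the documentation-query URL for a natural-language question."""
    return BASE + "?" + urlencode({"ask": question})

url = ask_url("Which filters does the Details Page support?")
```

Performing an HTTP GET on the resulting URL (e.g., with `urllib.request.urlopen` or any HTTP client) returns the answer and supporting excerpts described above.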
