Copilot Dashboard
To access the Copilot Dashboard, navigate to the Copilot page and select the desired copilot from the available list. Click Actions for that copilot and select View history from the actions menu. On the Copilot History page, click the Dashboard button to view the copilot dashboard.
The following details explain how to read the copilot dashboard.
Data can be viewed and analyzed at either daily or hourly granularity. The default granularity is hourly.
By default, the dashboard displays data from the previous day's date to the current date, at hourly granularity.
With Hourly granularity, the selected date range cannot exceed 48 hours; the start and end dates must fall within this two-day window.
With Daily granularity, the selected date range cannot exceed 30 days; the start and end dates must be no more than a month apart.
When a date filter or granularity is changed, all the charts update with the appropriate values dynamically.
When you click the Clear filter option, the selected date range and granularity reset to their default values.
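The granularity rules above can be sketched as a simple validation check. This is an illustrative sketch only; the function name, limit table, and error messages are assumptions, not part of the product.

```python
from datetime import datetime, timedelta

# Maximum allowed date range per granularity, per the dashboard rules above.
MAX_RANGE = {
    "hourly": timedelta(hours=48),  # Hourly: range must not exceed 48 hours
    "daily": timedelta(days=30),    # Daily: range must not exceed 30 days
}

def validate_date_range(start: datetime, end: datetime, granularity: str) -> None:
    """Raise ValueError if the range violates the limit for the chosen granularity."""
    if end < start:
        raise ValueError("End date must not precede start date")
    limit = MAX_RANGE[granularity]
    if end - start > limit:
        raise ValueError(f"{granularity} granularity allows a range of at most {limit}")

# A 36-hour range is valid at hourly granularity.
validate_date_range(datetime(2024, 1, 1, 0), datetime(2024, 1, 2, 12), "hourly")
```

A range of three days with hourly granularity would raise a ValueError, mirroring the dashboard's 48-hour restriction.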
Cost and usage details for the copilot can be visualized using the following dashboards: Total Cost, Total API Requests, and Total Tokens.
Total Cost
This dashboard provides a daily or hourly breakdown of the costs associated with the selected copilot for the specified time period.

Total API Requests
This dashboard illustrates the total number of API requests made to the copilot within the selected timeframe.

Total Tokens
This dashboard illustrates the count of input and output tokens processed by the copilot over the selected timeframe.

Top copilot users by number of questions
This graph shows the top copilot users ranked by the number of questions they asked within the specified date range.

Filter by
The Filter by capability enables users to apply configurable criteria to limit the records returned and displayed (e.g., requests, questions, responses, and related metadata). Once the criteria are selected and the query is executed, the results are updated to show only records that satisfy the specified conditions. All summaries and insights are generated exclusively from this filtered dataset.
Available filter categories:
Date Range: Records within a defined time window.
Sessions: Records matching a user name or API session.
Feedback: Records containing specified feedback signals (e.g., vote up, vote down).
Status: Records categorized by outcome (e.g., success, failure).
Topic Analysis: Records associated with selected topics/classifications.
Online Evaluation Filters: Records filtered by evaluation metrics and prompt evaluation attributes.
Guardrail Filters: Records filtered by guardrail outcomes and related checks.
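The Filter by behavior described above can be sketched as conjunctive filtering: a record is kept only if it satisfies every criterion that is set. This is a minimal illustration; the record fields (date, status, feedback, topic) and the function name are assumptions for demonstration, not the product's actual schema.

```python
from datetime import date

# Hypothetical copilot request records.
records = [
    {"date": date(2024, 5, 1), "status": "success", "feedback": "vote up",   "topic": "billing"},
    {"date": date(2024, 5, 2), "status": "failure", "feedback": "vote down", "topic": "login"},
    {"date": date(2024, 5, 3), "status": "success", "feedback": "vote up",   "topic": "billing"},
]

def apply_filters(records, start=None, end=None, status=None, feedback=None, topic=None):
    """Return only records that satisfy every specified criterion (unset criteria are ignored)."""
    out = []
    for r in records:
        if start is not None and r["date"] < start:
            continue
        if end is not None and r["date"] > end:
            continue
        if status is not None and r["status"] != status:
            continue
        if feedback is not None and r["feedback"] != feedback:
            continue
        if topic is not None and r["topic"] != topic:
            continue
        out.append(r)
    return out

# Summaries and insights would then be computed over the filtered set only.
filtered = apply_filters(records, status="success", topic="billing")
```

Here only the two successful billing records survive; as the section notes, all downstream summaries are generated exclusively from this filtered dataset.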

Copilot Activity
The Copilot Activity dashboard delivers a consolidated, high-level view of Copilot utilization and operational performance. It provides visibility into adoption metrics, request success/failure rates, response latency and efficiency, topic distribution and trends, and user feedback signals, enabling teams to continuously monitor service health and rapidly pinpoint optimization opportunities.
Copilot Users
Highlights overall engagement by showing the top copilot users based on the number of questions asked. This view helps you understand adoption patterns and whether usage is distributed broadly or concentrated among a few active users.

Request Status
Shows how copilot requests are performing across key states such as Success, Failed, In Progress, and Aborted. It provides a quick reliability check and helps detect stability or error trends in the request pipeline.

Feedback
The Feedback section consolidates user quality signals such as Thumbs Up/Down, ratings, and accuracy assessments. It supports evaluation of response effectiveness and helps identify improvement opportunities.

Processing Time per Request
This visualization presents the processing duration (in seconds) for each copilot request, providing clear visibility into response latency and performance variability across requests. Each data point corresponds to a single request.
On hover, the chart displays the Request ID along with the exact processing time for that request. On click, it navigates to the associated trace view, enabling deeper diagnostics and end-to-end analysis for the selected Request ID.

Topic Trends
The Topic Trends chart shows how mentions of a selected topic change over time.

Topic Distribution
The Topic Distribution chart summarizes how overall Topic Mentions are allocated across topics within the selected context. It provides an at-a-glance view of the relative contribution of each topic to total activity.

Online Evaluation Metrics
The Online Evaluation Metrics chart provides a consolidated view of response quality scores in Copilot when node-level Online Evaluation is enabled in the recipe. These scores are generated by the NLA (Natural Language Assistant) endpoint and are surfaced in copilot history.
The chart is driven by the Select Node setting; changing the selection refreshes the metrics for the selected node and updates the title accordingly.
Depending on the selected node, the chart is titled Online Evaluation Metrics – Prompt or Online Evaluation Metrics – Agent.
The chart summarizes performance across:
Answer Relevancy: alignment of the response with the user request and intent.
Answer Faithfulness: grounding of the response in available context, without unsupported assertions.
Context Sufficiency: adequacy of available context to generate a reliable response.
Guideline Adherence: compliance with configured guidance and behavioral constraints.
For each metric, the chart reports:
Mean: average score across evaluated executions.
Median: representative score that is less sensitive to outliers.
Standard Deviation: variability of scores across executions, indicating consistency (lower) versus fluctuation (higher).
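To illustrate how these three statistics relate, here is a small sketch over hypothetical per-execution scores using Python's standard library. The score values and the 0-to-1 scale are assumptions for demonstration only.

```python
import statistics

# Hypothetical Answer Relevancy scores from five evaluated executions.
scores = [0.92, 0.88, 0.95, 0.70, 0.91]

mean = statistics.mean(scores)      # average score across executions
median = statistics.median(scores)  # less sensitive to the 0.70 outlier
stdev = statistics.stdev(scores)    # lower value indicates more consistent scores

print(f"mean={mean:.3f} median={median:.3f} stdev={stdev:.3f}")
```

Note how the single low score (0.70) pulls the mean below the median while inflating the standard deviation, which is exactly why the chart reports all three values per metric.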
