
Description

The Transcripts tab lets instructors and admins review real conversations between learners and a specific mentorAI—with context and cost metrics.

See summary KPIs (messages per conversation, average rating, estimated cost), search by topic or user, and open any session to read the exact learner inputs and mentor replies.

Use these insights to improve instruction and tune your mentor’s prompts, datasets, and tools.


Target Audience

Instructor · Administrator


Features

Session KPIs at a Glance

Headline metrics for the selected period:

  • Avg messages per conversation
  • Avg conversation cost
  • Avg rating

Global Counts

Totals for conversations, user queries, and assistant responses in the time window.
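
If you export the underlying conversation data (see Exports Available Elsewhere below), these summary figures can be reproduced for your own reporting. The sketch below is a minimal, hypothetical example: it assumes each exported row represents one conversation and uses made-up field names (user_messages, assistant_messages, estimated_cost, rating); check the headers of your actual Data Reports → Chat History export.

```python
# Minimal sketch: recompute the Transcripts summary KPIs and global counts
# from exported conversation rows. Field names are hypothetical -- check the
# headers of your actual Data Reports -> Chat History export.

conversations = [
    {"user_messages": 4, "assistant_messages": 4, "estimated_cost": 0.012, "rating": 5},
    {"user_messages": 7, "assistant_messages": 7, "estimated_cost": 0.031, "rating": 4},
    {"user_messages": 3, "assistant_messages": 3, "estimated_cost": 0.009, "rating": None},  # unrated session
]

total_conversations = len(conversations)
total_user_queries = sum(c["user_messages"] for c in conversations)
total_assistant_responses = sum(c["assistant_messages"] for c in conversations)
rated = [c["rating"] for c in conversations if c["rating"] is not None]

avg_messages = (total_user_queries + total_assistant_responses) / total_conversations
avg_cost = sum(c["estimated_cost"] for c in conversations) / total_conversations
avg_rating = sum(rated) / len(rated) if rated else None

print(f"Conversations: {total_conversations}, "
      f"user queries: {total_user_queries}, "
      f"assistant responses: {total_assistant_responses}")
print(f"Avg messages/conversation: {avg_messages:.1f}")
print(f"Avg conversation cost: ${avg_cost:.4f}")
print(f"Avg rating: {avg_rating if avg_rating is not None else 'n/a'}")
```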

Search & Filter

Find transcripts by topic (e.g., “mitosis”) or by user when you need to check on a particular learner.

Per-Conversation Details

For each transcript, view:

  • User name/username (or Anonymous)
  • Mentor name
  • LLM model
  • Messages exchanged
  • Estimated cost (see the sketch after this list)
  • Timestamp
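
The Estimated cost field is typically derived from token usage and the selected LLM model's pricing. The platform's exact formula isn't documented here; the sketch below only illustrates the common per-million-token calculation, and the model name and rates are placeholders, not real pricing.

```python
# Hypothetical sketch of how a per-conversation cost estimate is commonly
# derived: tokens consumed times the model's per-million-token rates.
# The model name and rates below are placeholders, not the platform's pricing.

EXAMPLE_RATES_PER_MILLION_TOKENS = {
    # model name: (prompt/input rate, completion/output rate) in USD
    "example-model": (0.50, 1.50),
}

def estimate_conversation_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Return an estimated cost in USD for one conversation."""
    input_rate, output_rate = EXAMPLE_RATES_PER_MILLION_TOKENS[model]
    return (prompt_tokens / 1_000_000) * input_rate + (completion_tokens / 1_000_000) * output_rate

# e.g. a session that consumed 12,000 prompt tokens and 4,000 completion tokens
print(f"${estimate_conversation_cost('example-model', 12_000, 4_000):.4f}")
```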

Full Conversation Viewer

Read the exact learner questions and the mentor’s responses to evaluate clarity, tone, and accuracy.

Exports Available Elsewhere

Download conversation data from Data Reports → Chat History when you need spreadsheets or BI analysis; quick viewing is also available in Chat History.


How to Use (step by step)

Open Analytics → Transcripts

  • In the mentor header, click Analytics, then select the Transcripts tab.

Review Summary Metrics

  • Scan avg messages/conversation, avg cost, and avg rating to gauge conversation quality and efficiency.

Search by Topic or User

  • Use the search bar to locate transcripts about a specific concept or for a specific learner you want to check in on.

Inspect Global Counts

  • Note totals for conversations, user queries, and assistant responses to understand overall load and activity.

Open a Transcript

Click any session to view:

  • User identity (or Anonymous) and username (if login is required)
  • Mentor, LLM model, messages exchanged, estimated cost, and timestamp
  • The exact Q&A exchange between learner and mentor

Decide Follow-Ups

  • If you see confusion or low ratings, adjust:
    • Prompts
    • Datasets
    • Tools (e.g., Web Search, Code Interpreter)
  • Reach out to specific learners based on what you observe.

Export if Needed

  • Go to Data Reports → Chat History to download CSVs for deeper analysis or archival.
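
Once downloaded, the CSV can be opened in any spreadsheet or analysis tool. The sketch below assumes a pandas environment and invents the file name and column names ("user", "mentor", "rating"); replace them with the headers of your actual export.

```python
# Minimal sketch: load an exported chat-history CSV for deeper analysis.
# The file name and column names ("user", "mentor", "rating") are
# assumptions -- replace them with the headers in your actual export.

import pandas as pd

df = pd.read_csv("chat_history_export.csv")

# Conversations and average rating per user, to spot learners who may need follow-up
per_user = df.groupby("user").agg(
    conversations=("mentor", "count"),
    avg_rating=("rating", "mean"),
)
print(per_user.sort_values("avg_rating").head(10))
```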

Pedagogical Use Cases

Identify Knowledge Gaps

Spot recurring misunderstandings and create targeted reviews, examples, or mini-lessons.

Quality & Tone Assurance

Ensure responses are accurate, on-brand, and student-friendly; refine the System Prompt where needed.

Support at the Right Time

Use user-level transcripts to reach out with resources or office-hour invitations.

Measure Impact of Changes

After updating prompts, datasets, or tools, compare the ratings, message lengths, and costs of new transcripts with those from before the change.
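
One way to quantify this, assuming you export the transcript data, is to tag each conversation as before or after the update date and compare averages. The column names below (timestamp, rating, message_count, estimated_cost) and the file name are hypothetical; adapt them to your export.

```python
# Hypothetical sketch: compare transcript metrics before and after a prompt,
# dataset, or tool change. File and column names are assumptions about the export.

import pandas as pd

df = pd.read_csv("chat_history_export.csv", parse_dates=["timestamp"])
change_date = pd.Timestamp("2024-09-01")  # date you updated the mentor

df["period"] = df["timestamp"].apply(lambda t: "after" if t >= change_date else "before")
comparison = df.groupby("period")[["rating", "message_count", "estimated_cost"]].mean()
print(comparison)
```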

Assessment & Curriculum Tuning

Topics that dominate transcripts may indicate where lecture materials or assignments need clarification.


Use Transcripts to move beyond surface metrics—read the conversations themselves, understand learner needs, and continuously improve both your teaching and your mentorAI.