Description

Chat Ratings gives instructors a quick, rolling snapshot of how learners are experiencing a specific mentorAI by connecting the History (recent chats) and Memory (saved user context) features.

The rating aggregates the past 24 hours of learner interactions and refreshes daily, helping you see what’s working, what’s not, and where to intervene.


Target Audience

Instructor


Features

24-Hour Rolling Rating

Calculates a mentor’s learner-experience rating from the most recent 24 hours of chat activity; updates automatically every day.
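The platform doesn't publish its formula, but a 24-hour rolling rating behaves like a windowed average over recent chats. A minimal sketch in Python, assuming each chat carries a learner score and a timestamp (ChatRecord and rolling_rating are illustrative names, not part of the product's API):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class ChatRecord:
    mentor_id: str       # which mentor handled the chat
    score: float         # learner-experience score for this chat
    timestamp: datetime  # when the chat happened (UTC)

def rolling_rating(chats: list[ChatRecord], mentor_id: str,
                   window_hours: int = 24) -> float | None:
    """Average score for one mentor over the trailing window; None if no chats."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=window_hours)
    recent = [c.score for c in chats
              if c.mentor_id == mentor_id and c.timestamp >= cutoff]
    return sum(recent) / len(recent) if recent else None
```

Because the window slides, yesterday's activity ages out of the average on its own, which is why the rating can be rechecked each day after a change.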

History × Memory Integration

Links recent conversation data (History) with user context (Memory) to ground ratings in real usage, not one-off anecdotes.
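Conceptually, this is a join between recent chats and each learner's saved memories. A minimal sketch, assuming both are plain dicts sharing a learner_id key (the field names are illustrative, not the platform's schema):

```python
from collections import defaultdict

def pair_history_with_memory(chats: list[dict], memories: list[dict]) -> list[tuple]:
    """Attach each learner's saved memories to their recent chats."""
    by_learner = defaultdict(list)
    for memory in memories:
        by_learner[memory["learner_id"]].append(memory)
    # Each chat is returned with the saved context for that learner, so a
    # rating can be read against what the mentor knew about the user.
    return [(chat, by_learner.get(chat["learner_id"], [])) for chat in chats]
```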

Per-Mentor View

Ratings are scoped to the specific mentor (e.g., “mentorAI”), allowing accurate comparisons between mentors.

Actionable Insight

Use the rating trend to spot when learners are thriving or struggling, and prioritize follow-ups or prompt refinements.


How to Use (step by step)

Open the Mentor

  • Select the mentor you want to review (e.g., mentorAI).

Verify Memory Is Enabled

  • Go to Memory to confirm it’s On and (optionally) that Reference Saved Memories is enabled.
  • You can browse which learners have saved memories such as:
    • Personal Information
    • Knowledge Gaps
    • Help Requests
    • Lessons Learned

Check the Chat Rating

  • Open History (or view the rating indicator in the mentor’s overview, if available).
  • View the 24-hour rating that reflects recent learner experiences with this mentor.

Drill Into Evidence

  • In History, review recent transcripts from the same time window to understand why the rating changed.
  • Cross-reference with Memory entries for those users (e.g., known gaps or help requests) to see whether the mentor addressed them effectively; a sketch of this cross-check appears below.
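As an illustration of that cross-check, the snippet below flags recent chats from learners whose saved memories record Knowledge Gaps or Help Requests. It reuses the illustrative dict shapes from the earlier sketch; none of these field names come from the platform:

```python
def flag_unresolved(chats: list[dict], memories: list[dict]) -> list[dict]:
    """Return recent chats from learners with open gaps or help requests."""
    needs_help = {m["learner_id"] for m in memories
                  if m["category"] in ("Knowledge Gaps", "Help Requests")}
    # These transcripts are the first place to look when the rating dips.
    return [chat for chat in chats if chat["learner_id"] in needs_help]
```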

Take Action

  • If the rating dips, adjust one or more factors:
    • Prompts – refine tone, structure, or guidance.
    • Datasets – fill content gaps.
    • Tools – enable relevant features (e.g., Web Search, Code Interpreter).
  • Recheck the rating the next day to assess the impact of your changes.

Pedagogical Use Cases

Early Warning for Struggle

A downward trend can signal confusion: review transcripts, add resources, or tweak prompts to clarify key concepts.

Quality & Tone Assurance

Ensure the mentor’s responses align with course expectations; refine the System Prompt or tone as needed.

Measure Improvements

After changing prompts, datasets, or tools, use the next day’s rating to validate that your intervention improved learner experience.
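If you note the daily rating yourself, measuring impact is a simple before/after comparison. A minimal sketch with hypothetical placeholder values, not real data:

```python
def rating_delta(before: float, after: float) -> float:
    """Positive delta suggests the change helped; negative suggests it didn't."""
    return after - before

# Hypothetical readings noted from the mentor's overview on consecutive days.
print(rating_delta(before=3.5, after=4.0))  # 0.5 -> learner experience improved
```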

Targeted Support

Combine rating trends with Memory insights (knowledge gaps, help requests) to identify and reach out to specific learners or cohorts needing support.


With Chat Ratings, you get a simple, always-current gauge of learner experience, grounded in the last day of real conversations, so you can keep each mentorAI effective, supportive, and on track.