# ai_mentor_orgs_users_list

Retrieve a list of mentors.

Returns:

- List of mentors matching the filters.

Endpoint: GET /api/ai-mentor/orgs/{org}/users/{user_id}/

Version: 3.59.0-ai-plus

Security: PlatformApiKeyAuthentication

## Query parameters:

- `department_id` (integer) Department to filter by.
- `filter_by` (string) Field to filter by; options are `date` and `name`. Defaults to `date`.
- `metadata_key` (string) Metadata key to query by.
- `metadata_value` (string) Metadata value to filter by.
- `page` (integer) A page number within the paginated result set.
- `page_size` (integer) Number of results to return per page.
- `return_session_information` (boolean) Whether session information should be included in the mentor data.
- `visibility` (string) Visibility type to query by.

## Path parameters:

- `org` (string, required)
- `user_id` (string, required)
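For orientation, here is a minimal request sketch in Python using the `requests` library. The base URL and the `Authorization: Api-Key …` header scheme are assumptions about how PlatformApiKeyAuthentication is configured; substitute whatever your deployment expects.

```python
import requests

# Assumed base URL and header scheme for PlatformApiKeyAuthentication; adjust to your deployment.
BASE_URL = "https://api.example.org"
API_KEY = "your-platform-api-key"

def list_mentors(org: str, user_id: str, **query) -> dict:
    """GET /api/ai-mentor/orgs/{org}/users/{user_id}/ with optional query filters."""
    resp = requests.get(
        f"{BASE_URL}/api/ai-mentor/orgs/{org}/users/{user_id}/",
        headers={"Authorization": f"Api-Key {API_KEY}"},  # assumed auth header
        params=query,  # e.g. department_id, filter_by, metadata_key/metadata_value, visibility
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# First page of mentors for a department, filtered by name, with session information included.
page = list_mentors(
    "my-org", "42",
    department_id=7,
    filter_by="name",
    return_session_information=True,
    page=1,
    page_size=20,
)
print(page["count"], "mentors match the filters")
```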
## Response 200 fields (application/json):

- `count` (integer, required) Example: 123
- `next` (string,null) Example: "http://api.example.org/accounts/?page=4"
- `previous` (string,null) Example: "http://api.example.org/accounts/?page=2"
- `results` (array, required)
- `results.name` (string, required)
- `results.unique_id` (string)
- `results.flow` (any, required) The Langflow JSON for the mentor.
- `results.slug` (string)
- `results.platform` (string)
- `results.allow_anonymous` (boolean)
- `results.metadata` (any,null)
- `results.enable_moderation` (boolean)
- `results.enable_post_processing_system` (boolean)
- `results.enable_openai_assistant` (boolean) (Deprecated) Set the template mentor to openai-agent instead.
- `results.enable_total_grounding` (boolean) Whether to force the mentor to only use information from the provided documents.
- `results.enable_suggested_prompts` (boolean) Whether to show suggested prompts for the mentor. Note: suggested prompts are created by tenant admins.
- `results.enable_guided_prompts` (boolean) Whether to show guided prompts for the mentor. Note: guided prompts are generated by an LLM based on chat history.
- `results.guided_prompt_instructions` (string) Instructions that determine how prompt suggestions are generated.
- `results.google_voice` (integer,null)
- `results.openai_voice` (integer,null)
- `results.categories` (array, required)
- `results.categories.id` (integer, required)
- `results.categories.description` (string,null)
- `results.categories.category_group` (integer,null)
- `results.categories.audience` (object, required)
- `results.categories.audiences` (array, required)
- `results.proactive_prompt` (string) Prompt template used to start a conversation with the user when `greeting_method` is `proactive_prompt`. This is sent to the LLM so it can respond naturally.
- `results.moderation_system_prompt` (string) The prompt for the moderation system. This prompt must clearly distinguish between 'Appropriate' and 'Not Appropriate' queries.
- `results.post_processing_prompt` (string) Prompt used to alter or modify the final LLM response into the desired form.
- `results.moderation_response` (string) Feedback returned to the user when their prompt is deemed inappropriate.
- `results.safety_system_prompt` (string) Prompt used to check whether the model's response is appropriate.
- `results.safety_response` (string) Feedback given to the user when the model generates an inappropriate response.
- `results.enable_safety_system` (boolean)
- `results.proactive_response` (string) Response used to start a conversation with a user.
- `results.greeting_method` (string) How the mentor should greet the user. `proactive_prompt`: allow the LLM to respond to the proactive_prompt message. `proactive_response`: use the proactive_response template without performing an LLM call. Enum: "proactive_prompt", "proactive_response"
- `results.call_configuration` (object)
- `results.call_configuration.mentor` (integer, required)
- `results.call_configuration.mode` (string) Enum: "realtime", "inference"
- `results.call_configuration.tts_provider` (string) Enum: "openai", "google", "elevenlabs"
- `results.call_configuration.stt_provider` (string) Enum: "openai", "google", "deepgram", "cartesia"
- `results.call_configuration.llm_provider` (string) Enum: "openai", "google"
- `results.call_configuration.use_function_calling_for_rag` (boolean) Whether to use function calls in the agent or force RAG calls before LLM generation.
- `results.call_configuration.openai_voice_id` (integer,null, required)
- `results.call_configuration.google_voice_id` (integer,null, required)
- `results.call_configuration.enable_video` (boolean) Whether to enable video for the call (applicable only in realtime mode).
- `results.call_configuration.platform_key` (string, required)
- `results.mcp_servers` (array, required)
- `results.mcp_servers.url` (string, required) The URL of the MCP server.
- `results.mcp_servers.transport` (string) Enum: "sse", "websocket"
- `results.mcp_servers.headers` (any) Headers to send to the MCP server. Useful for authentication.
- `results.mcp_servers.created_at` (string, required)
- `results.mcp_servers.updated_at` (string, required)
- `results.last_accessed_by` (integer,null) edX user ID
- `results.recently_accessed_at` (string,null)
- `results.created_by` (string,null)
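Since the response is paginated via `count`, `next`, and `previous`, a common pattern is to follow the `next` links until they are null. Below is a minimal sketch of that loop, reusing the assumed base URL and `Api-Key` header scheme from the request example above.

```python
import requests

# Assumed base URL and header scheme for PlatformApiKeyAuthentication; adjust to your deployment.
BASE_URL = "https://api.example.org"
session = requests.Session()
session.headers["Authorization"] = "Api-Key your-platform-api-key"

def iter_mentors(org: str, user_id: str, **query):
    """Yield every mentor across all pages by following the `next` links."""
    url = f"{BASE_URL}/api/ai-mentor/orgs/{org}/users/{user_id}/"
    params = query
    while url:
        resp = session.get(url, params=params, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        yield from payload["results"]
        url = payload["next"]  # null (None) once the last page is reached
        params = None          # `next` already carries the query string

# Walk all mentors for a user, 50 per page.
for mentor in iter_mentors("my-org", "42", page_size=50):
    print(mentor["name"], mentor.get("slug"))
```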