# search_orgs_users_mentors_retrieve

Handle GET requests for tenant-specific mentor search.

Args:

- `request`: HTTP request object
- `org`: Tenant/organization key
- `username`: Username of the user making the request

Returns:

- `Response`: DRF Response object with search results

Endpoint: `GET /api/search/orgs/{org}/users/{username}/mentors/`

Version: 3.59.0-ai-plus

## Query parameters:

- `audience` (array) Filter by target audience
- `category` (array) Filter by mentor category
- `created_by` (string) Filter mentors created by a specific user
- `featured` (boolean) Filter by featured status
- `id` (integer) Retrieve a specific mentor by ID
- `limit` (integer) Number of results per page
- `llm` (array) Filter by language model type
- `offset` (integer) Starting position for pagination
- `order_by` (string) Field to sort results by (`created_at`, `recently_accessed_at`)
- `order_direction` (string) Sort direction (`asc` or `desc`)
- `query` (string) Search term to filter mentors by name or description
- `tags` (array) Filter by tags
- `tenant` (string) Filter by tenant/organization
- `unique_id` (string) Retrieve a specific mentor by UUID

## Path parameters:

- `org` (string, required)
- `username` (string, required)
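Example request (a minimal sketch, not taken from the API's own documentation): the base URL, bearer-token authentication, and all parameter values below are placeholder assumptions, and array filters are assumed to be sent as repeated query keys. Adjust everything to your deployment.

```python
import requests

# Placeholder values -- substitute your deployment's host, credentials, and path params.
BASE_URL = "https://example.com"
API_TOKEN = "YOUR_API_TOKEN"
org = "my-org"
username = "jdoe"

params = {
    "query": "physics",            # free-text match on name/description
    "category": ["science"],       # array filters are serialized as repeated keys
    "featured": "true",            # boolean sent as a lowercase string (assumption)
    "order_by": "created_at",
    "order_direction": "desc",
    "limit": 10,
    "offset": 0,
}

resp = requests.get(
    f"{BASE_URL}/api/search/orgs/{org}/users/{username}/mentors/",
    params=params,
    headers={"Authorization": f"Bearer {API_TOKEN}"},  # auth scheme is an assumption
    timeout=30,
)
resp.raise_for_status()
data = resp.json()
print(f"{data['count']} mentors found across {data['total_pages']} pages")
```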
## Response 200 fields (application/json):

- `results` (array, required) List of mentors matching the search criteria
- `results.name` (string, required)
- `results.unique_id` (string)
- `results.flow` (any, required) The Langflow JSON for the mentor
- `results.slug` (string)
- `results.platform` (string)
- `results.allow_anonymous` (boolean)
- `results.metadata` (any,null)
- `results.enable_moderation` (boolean)
- `results.enable_post_processing_system` (boolean)
- `results.enable_openai_assistant` (boolean) (Deprecated) Set template mentor to openai-agent instead.
- `results.enable_total_grounding` (boolean) Whether to force the mentor to use only information within the provided documents.
- `results.enable_suggested_prompts` (boolean) Whether to show suggested prompts for the mentor or not. Note: Suggested prompts are created by tenant admins.
- `results.enable_guided_prompts` (boolean) Whether to show guided prompts for the mentor or not. Note: Guided prompts are created with an LLM based on chat history.
- `results.guided_prompt_instructions` (string) Instructions to determine how prompt suggestions are generated.
- `results.google_voice` (integer,null)
- `results.openai_voice` (integer,null)
- `results.categories` (array, required)
- `results.categories.id` (integer, required)
- `results.categories.description` (string,null)
- `results.categories.category_group` (integer,null)
- `results.categories.audience` (object, required)
- `results.categories.audiences` (array, required)
- `results.proactive_prompt` (string) Prompt template used to start a conversation with the user when `greeting_type` is `proactive_prompt`. This will be sent to the LLM so it can respond naturally.
- `results.moderation_system_prompt` (string) The prompt for the moderation system. This prompt must clearly distinguish between 'Appropriate' and 'Not Appropriate' queries.
- `results.post_processing_prompt` (string) Prompt used to alter or modify the final LLM response into any desired form.
- `results.moderation_response` (string) Desired feedback to return to the user when their prompt is deemed inappropriate.
- `results.safety_system_prompt` (string) Prompt to check whether the model's response is appropriate or not.
- `results.safety_response` (string) Feedback given to the user when a model generates an inappropriate response
- `results.enable_safety_system` (boolean)
- `results.proactive_response` (string) Response to start a conversation with a user.
- `results.greeting_method` (string) How the mentor should greet the user. `proactive_prompt`: allow the LLM to respond to the `proactive_prompt` message. `proactive_response`: use the `proactive_response` template without performing an LLM call. Enum: "proactive_prompt", "proactive_response"
- `results.call_configuration` (object)
- `results.call_configuration.mentor` (integer, required)
- `results.call_configuration.mode` (string) Enum: "realtime", "inference"
- `results.call_configuration.tts_provider` (string) Enum: "openai", "google", "elevenlabs"
- `results.call_configuration.stt_provider` (string) Enum: "openai", "google", "deepgram", "cartesia"
- `results.call_configuration.llm_provider` (string) Enum: "openai", "google"
- `results.call_configuration.use_function_calling_for_rag` (boolean) Whether to use function calls in the agent or force RAG calls before LLM generation
- `results.call_configuration.openai_voice_id` (integer,null, required)
- `results.call_configuration.google_voice_id` (integer,null, required)
- `results.call_configuration.enable_video` (boolean) Whether to enable video for the call (applicable only in realtime mode)
- `results.call_configuration.platform_key` (string, required)
- `results.mcp_servers` (array, required)
- `results.mcp_servers.url` (string, required) The URL of the MCP server.
- `results.mcp_servers.transport` (string) Enum: "sse", "websocket"
- `results.mcp_servers.headers` (any) Headers to send to the MCP server. Useful for authentication.
- `results.mcp_servers.created_at` (string, required)
- `results.mcp_servers.updated_at` (string, required)
- `results.last_accessed_by` (integer,null) edX user ID
- `results.recently_accessed_at` (string,null)
- `results.created_by` (string,null)
- `count` (integer, required) Total number of mentors matching the search criteria
- `next` (string,null, required) URL for the next page of results
- `previous` (string,null, required) URL for the previous page of results
- `current_page` (integer, required) Current page number
- `total_pages` (integer, required) Total number of pages
- `facets` (object) Facet information for filtering
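Because results are paginated (`count`, `next`, `previous`, `current_page`, `total_pages`), a client typically follows the `next` link until it is null. A minimal sketch, again assuming bearer-token auth and placeholder URLs:

```python
import requests

def iter_mentors(first_page_url: str, token: str):
    """Yield every mentor in a search by following `next` links.

    Minimal sketch: the bearer-token scheme is an assumption, and there is
    no retry or backoff handling.
    """
    url = first_page_url
    while url:
        resp = requests.get(
            url,
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )
        resp.raise_for_status()
        page = resp.json()
        # Each item carries the fields documented above (`name`, `unique_id`, ...).
        yield from page["results"]
        url = page["next"]  # None on the last page, which ends the loop


# Example usage with placeholder values:
# for mentor in iter_mentors(
#     "https://example.com/api/search/orgs/my-org/users/jdoe/mentors/?limit=50",
#     "YOUR_API_TOKEN",
# ):
#     print(mentor["name"])
```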