
ibl-data-manager (3.59.0-ai-plus)

API for iblai

Servers

Mock server: https://docs.ibl.ai/_mock/apis/ibl/

https://base.manager.iblai.app/

ai-mentor

Operations

ai_mentor_orgs_users_mentor_categories_destroy

Request

Delete a mentor category.

Accessible only to tenant admins.

Returns: 204: No content when the delete succeeds. 400: Bad request data received.

Security
PlatformApiKeyAuthentication
Path
org (string, required)
user_id (string, required)
curl -i -X DELETE \
  'https://docs.ibl.ai/_mock/apis/ibl/api/ai-mentor/orgs/{org}/users/{user_id}/mentor/categories/' \
  -H 'Authorization: YOUR_API_KEY_HERE'

Responses

No response body
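
For illustration, a minimal Python sketch of the same request using the requests library. The base URL, path assembly, and raw API-key Authorization header mirror the curl example above; the helper name and example arguments are assumptions, not part of the documented API.

import requests

BASE_URL = "https://docs.ibl.ai/_mock/apis/ibl"  # mock server; swap in your own deployment base URL
API_KEY = "YOUR_API_KEY_HERE"  # placeholder platform API key

def delete_mentor_categories(org: str, user_id: str) -> None:
    # DELETE /api/ai-mentor/orgs/{org}/users/{user_id}/mentor/categories/
    url = f"{BASE_URL}/api/ai-mentor/orgs/{org}/users/{user_id}/mentor/categories/"
    response = requests.delete(url, headers={"Authorization": API_KEY})
    response.raise_for_status()  # a 204 No Content response indicates the delete succeeded

delete_mentor_categories("example-org", "example-user")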

ai_mentor_orgs_users_mentor_seed_retrieve

Request

Seed initial mentors in a tenant.

Args:
  request: The HTTP request.
  org: The organization/tenant identifier.
  user_id: The ID of the user initiating the seeding.

Returns: Response: A success message with details about the seeded mentors.

Raises: BadRequest: If the seeding process fails.

Security
PlatformApiKeyAuthentication
Path
org (string, required)
user_id (string, required)
curl -i -X GET \
  'https://docs.ibl.ai/_mock/apis/ibl/api/ai-mentor/orgs/{org}/users/{user_id}/mentor/seed/' \
  -H 'Authorization: YOUR_API_KEY_HERE'

Responses

Body: application/json
detail (string, required)
Response
application/json
{ "detail": "Successfully seeded 5 mentors in the tenant." }

ai_mentor_orgs_users_mentors_retrieve

Request

Retrieve details of a specific mentor by slug or name.

Security
PlatformApiKeyAuthentication
Path
mentor (string, required)
org (string, required)
user_id (string, required)
curl -i -X GET \
  'https://docs.ibl.ai/_mock/apis/ibl/api/ai-mentor/orgs/{org}/users/{user_id}/mentors/{mentor}/' \
  -H 'Authorization: YOUR_API_KEY_HERE'

Responses

Body: application/json
name (string, <= 255 characters, required)
unique_id (string, uuid)
flow (any, required)

The Langflow JSON for the mentor.

slug (string, <= 255 characters, pattern ^[-a-zA-Z0-9_]+$)
platform (string, <= 255 characters)
allow_anonymous (boolean)
metadata (any or null)
enable_moderation (boolean)
enable_post_processing_system (boolean)
enable_openai_assistant (boolean)

(Deprecated) Set template mentor to openai-agent instead.

enable_total_grounding (boolean)

Whether to force the mentor to only use information within the provided documents.

enable_suggested_prompts (boolean)

Whether to show suggested prompts for the mentor or not. Note: Suggested prompts are created by tenant admins.

enable_guided_prompts (boolean)

Whether to show guided prompts for the mentor or not. Note: Guided prompts are created with an LLM based on chat history.

guided_prompt_instructions (string)

Instructions to determine how prompt suggestions are generated.

google_voice (integer or null)
openai_voice (integer or null)
categories (Array of MentorCategory objects, required)
categories[].id (integer, read-only, required)
categories[].name (string, <= 255 characters, required)
categories[].description (string or null, <= 255 characters)
categories[].category_group (integer or null)
categories[].audience (object, MentorAudience, required)
categories[].audience.id (integer, read-only, required)
categories[].audience.name (string, <= 255 characters, required)
categories[].audience.description (string or null, <= 255 characters)
categories[].audiences (Array of MentorAudience objects, required)
categories[].audiences[].id (integer, read-only, required)
categories[].audiences[].name (string, <= 255 characters, required)
categories[].audiences[].description (string or null, <= 255 characters)
proactive_prompt (string)

Prompt template used to start a conversation with the user when greeting_type is proactive_prompt. This will be sent to the LLM so it can respond naturally.

moderation_system_prompt (string)

The prompt for the moderation system. This prompt must clearly distinguish between 'Appropriate' and 'Not Appropriate' queries.

post_processing_prompt (string)

Prompt used to alter or modify the final LLM response into any desired form.

moderation_response (string)

Desired feedback to return to the user when their prompt is deemed inappropriate.

safety_system_prompt (string)

Prompt used to check whether the model's response is appropriate or not.

safety_response (string)

Feedback given to the user when the model generates an inappropriate response.

enable_safety_system (boolean)
proactive_response (string)

Response to start a conversation with a user.

greeting_method (string, enum)
  • proactive_prompt - Proactive Prompt
  • proactive_response - Proactive Response
call_configuration (object, CallConfiguration)
mcp_servers (Array of MCPServer objects, required)
mcp_servers[].id (integer, read-only, required)
mcp_servers[].platform (integer, read-only, required)
mcp_servers[].name (string, <= 255 characters, required)
mcp_servers[].url (string, uri, <= 200 characters, required)

The URL of the MCP server.

mcp_servers[].transport (string, TransportEnum)
  • sse - Sse
  • websocket - Websocket
mcp_servers[].headers (any)

Headers to send to the MCP server. Useful for authentication.

mcp_servers[].platform_key (string, read-only, required)
mcp_servers[].created_at (string, date-time, read-only, required)
mcp_servers[].updated_at (string, date-time, read-only, required)
last_accessed_by (integer or null, 0 .. 2147483647)

edX user ID

recently_accessed_at (string or null, date-time)
created_by (string or null, <= 255 characters)
created_at (string or null, date-time, read-only, required)
updated_at (string or null, date-time, read-only, required)
Response
application/json
{ "name": "mentorAI", "platform": "main", "slug": "ai-mentor", "description": "Upbeat, encouraging tutor helping students understand concepts by explaining ideas and asking questions.", "allow_anonymous": false, "pathways": [], "suggested_prompts": [ "" ], "llm_provider": "IBLChatOpenAI", "system_prompt": "Wrap all responses in MARKDOWN formatted text.", "metadata": { "admin": true, "student": true, "featured": true, "allow_to_use_as_template": true }, "proactive_message": "", "moderation_system_prompt": "You are a moderator tasked with identifying whether a prompt from a user is appropriate or inappropriate. ", "enable_moderation": false, "enable_openai_assistant": false, "enable_total_grounding": false, "guided_prompt_instructions": "you are...", "enable_guided_prompts": true, "mcp_servers": [ {} ], "enable_suggested_prompts": true, "safety_system_prompt": "You are a moderator tasked with identifying whether a message from an ai model to a user is is appropriate or inappropriate. If the message is immoral or contains abusive words, insults, damaging content, and law breaking acts, etc it should be deemed inappropriate. Otherwise it is deemed appropriate.", "safety_response": "Sorry, the AI model generated an inappropriate response. Kindly refine your prompt or try again with a different prompt.", "enable_safety_system": false, "categories": [ {} ], "google_voice": { "id": 1, "name": "en-us", "provider": "google", "language": "English (US)", "description": "A deep male voice", "audio_url": "https://public.storage/iblm-public/audio/en-us-1.mp3" }, "openai_voice": { "id": 2, "name": "en-us-female", "provider": "openai", "language": "English (US)", "description": "A soft female voice.", "audio_url": "https://public.storage/iblm-public/audio/en-us-1.mp3" }, "created_by": "system" }
