ibl-data-manager (3.59.0-ai-plus)

API for iblai

Servers
Mock server

https://docs.ibl.ai/_mock/apis/ibl/

https://base.manager.iblai.app/

ai-mentor

Operations

ai_mentor_orgs_users_list

Request

Retrieve a list of mentors.

Returns:

  • List of mentors matching the filters.
Security
PlatformApiKeyAuthentication
Path
org (string, required)
user_id (string, required)
Query
department_id (integer): Department to filter by.
filter_by (string, non-empty): Filter options include date and name; the default is date.
metadata_key (string, non-empty): Metadata key to query by.
metadata_value (string, non-empty): Metadata value to filter for.
page (integer): A page number within the paginated result set.
page_size (integer): Number of results to return per page.
return_session_information (boolean): Whether session information should be included in the mentor data.
visibility (string, non-empty): Visibility type to query by.

curl -i -X GET \
  'https://docs.ibl.ai/_mock/apis/ibl/api/ai-mentor/orgs/{org}/users/{user_id}/?department_id=0&filter_by=string&metadata_key=string&metadata_value=string&page=0&page_size=0&return_session_information=true&visibility=string' \
  -H 'Authorization: YOUR_API_KEY_HERE'
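
For comparison, the same request can be made from Python. This is a minimal sketch using the requests library; the base URL, org, user_id, and API key values are placeholders, the query parameters are optional, and the Authorization header simply mirrors the curl example above.

import requests

BASE_URL = "https://base.manager.iblai.app"  # or the mock server listed above
API_KEY = "YOUR_API_KEY_HERE"                # placeholder credential

org = "main"              # placeholder org
user_id = "example-user"  # placeholder user id

# List mentors, filtering by name and requesting 10 results per page.
response = requests.get(
    f"{BASE_URL}/api/ai-mentor/orgs/{org}/users/{user_id}/",
    headers={"Authorization": API_KEY},
    params={"filter_by": "name", "page": 1, "page_size": 10},
)
response.raise_for_status()
data = response.json()
print(data["count"], "mentors found")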

Responses

Body: application/json
count (integer, required). Example: 123
next (string or null, uri). Example: "http://api.example.org/accounts/?page=4"
previous (string or null, uri). Example: "http://api.example.org/accounts/?page=2"
results (Array of Mentor objects, required)
results[].name (string, <= 255 characters, required)
results[].unique_id (string, uuid)
results[].flow (any, required): The langflow JSON for the mentor.
results[].slug (string, <= 255 characters, pattern ^[-a-zA-Z0-9_]+$)
results[].platform (string, <= 255 characters)
results[].allow_anonymous (boolean)
results[].metadata (any or null)
results[].enable_moderation (boolean)
results[].enable_post_processing_system (boolean)
results[].enable_openai_assistant (boolean): (Deprecated) Set the template mentor to openai-agent instead.
results[].enable_total_grounding (boolean): Whether to force the mentor to only use information within the provided documents.
results[].enable_suggested_prompts (boolean): Whether to show suggested prompts for the mentor. Note: suggested prompts are created by tenant admins.
results[].enable_guided_prompts (boolean): Whether to show guided prompts for the mentor. Note: guided prompts are generated by an LLM based on chat history.
results[].guided_prompt_instructions (string): Instructions that determine how prompt suggestions are generated.
results[].google_voice (integer or null)
results[].openai_voice (integer or null)
results[].categories (Array of MentorCategory objects, required)
results[].categories[].id (integer, read-only, required)
results[].categories[].name (string, <= 255 characters, required)
results[].categories[].description (string or null, <= 255 characters)
results[].categories[].category_group (integer or null)
results[].categories[].audience (MentorAudience object, required)
results[].categories[].audience.id (integer, read-only, required)
results[].categories[].audience.name (string, <= 255 characters, required)
results[].categories[].audience.description (string or null, <= 255 characters)
results[].categories[].audiences (Array of MentorAudience objects, required)
results[].categories[].audiences[].id (integer, read-only, required)
results[].categories[].audiences[].name (string, <= 255 characters, required)
results[].categories[].audiences[].description (string or null, <= 255 characters)
results[].proactive_prompt (string): Prompt template used to start a conversation with the user when greeting_type is proactive_prompt. This is sent to the LLM so it can respond naturally.
results[].moderation_system_prompt (string): The prompt for the moderation system. This prompt must clearly distinguish between 'Appropriate' and 'Not Appropriate' queries.
results[].post_processing_prompt (string): Prompt used to alter or modify the final LLM response into any desired form.
results[].moderation_response (string): Feedback returned to the user when their prompt is deemed inappropriate.
results[].safety_system_prompt (string): Prompt used to check whether the model's response is appropriate.
results[].safety_response (string): Feedback given to the user when the model generates an inappropriate response.
results[].enable_safety_system (boolean)
results[].proactive_response (string): Response used to start a conversation with the user.
results[].greeting_method (string). Enum: "proactive_prompt", "proactive_response"
  • proactive_prompt - Proactive Prompt
  • proactive_response - Proactive Response
results[].call_configuration (CallConfiguration object)
results[].mcp_servers (Array of MCPServer objects, required)
results[].mcp_servers[].id (integer, read-only, required)
results[].mcp_servers[].platform (integer, read-only, required)
results[].mcp_servers[].name (string, <= 255 characters, required)
results[].mcp_servers[].url (string, uri, <= 200 characters, required): The URL of the MCP server.
results[].mcp_servers[].transport (string, TransportEnum). Enum: "sse", "websocket"
  • sse - Sse
  • websocket - Websocket
results[].mcp_servers[].headers (any): Headers to send to the MCP server. Useful for authentication.
results[].mcp_servers[].platform_key (string, read-only, required)
results[].mcp_servers[].created_at (string, date-time, read-only, required)
results[].mcp_servers[].updated_at (string, date-time, read-only, required)
results[].last_accessed_by (integer or null, 0 .. 2147483647): edX user ID.
results[].recently_accessed_at (string or null, date-time)
results[].created_by (string or null, <= 255 characters)
results[].created_at (string or null, date-time, read-only, required)
results[].updated_at (string or null, date-time, read-only, required)
Response
application/json
{ "count": 123, "next": "http://api.example.org/accounts/?page=4", "previous": "http://api.example.org/accounts/?page=2", "results": [ [] ] }

ai_mentor_orgs_users_create

Request

Create a new mentor.

Body Parameters:

  • name: Mentor name.
  • unique_id: Unique identifier.
  • platform_key: Associated platform.
  • metadata: Additional mentor attributes.
Security
PlatformApiKeyAuthentication
Path
org (string, required)
user_id (string, required)
Query
department_id (integer): Department to filter by.
filter_by (string, non-empty): Filter options include date and name; the default is date.
metadata_key (string, non-empty): Metadata key to query by.
metadata_value (string, non-empty): Metadata value to filter for.
return_session_information (boolean): Whether session information should be included in the mentor data.
visibility (string, non-empty): Visibility type to query by.

Body (required)
name (string, <= 255 characters, required)
unique_id (string, uuid)
flow (any, required): The langflow JSON for the mentor.
slug (string, <= 255 characters, pattern ^[-a-zA-Z0-9_]+$)
platform (string, <= 255 characters)
allow_anonymous (boolean)
metadata (any or null)
enable_moderation (boolean)
enable_post_processing_system (boolean)
enable_openai_assistant (boolean): (Deprecated) Set the template mentor to openai-agent instead.
enable_total_grounding (boolean): Whether to force the mentor to only use information within the provided documents.
enable_suggested_prompts (boolean): Whether to show suggested prompts for the mentor. Note: suggested prompts are created by tenant admins.
enable_guided_prompts (boolean): Whether to show guided prompts for the mentor. Note: guided prompts are generated by an LLM based on chat history.
google_voice (integer or null)
openai_voice (integer or null)
guided_prompt_instructions (string): Instructions that determine how prompt suggestions are generated.
categories (Array of integers)
proactive_prompt (string): Prompt template used to start a conversation with the user when greeting_type is proactive_prompt. This is sent to the LLM so it can respond naturally.
moderation_system_prompt (string): The prompt for the moderation system. This prompt must clearly distinguish between 'Appropriate' and 'Not Appropriate' queries.
post_processing_prompt (string): Prompt used to alter or modify the final LLM response into any desired form.
moderation_response (string): Feedback returned to the user when their prompt is deemed inappropriate.
safety_system_prompt (string): Prompt used to check whether the model's response is appropriate.
safety_response (string): Feedback given to the user when the model generates an inappropriate response.
disable_chathistory (boolean)
enable_safety_system (boolean)
proactive_response (string): Response used to start a conversation with the user.
mcp_servers (Array of integers)
greeting_method (string). Enum: "proactive_prompt", "proactive_response"
  • proactive_prompt - Proactive Prompt
  • proactive_response - Proactive Response
last_accessed_by (integer or null, 0 .. 2147483647): edX user ID.
recently_accessed_at (string or null, date-time)
created_by (string or null, <= 255 characters)

curl -i -X POST \
  'https://docs.ibl.ai/_mock/apis/ibl/api/ai-mentor/orgs/{org}/users/{user_id}/?department_id=0&filter_by=string&metadata_key=string&metadata_value=string&return_session_information=true&visibility=string' \
  -H 'Authorization: YOUR_API_KEY_HERE' \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "John Doe",
    "unique_id": "1234",
    "platform_key": "main",
    "metadata": {
      "specialty": "AI"
    }
  }'
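
The same create call from Python, as a minimal sketch with the requests library; the body mirrors the curl example (name, unique_id, platform_key, metadata), and the base URL, org, user_id, and API key are placeholders.

import requests

BASE_URL = "https://base.manager.iblai.app"  # placeholder; the mock server also works
API_KEY = "YOUR_API_KEY_HERE"

org, user_id = "main", "example-user"  # placeholders

payload = {
    "name": "John Doe",
    "unique_id": "1234",
    "platform_key": "main",
    "metadata": {"specialty": "AI"},
}

# POST the new mentor; requests sets Content-Type: application/json for json=.
resp = requests.post(
    f"{BASE_URL}/api/ai-mentor/orgs/{org}/users/{user_id}/",
    headers={"Authorization": API_KEY},
    json=payload,
)
resp.raise_for_status()
mentor = resp.json()
print("Created mentor:", mentor["name"])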

Responses

Body: application/json
name (string, <= 255 characters, required)
unique_id (string, uuid)
flow (any, required): The langflow JSON for the mentor.
slug (string, <= 255 characters, pattern ^[-a-zA-Z0-9_]+$)
platform (string, <= 255 characters)
allow_anonymous (boolean)
metadata (any or null)
enable_moderation (boolean)
enable_post_processing_system (boolean)
enable_openai_assistant (boolean): (Deprecated) Set the template mentor to openai-agent instead.
enable_total_grounding (boolean): Whether to force the mentor to only use information within the provided documents.
enable_suggested_prompts (boolean): Whether to show suggested prompts for the mentor. Note: suggested prompts are created by tenant admins.
enable_guided_prompts (boolean): Whether to show guided prompts for the mentor. Note: guided prompts are generated by an LLM based on chat history.
guided_prompt_instructions (string): Instructions that determine how prompt suggestions are generated.
google_voice (integer or null)
openai_voice (integer or null)
categories (Array of MentorCategory objects, required)
categories[].id (integer, read-only, required)
categories[].name (string, <= 255 characters, required)
categories[].description (string or null, <= 255 characters)
categories[].category_group (integer or null)
categories[].audience (MentorAudience object, required)
categories[].audience.id (integer, read-only, required)
categories[].audience.name (string, <= 255 characters, required)
categories[].audience.description (string or null, <= 255 characters)
categories[].audiences (Array of MentorAudience objects, required)
categories[].audiences[].id (integer, read-only, required)
categories[].audiences[].name (string, <= 255 characters, required)
categories[].audiences[].description (string or null, <= 255 characters)
proactive_prompt (string): Prompt template used to start a conversation with the user when greeting_type is proactive_prompt. This is sent to the LLM so it can respond naturally.
moderation_system_prompt (string): The prompt for the moderation system. This prompt must clearly distinguish between 'Appropriate' and 'Not Appropriate' queries.
post_processing_prompt (string): Prompt used to alter or modify the final LLM response into any desired form.
moderation_response (string): Feedback returned to the user when their prompt is deemed inappropriate.
safety_system_prompt (string): Prompt used to check whether the model's response is appropriate.
safety_response (string): Feedback given to the user when the model generates an inappropriate response.
enable_safety_system (boolean)
proactive_response (string): Response used to start a conversation with the user.
greeting_method (string). Enum: "proactive_prompt", "proactive_response"
  • proactive_prompt - Proactive Prompt
  • proactive_response - Proactive Response
call_configuration (CallConfiguration object)
mcp_servers (Array of MCPServer objects, required)
mcp_servers[].id (integer, read-only, required)
mcp_servers[].platform (integer, read-only, required)
mcp_servers[].name (string, <= 255 characters, required)
mcp_servers[].url (string, uri, <= 200 characters, required): The URL of the MCP server.
mcp_servers[].transport (string, TransportEnum). Enum: "sse", "websocket"
  • sse - Sse
  • websocket - Websocket
mcp_servers[].headers (any): Headers to send to the MCP server. Useful for authentication.
mcp_servers[].platform_key (string, read-only, required)
mcp_servers[].created_at (string, date-time, read-only, required)
mcp_servers[].updated_at (string, date-time, read-only, required)
last_accessed_by (integer or null, 0 .. 2147483647): edX user ID.
recently_accessed_at (string or null, date-time)
created_by (string or null, <= 255 characters)
created_at (string or null, date-time, read-only, required)
updated_at (string or null, date-time, read-only, required)
Response
application/json
{ "name": "John Doe", "unique_id": "1234", "platform_key": "main", "metadata": { "specialty": "AI" } }

ai_mentor_orgs_users_retrieve

Request

Retrieve a single mentor by name.

This operation belongs to the mentor management API, which provides endpoints to retrieve, create, update, and delete mentor data.

Permissions:

  • Accessible to both tenant admins and students.
Security
PlatformApiKeyAuthentication
Path
name (string, required)
org (string, required)
user_id (string, required)
Query
department_id (integer): Department to filter by.
filter_by (string, non-empty): Filter options include date and name; the default is date.
metadata_key (string, non-empty): Metadata key to query by.
metadata_value (string, non-empty): Metadata value to filter for.
return_session_information (boolean): Whether session information should be included in the mentor data.
visibility (string, non-empty): Visibility type to query by.

curl -i -X GET \
  'https://docs.ibl.ai/_mock/apis/ibl/api/ai-mentor/orgs/{org}/users/{user_id}/{name}/?department_id=0&filter_by=string&metadata_key=string&metadata_value=string&return_session_information=true&visibility=string' \
  -H 'Authorization: YOUR_API_KEY_HERE'
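
The equivalent retrieve call from Python, as a minimal sketch with the requests library; the base URL, org, user_id, mentor name, and API key are placeholders, and the query parameter is optional.

import requests

BASE_URL = "https://base.manager.iblai.app"  # placeholder
API_KEY = "YOUR_API_KEY_HERE"

org, user_id, name = "main", "example-user", "example-mentor"  # placeholders

resp = requests.get(
    f"{BASE_URL}/api/ai-mentor/orgs/{org}/users/{user_id}/{name}/",
    headers={"Authorization": API_KEY},
    params={"return_session_information": "true"},  # optional
)
resp.raise_for_status()
mentor = resp.json()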

Responses

Body: application/json
name (string, <= 255 characters, required)
unique_id (string, uuid)
flow (any, required): The langflow JSON for the mentor.
slug (string, <= 255 characters, pattern ^[-a-zA-Z0-9_]+$)
platform (string, <= 255 characters)
allow_anonymous (boolean)
metadata (any or null)
enable_moderation (boolean)
enable_post_processing_system (boolean)
enable_openai_assistant (boolean): (Deprecated) Set the template mentor to openai-agent instead.
enable_total_grounding (boolean): Whether to force the mentor to only use information within the provided documents.
enable_suggested_prompts (boolean): Whether to show suggested prompts for the mentor. Note: suggested prompts are created by tenant admins.
enable_guided_prompts (boolean): Whether to show guided prompts for the mentor. Note: guided prompts are generated by an LLM based on chat history.
guided_prompt_instructions (string): Instructions that determine how prompt suggestions are generated.
google_voice (integer or null)
openai_voice (integer or null)
categories (Array of MentorCategory objects, required)
categories[].id (integer, read-only, required)
categories[].name (string, <= 255 characters, required)
categories[].description (string or null, <= 255 characters)
categories[].category_group (integer or null)
categories[].audience (MentorAudience object, required)
categories[].audience.id (integer, read-only, required)
categories[].audience.name (string, <= 255 characters, required)
categories[].audience.description (string or null, <= 255 characters)
categories[].audiences (Array of MentorAudience objects, required)
categories[].audiences[].id (integer, read-only, required)
categories[].audiences[].name (string, <= 255 characters, required)
categories[].audiences[].description (string or null, <= 255 characters)
proactive_prompt (string): Prompt template used to start a conversation with the user when greeting_type is proactive_prompt. This is sent to the LLM so it can respond naturally.
moderation_system_prompt (string): The prompt for the moderation system. This prompt must clearly distinguish between 'Appropriate' and 'Not Appropriate' queries.
post_processing_prompt (string): Prompt used to alter or modify the final LLM response into any desired form.
moderation_response (string): Feedback returned to the user when their prompt is deemed inappropriate.
safety_system_prompt (string): Prompt used to check whether the model's response is appropriate.
safety_response (string): Feedback given to the user when the model generates an inappropriate response.
enable_safety_system (boolean)
proactive_response (string): Response used to start a conversation with the user.
greeting_method (string). Enum: "proactive_prompt", "proactive_response"
  • proactive_prompt - Proactive Prompt
  • proactive_response - Proactive Response
call_configuration (CallConfiguration object)
mcp_servers (Array of MCPServer objects, required)
mcp_servers[].id (integer, read-only, required)
mcp_servers[].platform (integer, read-only, required)
mcp_servers[].name (string, <= 255 characters, required)
mcp_servers[].url (string, uri, <= 200 characters, required): The URL of the MCP server.
mcp_servers[].transport (string, TransportEnum). Enum: "sse", "websocket"
  • sse - Sse
  • websocket - Websocket
mcp_servers[].headers (any): Headers to send to the MCP server. Useful for authentication.
mcp_servers[].platform_key (string, read-only, required)
mcp_servers[].created_at (string, date-time, read-only, required)
mcp_servers[].updated_at (string, date-time, read-only, required)
last_accessed_by (integer or null, 0 .. 2147483647): edX user ID.
recently_accessed_at (string or null, date-time)
created_by (string or null, <= 255 characters)
created_at (string or null, date-time, read-only, required)
updated_at (string or null, date-time, read-only, required)
Response
application/json
{ "name": "string", "unique_id": "677cf8c4-9b37-4a4b-b000-2ee947357c3a", "flow": null, "slug": "string", "platform": "string", "allow_anonymous": true, "metadata": null, "enable_moderation": true, "enable_post_processing_system": true, "enable_openai_assistant": true, "enable_total_grounding": true, "enable_suggested_prompts": true, "enable_guided_prompts": true, "guided_prompt_instructions": "string", "google_voice": 0, "openai_voice": 0, "categories": [ {} ], "proactive_prompt": "string", "moderation_system_prompt": "string", "post_processing_prompt": "string", "moderation_response": "string", "safety_system_prompt": "string", "safety_response": "string", "enable_safety_system": true, "proactive_response": "string", "greeting_method": "proactive_prompt", "call_configuration": { "id": 0, "mentor": 0, "mode": "realtime", "tts_provider": "openai", "stt_provider": "openai", "llm_provider": "openai", "use_function_calling_for_rag": true, "google_voice": {}, "openai_voice": {}, "openai_voice_id": 0, "google_voice_id": 0, "enable_video": true, "platform_key": "string" }, "mcp_servers": [ {} ], "last_accessed_by": 2147483647, "recently_accessed_at": "2019-08-24T14:15:22Z", "created_by": "string", "created_at": "2019-08-24T14:15:22Z", "updated_at": "2019-08-24T14:15:22Z" }
