ibl-data-manager (3.59.0-ai-plus)
API for iblai
- Mock server
https://docs.ibl.ai/_mock/apis/ibl/api/ai-mentor/orgs/{org}/users/{user_id}/
https://base.manager.iblai.app/api/ai-mentor/orgs/{org}/users/{user_id}/
curl -i -X GET \
'https://docs.ibl.ai/_mock/apis/ibl/api/ai-mentor/orgs/{org}/users/{user_id}/?department_id=0&filter_by=string&metadata_key=string&metadata_value=string&page=0&page_size=0&return_session_information=true&visibility=string' \
-H 'Authorization: YOUR_API_KEY_HERE'
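The same list request can be made from Python. The following is a minimal sketch using the requests library; the org, user_id, and key values are placeholders for illustration and the query parameters mirror the curl example above.
import requests

# Placeholder values for illustration only; substitute your own tenant, user, and key.
ORG = "main"
USER_ID = "some-user-id"
API_KEY = "YOUR_API_KEY_HERE"

url = f"https://base.manager.iblai.app/api/ai-mentor/orgs/{ORG}/users/{USER_ID}/"

# Query parameters mirror the curl example above; omit any you do not need.
params = {
    "page": 1,
    "page_size": 10,
    "return_session_information": "true",
}

response = requests.get(url, params=params, headers={"Authorization": API_KEY}, timeout=30)
response.raise_for_status()
print(response.json())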
Whether to force the mentor to only use information within the provided documents.
Whether to show suggested prompts for the mentor or not. Note: Suggested prompts are created by tenant admins.
Whether to show guided prompts for the mentor or not. Note: Guided prompts are created with an LLM based on chat history.
Instructions to determine how prompt suggestions are generated.
Prompt template used to start a conversation with the user when greeting_type is proactive_prompt. This will be sent to the LLM so it can respond naturally.
The prompt for the moderation system. This prompt must clearly distinguish between 'Appropriate' and 'Not Appropriate' queries.
Prompt used to alter or modify the final LLM response into any desired form.
Desired feedback to return to the user when their prompt is deemed inappropriate.
Prompt to check whether the model's response is appropriate or not.
Feedback given to the user when a model generates an inappropriate response.
- proactive_prompt (Proactive Prompt)
- proactive_response (Proactive Response)
- sse (Sse)
- websocket (Websocket)
{ "count": 123, "next": "http://api.example.org/accounts/?page=4", "previous": "http://api.example.org/accounts/?page=2", "results": [ [ … ] ] }
- application/json
- application/x-www-form-urlencoded
- multipart/form-data
Whether to force the mentor to only use information within the provided documents.
Whether to show suggested prompts for the mentor or not. Note: Suggested prompts are created by tenant admins.
Whether to show guided prompts for the mentor or not. Note: Guided prompts are created with an LLM based on chat history.
Prompt template used to start a conversation with the user when greeting_type is proactive_prompt. This will be sent to the LLM so it can respond naturally.
The prompt for the moderation system. This prompt must clearly distinguish between 'Appropriate' and 'Not Appropriate' queries.
Prompt used to alter or modify the final LLM response into any desired form.
Desired feedback to return to the user when their prompt is deemed inappropriate.
- proactive_prompt (Proactive Prompt)
- proactive_response (Proactive Response)
- Mock server
https://docs.ibl.ai/_mock/apis/ibl/api/ai-mentor/orgs/{org}/users/{user_id}/
https://base.manager.iblai.app/api/ai-mentor/orgs/{org}/users/{user_id}/
curl -i -X POST \
'https://docs.ibl.ai/_mock/apis/ibl/api/ai-mentor/orgs/{org}/users/{user_id}/?department_id=0&filter_by=string&metadata_key=string&metadata_value=string&return_session_information=true&visibility=string' \
-H 'Authorization: YOUR_API_KEY_HERE' \
-H 'Content-Type: application/json' \
-d '{
"name": "John Doe",
"unique_id": "1234",
"platform_key": "main",
"metadata": {
"specialty": "AI"
}
}'
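The same create call in Python, shown as a minimal sketch with the illustrative payload from the curl command above; the path values are placeholders.
import requests

# Placeholder path parameters; replace with your own org and user_id.
url = "https://base.manager.iblai.app/api/ai-mentor/orgs/main/users/some-user-id/"

payload = {
    "name": "John Doe",
    "unique_id": "1234",
    "platform_key": "main",
    "metadata": {"specialty": "AI"},
}

# json= sends an application/json body, matching the curl -d example;
# form-encoded and multipart bodies are also accepted per the content types listed earlier.
response = requests.post(
    url,
    json=payload,
    headers={"Authorization": "YOUR_API_KEY_HERE"},
    timeout=30,
)
response.raise_for_status()
print(response.json())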
Whether to force the mentor to only use information within the provided documents.
Whether to show suggested prompts for the mentor or not. Note: Suggested prompts are created by tenant admins.
Whether to show guided prompts for the mentor or not. Note: Guided prompts are created with an LLM based on chat history.
Prompt template used to start a conversation with the user when greeting_type is proactive_prompt. This will be sent to the LLM so it can respond naturally.
The prompt for the moderation system. This prompt must clearly distinguish between 'Appropriate' and 'Not Appropriate' queries.
Prompt used to alter or modify the final LLM response into any desired form.
Desired feedback to return to the user when their prompt is deemed inappropriate.
- proactive_prompt (Proactive Prompt)
- proactive_response (Proactive Response)
{ "name": "John Doe", "unique_id": "1234", "platform_key": "main", "metadata": { "specialty": "AI" } }
- Mock server
https://docs.ibl.ai/_mock/apis/ibl/api/ai-mentor/orgs/{org}/users/{user_id}/{name}/
https://base.manager.iblai.app/api/ai-mentor/orgs/{org}/users/{user_id}/{name}/
curl -i -X GET \
'https://docs.ibl.ai/_mock/apis/ibl/api/ai-mentor/orgs/{org}/users/{user_id}/{name}/?department_id=0&filter_by=string&metadata_key=string&metadata_value=string&return_session_information=true&visibility=string' \
-H 'Authorization: YOUR_API_KEY_HERE'
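A matching Python sketch for fetching a single mentor by name; all path values are placeholders and the requests library is assumed.
import requests

# Placeholder path parameters for illustration only.
org, user_id, name = "main", "some-user-id", "my-mentor"

url = f"https://base.manager.iblai.app/api/ai-mentor/orgs/{org}/users/{user_id}/{name}/"

response = requests.get(url, headers={"Authorization": "YOUR_API_KEY_HERE"}, timeout=30)
response.raise_for_status()
mentor = response.json()
print(mentor["name"], mentor.get("greeting_method"))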
Whether to force the mentor to only use information within the provided documents.
Whether to show suggested prompts for the mentor or not. Note: Suggested prompts are created by tenant admins.
Whether to show guided prompts for the mentor or not. Note: Guided prompts are created with an LLM based on chat history.
Prompt template used to start a conversation with the user when greeting_type is proactive_prompt. This will be sent to the LLM so it can respond naturally.
The prompt for the moderation system. This prompt must clearly distinguish between 'Appropriate' and 'Not Appropriate' queries.
Prompt used to alter or modify the final LLM response into any desired form.
Desired feedback to return to the user when their prompt is deemed inappropriate.
- proactive_prompt (Proactive Prompt)
- proactive_response (Proactive Response)
{ "name": "string", "unique_id": "677cf8c4-9b37-4a4b-b000-2ee947357c3a", "flow": null, "slug": "string", "platform": "string", "allow_anonymous": true, "metadata": null, "enable_moderation": true, "enable_post_processing_system": true, "enable_openai_assistant": true, "enable_total_grounding": true, "enable_suggested_prompts": true, "enable_guided_prompts": true, "guided_prompt_instructions": "string", "google_voice": 0, "openai_voice": 0, "categories": [ { … } ], "proactive_prompt": "string", "moderation_system_prompt": "string", "post_processing_prompt": "string", "moderation_response": "string", "safety_system_prompt": "string", "safety_response": "string", "enable_safety_system": true, "proactive_response": "string", "greeting_method": "proactive_prompt", "call_configuration": { "id": 0, "mentor": 0, "mode": "realtime", "tts_provider": "openai", "stt_provider": "openai", "llm_provider": "openai", "use_function_calling_for_rag": true, "google_voice": { … }, "openai_voice": { … }, "openai_voice_id": 0, "google_voice_id": 0, "enable_video": true, "platform_key": "string" }, "mcp_servers": [ { … } ], "last_accessed_by": 2147483647, "recently_accessed_at": "2019-08-24T14:15:22Z", "created_by": "string", "created_at": "2019-08-24T14:15:22Z", "updated_at": "2019-08-24T14:15:22Z" }