ibl-data-manager (3.59.0-ai-plus)
API for iblai
Request
API ViewSet for managing call configurations.
This ViewSet provides endpoints to retrieve, create, and update call configurations for mentors. Call configurations define how voice calls are handled for a mentor.
Permissions:
- Accessible only to platform admins.
Endpoints:
GET /api/org/{org}/mentors/{mentor_pk}/call-configurations/
- List all call configurations for a specific mentor
- Returns paginated list of call configurations
POST /api/org/{org}/mentors/{mentor_pk}/call-configurations/
- Create a new call configuration for a mentor
- Requires call configuration data in request body
PUT /api/org/{org}/mentors/{mentor_pk}/call-configurations/{id}/
- Update an existing call configuration
- Requires call configuration data in request body
Query Parameters:
- mentor: Filter configurations by mentor ID
Returns:
- CallConfigurationSerializer data
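As a rough illustration of the list endpoint and the mentor query parameter described above, here is a minimal Python sketch using the requests library. The path is copied from the docstring; the base URL, org, mentor id, and token are placeholders, and the paginated response shape is assumed rather than documented here.
import requests

BASE_URL = "https://base.manager.iblai.app"   # placeholder; adjust to your deployment
ORG = "your-org"                              # placeholder organization
MENTOR_PK = 7                                 # placeholder mentor id
API_TOKEN = "YOUR_API_KEY_HERE"

# List call configurations for a mentor, optionally filtered with ?mentor=<id>
url = f"{BASE_URL}/api/org/{ORG}/mentors/{MENTOR_PK}/call-configurations/"
response = requests.get(url, headers={"Authorization": API_TOKEN}, params={"mentor": MENTOR_PK})
response.raise_for_status()
print(response.json())  # paginated list of call configurations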
- Mock server: https://docs.ibl.ai/_mock/apis/ibl/api/ai-mentor/orgs/{org}/users/{user_id}/call-configurations/{id}/
- Live server: https://base.manager.iblai.app/api/ai-mentor/orgs/{org}/users/{user_id}/call-configurations/{id}/
curl -i -X GET \
'https://docs.ibl.ai/_mock/apis/ibl/api/ai-mentor/orgs/{org}/users/{user_id}/call-configurations/{id}/' \
-H 'Authorization: YOUR_API_KEY_HERE'
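The same request in Python, as a minimal sketch with the requests library; the org, user_id, configuration id, and token are placeholders to substitute with real values.
import requests

BASE_URL = "https://base.manager.iblai.app"               # or the mock server above
ORG, USER_ID, CONFIG_ID = "your-org", "your-user-id", 1   # placeholders
API_TOKEN = "YOUR_API_KEY_HERE"

# Retrieve one call configuration by id
url = f"{BASE_URL}/api/ai-mentor/orgs/{ORG}/users/{USER_ID}/call-configurations/{CONFIG_ID}/"
response = requests.get(url, headers={"Authorization": API_TOKEN})
response.raise_for_status()
print(response.json())  # CallConfigurationSerializer data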
Response fields:
- mode: realtime or inference
- tts_provider: openai, google, or elevenlabs
- stt_provider: openai, google, deepgram, or cartesia
- use_function_calling_for_rag: whether to use function calls in the agent or force RAG calls before LLM generation
Response sample:
{
  "id": 0,
  "mentor": 0,
  "mode": "realtime",
  "tts_provider": "openai",
  "stt_provider": "openai",
  "llm_provider": "openai",
  "use_function_calling_for_rag": true,
  "google_voice": {
    "id": 0,
    "name": "string",
    "provider": "openai",
    "language": "string",
    "description": "string",
    "audio_url": "string"
  },
  "openai_voice": {
    "id": 0,
    "name": "string",
    "provider": "openai",
    "language": "string",
    "description": "string",
    "audio_url": "string"
  },
  "openai_voice_id": 0,
  "google_voice_id": 0,
  "enable_video": true,
  "platform_key": "string"
}
Request
API ViewSet for managing call configurations.
This ViewSet provides endpoints to retrieve, create, and update call configurations for mentors. Call configurations define how voice calls are handled for a mentor.
Permissions:
- Accessible only to platform admins.
Endpoints:
GET /api/org/{org}/mentors/{mentor_pk}/call-configurations/
- List all call configurations for a specific mentor
- Returns paginated list of call configurations
POST /api/org/{org}/mentors/{mentor_pk}/call-configurations/
- Create a new call configuration for a mentor
- Requires call configuration data in request body
PUT /api/org/{org}/mentors/{mentor_pk}/call-configurations/{id}/
- Update an existing call configuration
- Requires call configuration data in request body
Query Parameters:
- mentor: Filter configurations by mentor ID
Returns:
- CallConfigurationSerializer data
Request body content types:
- application/json
- application/x-www-form-urlencoded
- multipart/form-data
Request body fields:
- mode: realtime or inference
- tts_provider: openai, google, or elevenlabs
- stt_provider: openai, google, deepgram, or cartesia
- use_function_calling_for_rag: whether to use function calls in the agent or force RAG calls before LLM generation
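To show how these fields combine, the snippet below assembles an example request body in Python; the mentor and voice ids are hypothetical placeholders chosen for illustration.
# Example request body built from the fields above (ids are placeholders)
call_configuration = {
    "mentor": 42,                        # mentor id (placeholder)
    "mode": "realtime",                  # "realtime" or "inference"
    "tts_provider": "elevenlabs",        # "openai", "google", or "elevenlabs"
    "stt_provider": "deepgram",          # "openai", "google", "deepgram", or "cartesia"
    "llm_provider": "openai",
    "use_function_calling_for_rag": True,
    "openai_voice_id": 1,                # placeholder voice id
    "google_voice_id": 1,                # placeholder voice id
    "enable_video": False,
}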
- Mock server: https://docs.ibl.ai/_mock/apis/ibl/api/ai-mentor/orgs/{org}/users/{user_id}/call-configurations/{id}/
- Live server: https://base.manager.iblai.app/api/ai-mentor/orgs/{org}/users/{user_id}/call-configurations/{id}/
curl -i -X PUT \
'https://docs.ibl.ai/_mock/apis/ibl/api/ai-mentor/orgs/{org}/users/{user_id}/call-configurations/{id}/' \
-H 'Authorization: YOUR_API_KEY_HERE' \
-H 'Content-Type: application/json' \
-d '{
"mentor": 0,
"mode": "realtime",
"tts_provider": "openai",
"stt_provider": "openai",
"llm_provider": "openai",
"use_function_calling_for_rag": true,
"google_voice": {
"name": "string",
"provider": "openai",
"language": "string",
"description": "string"
},
"openai_voice": {
"name": "string",
"provider": "openai",
"language": "string",
"description": "string"
},
"openai_voice_id": 0,
"google_voice_id": 0,
"enable_video": true
}'
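The equivalent update in Python, sketched with the requests library; path values and ids are placeholders, and the nested voice objects from the curl example are omitted for brevity. PUT performs a full update, so the body sends the writable fields listed above.
import requests

BASE_URL = "https://base.manager.iblai.app"               # or the mock server above
ORG, USER_ID, CONFIG_ID = "your-org", "your-user-id", 1   # placeholders
API_TOKEN = "YOUR_API_KEY_HERE"

url = f"{BASE_URL}/api/ai-mentor/orgs/{ORG}/users/{USER_ID}/call-configurations/{CONFIG_ID}/"
payload = {
    "mentor": 0,
    "mode": "realtime",
    "tts_provider": "openai",
    "stt_provider": "openai",
    "llm_provider": "openai",
    "use_function_calling_for_rag": True,
    "openai_voice_id": 0,
    "google_voice_id": 0,
    "enable_video": True,
}
# PUT replaces the configuration with the submitted body
response = requests.put(url, json=payload, headers={"Authorization": API_TOKEN})
response.raise_for_status()
print(response.json())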
Response fields:
- mode: realtime or inference
- tts_provider: openai, google, or elevenlabs
- stt_provider: openai, google, deepgram, or cartesia
- use_function_calling_for_rag: whether to use function calls in the agent or force RAG calls before LLM generation
Response sample:
{
  "id": 0,
  "mentor": 0,
  "mode": "realtime",
  "tts_provider": "openai",
  "stt_provider": "openai",
  "llm_provider": "openai",
  "use_function_calling_for_rag": true,
  "google_voice": {
    "id": 0,
    "name": "string",
    "provider": "openai",
    "language": "string",
    "description": "string",
    "audio_url": "string"
  },
  "openai_voice": {
    "id": 0,
    "name": "string",
    "provider": "openai",
    "language": "string",
    "description": "string",
    "audio_url": "string"
  },
  "openai_voice_id": 0,
  "google_voice_id": 0,
  "enable_video": true,
  "platform_key": "string"
}
Request
API ViewSet for managing call configurations.
This ViewSet provides endpoints to retrieve, create, and update call configurations for mentors. Call configurations define how voice calls are handled for a mentor.
Permissions:
- Accessible only to platform admins.
Endpoints:
GET /api/org/{org}/mentors/{mentor_pk}/call-configurations/
- List all call configurations for a specific mentor
- Returns paginated list of call configurations
POST /api/org/{org}/mentors/{mentor_pk}/call-configurations/
- Create a new call configuration for a mentor
- Requires call configuration data in request body
PUT /api/org/{org}/mentors/{mentor_pk}/call-configurations/{id}/
- Update an existing call configuration
- Requires call configuration data in request body
Query Parameters:
- mentor: Filter configurations by mentor ID
Returns:
- CallConfigurationSerializer data
Request body content types:
- application/json
- application/x-www-form-urlencoded
- multipart/form-data
Request body fields:
- mode: realtime or inference
- tts_provider: openai, google, or elevenlabs
- stt_provider: openai, google, deepgram, or cartesia
- use_function_calling_for_rag: whether to use function calls in the agent or force RAG calls before LLM generation
- Mock server: https://docs.ibl.ai/_mock/apis/ibl/api/ai-mentor/orgs/{org}/users/{user_id}/call-configurations/{id}/
- Live server: https://base.manager.iblai.app/api/ai-mentor/orgs/{org}/users/{user_id}/call-configurations/{id}/
curl -i -X PATCH \
'https://docs.ibl.ai/_mock/apis/ibl/api/ai-mentor/orgs/{org}/users/{user_id}/call-configurations/{id}/' \
-H 'Authorization: YOUR_API_KEY_HERE' \
-H 'Content-Type: application/json' \
-d '{
"mentor": 0,
"mode": "realtime",
"tts_provider": "openai",
"stt_provider": "openai",
"llm_provider": "openai",
"use_function_calling_for_rag": true,
"google_voice": {
"name": "string",
"provider": "openai",
"language": "string",
"description": "string"
},
"openai_voice": {
"name": "string",
"provider": "openai",
"language": "string",
"description": "string"
},
"openai_voice_id": 0,
"google_voice_id": 0,
"enable_video": true
}'
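For a partial update, PATCH lets you send only the fields you want to change; a minimal sketch with the requests library, again with placeholder path values and illustrative field choices.
import requests

BASE_URL = "https://base.manager.iblai.app"               # or the mock server above
ORG, USER_ID, CONFIG_ID = "your-org", "your-user-id", 1   # placeholders
API_TOKEN = "YOUR_API_KEY_HERE"

url = f"{BASE_URL}/api/ai-mentor/orgs/{ORG}/users/{USER_ID}/call-configurations/{CONFIG_ID}/"
# PATCH is a partial update: only the listed fields change
response = requests.patch(
    url,
    json={"tts_provider": "google", "enable_video": False},
    headers={"Authorization": API_TOKEN},
)
response.raise_for_status()
print(response.json())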
Response fields:
- mode: realtime or inference
- tts_provider: openai, google, or elevenlabs
- stt_provider: openai, google, deepgram, or cartesia
- use_function_calling_for_rag: whether to use function calls in the agent or force RAG calls before LLM generation
Response sample:
{
  "id": 0,
  "mentor": 0,
  "mode": "realtime",
  "tts_provider": "openai",
  "stt_provider": "openai",
  "llm_provider": "openai",
  "use_function_calling_for_rag": true,
  "google_voice": {
    "id": 0,
    "name": "string",
    "provider": "openai",
    "language": "string",
    "description": "string",
    "audio_url": "string"
  },
  "openai_voice": {
    "id": 0,
    "name": "string",
    "provider": "openai",
    "language": "string",
    "description": "string",
    "audio_url": "string"
  },
  "openai_voice_id": 0,
  "google_voice_id": 0,
  "enable_video": true,
  "platform_key": "string"
}