API for iblai
- ai_mentor_orgs_users_periodic_agents_statistics_retrieve
ibl-data-manager (4.84.1-ai-plus)
Request
Endpoint to create, view, update, and delete periodic agents.
Periodic agents are schedulers issued for mentors. Each is configured with an input prompt (if any) and a cron schedule that triggers the periodic agent.
Access to these endpoints is restricted to platform admins and tenant administrators.
Session information for running the periodic agent will be generated with the credentials of the user (platform administrator) who created the agent.
A periodic agent may have a callback_url and callback_secret. When a callback_url is set for a periodic agent, a POST request with data entries containing the log and timestamp of the run will be made to the callback_url provided. Here is the payload structure:

{
  "timestamp": "timestamp when the run completed",
  "status": "status of the periodic agent",
  "prompt": "input prompt to the agent",
  "agent_output": "final response of the agent",
  "log": "full agent run log",
  "log_id": "periodic agent log id"
}

The payload is signed using the callback_secret provided.
You can validate the payload using the X-Hub-Signature-256 request header, which is a SHA-256 HMAC hex digest of the payload body computed with the callback_secret:
import hmac
import hashlib

from django.http import HttpRequest


def validate_payload(request: HttpRequest, callback_secret: str) -> bool:
    # Get the X-Hub-Signature-256 header from the request
    received_signature = request.META.get("HTTP_X_HUB_SIGNATURE_256", "")
    if not received_signature.startswith("sha256="):
        # Invalid signature format
        return False
    received_signature = received_signature[len("sha256="):]
    try:
        # Get the raw request body
        payload = request.body
        # Compute the expected signature using the callback_secret
        expected_signature = hmac.new(
            callback_secret.encode(), payload, hashlib.sha256
        ).hexdigest()
        # Constant-time comparison: signatures match iff the payload is genuine
        return hmac.compare_digest(received_signature, expected_signature)
    except Exception:
        # Any error during validation is treated as an invalid payload
        return False
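To see the signature scheme end to end, the following self-contained sketch computes the X-Hub-Signature-256 header the way the sender would and verifies it the way validate_payload above does. It needs no web framework; the secret and payload values are made up for illustration and are not real credentials.

```python
import hashlib
import hmac
import json


def sign_payload(body: bytes, secret: str) -> str:
    """Compute the X-Hub-Signature-256 header value for a raw payload body."""
    digest = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return f"sha256={digest}"


def verify_payload(body: bytes, secret: str, header: str) -> bool:
    """Verify a received header against the raw request body."""
    if not header.startswith("sha256="):
        return False
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(header[len("sha256="):], expected)


# Illustrative values only -- your real callback_secret is set on the agent.
secret = "my-callback-secret"
body = json.dumps({"status": "SUCCESS", "log_id": 42}).encode()

header = sign_payload(body, secret)
assert verify_payload(body, secret, header)             # genuine payload
assert not verify_payload(body + b"x", secret, header)  # tampered payload
```

Note that verification must run against the raw request body bytes, not a re-serialized copy: re-encoding the JSON can reorder keys or change whitespace and produce a different digest.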
- Mock server: https://docs.ibl.ai/_mock/apis/ibl/api/ai-mentor/orgs/{org}/users/{user_id}/periodic-agents/{id}/
- https://base.manager.iblai.app/api/ai-mentor/orgs/{org}/users/{user_id}/periodic-agents/{id}/
curl -i -X DELETE \
  'https://docs.ibl.ai/_mock/apis/ibl/api/ai-mentor/orgs/{org}/users/{user_id}/periodic-agents/{id}/' \
  -H 'Authorization: YOUR_API_KEY_HERE'

- Mock server: https://docs.ibl.ai/_mock/apis/ibl/api/ai-mentor/orgs/{org}/users/{user_id}/periodic-agents/statistics/
- https://base.manager.iblai.app/api/ai-mentor/orgs/{org}/users/{user_id}/periodic-agents/statistics/
curl -i -X GET \
  'https://docs.ibl.ai/_mock/apis/ibl/api/ai-mentor/orgs/{org}/users/{user_id}/periodic-agents/statistics/?mentor_id=497f6eca-6276-4993-bfeb-53cbbbba6f08' \
  -H 'Authorization: YOUR_API_KEY_HERE'

Example response:

{
  "total_tasks": 0,
  "succeeded_tasks": 0,
  "failed_tasks": 0,
  "running_tasks": 0,
  "pending_tasks": 0
}
- Mock server: https://docs.ibl.ai/_mock/apis/ibl/api/ai-mentor/orgs/{org}/users/{user_id}/pin-message/
- https://base.manager.iblai.app/api/ai-mentor/orgs/{org}/users/{user_id}/pin-message/
curl -i -X GET \
  'https://docs.ibl.ai/_mock/apis/ibl/api/ai-mentor/orgs/{org}/users/{user_id}/pin-message/?session_id=string' \
  -H 'Authorization: YOUR_API_KEY_HERE'

Whether to force the mentor to only use information within the provided documents.
Whether to show suggested prompts for the mentor or not. Note: Suggested prompts are created by tenant admins.
Whether to show guided prompts for the mentor or not. Note: Guided prompts are created with an LLM based on chat history.
Instructions to determine how prompt suggestions are generated.
Prompt template used to start a conversation with the user when greeting_type is proactive_prompt. This will be sent to the LLM so it can respond naturally.
Allow embedded mentor to read content on the embedded web page.
Placeholder to be shown in the input text area when the mentor is used.
The prompt for the moderation system. This prompt must clearly distinguish between 'Appropriate' and 'Not Appropriate' queries.
Prompt used to alter or modify the final LLM response into any desired form.
Desired feedback to return to the user when their prompt is deemed inappropriate.
Prompt to check whether the model's response is appropriate or not.
Feedback given to the user when a model generates an inappropriate response.
- sse - SSE
- websocket - WebSocket
- streamable_http - Streamable HTTP
Headers to send to the MCP server. Useful for authentication.
Featured MCP servers will be accessible to all other tenants.