# ai_finetuning_v1_org_user_trainings_create

Mixin that includes the StudentTokenAuthentication and IsPlatformAdmin.

Endpoint: POST /api/ai-finetuning/v1/org/{org}/user/{username}/trainings/

Version: 3.59.0-ai-plus

Security: PlatformApiKeyAuthentication

## Path parameters

- `org` (string, required)
- `username` (string, required)

## Request fields (application/json)

- `project_name` (string, required)
- `dataset` (string, required)
- `base_model_name` (string, required)
- `provider` (string)
  Enum: `"openai"` (Openai)
- `text_column` (string)
- `learning_rate` (number)
- `batch_size` (integer)
- `num_epochs` (integer)
- `block_size` (integer)
- `warmup_ratio` (number)
- `lora_r` (integer)
- `lora_alpha` (integer)
- `lora_dropout` (number)
- `weight_decay` (number)
- `gradient_accumulation` (integer)
- `use_peft` (boolean)
- `use_fp16` (boolean)
- `use_int4` (boolean)
- `push_to_hub` (boolean)
- `repo_id` (string)
- `preprocess_dataset` (boolean)
- `prompt_column` (string)
- `prompt_prefix` (string)
- `prompt_suffix` (string)
- `response_prefix` (string)

## Response 201 fields (application/json)

- `id` (string, required)
- `project_name` (string, required)
- `dataset` (string, required)
- `base_model_name` (string, required)
- `provider` (string)
  Enum: `"openai"` (Openai)
- `text_column` (string)
- `learning_rate` (number)
- `batch_size` (integer)
- `num_epochs` (integer)
- `block_size` (integer)
- `warmup_ratio` (number)
- `lora_r` (integer)
- `lora_alpha` (integer)
- `lora_dropout` (number)
- `weight_decay` (number)
- `gradient_accumulation` (integer)
- `use_peft` (boolean)
- `use_fp16` (boolean)
- `use_int4` (boolean)
- `push_to_hub` (boolean)
- `repo_id` (string)
- `preprocess_dataset` (boolean)
- `prompt_column` (string)
- `prompt_prefix` (string)
- `prompt_suffix` (string)
- `response_prefix` (string)
- `date_created` (string, required)
- `last_modified` (string, required)
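As a sketch of how a client might construct a request to this endpoint: the base URL, org, username, API-key header scheme, and all field values below are placeholders for illustration, not values mandated by the spec. Only the required fields and a few optional hyperparameters are shown.

```python
import json

# Placeholder values -- substitute your own deployment details.
BASE_URL = "https://example.com"  # assumed host, not part of the spec
ORG = "acme"
USERNAME = "jdoe"

# `project_name`, `dataset`, and `base_model_name` are required by the
# request schema; the remaining fields are optional and the values shown
# here are illustrative, not API defaults.
payload = {
    "project_name": "demo-finetune",
    "dataset": "my-dataset",
    "base_model_name": "my-base-model",
    "provider": "openai",  # the only documented enum value
    "learning_rate": 3e-4,
    "num_epochs": 3,
    "batch_size": 8,
    "use_peft": True,
}

url = f"{BASE_URL}/api/ai-finetuning/v1/org/{ORG}/user/{USERNAME}/trainings/"
headers = {
    # Header name and scheme are assumptions; consult your platform's
    # PlatformApiKeyAuthentication docs for the exact format.
    "Authorization": "Api-Key <your-key>",
    "Content-Type": "application/json",
}

body = json.dumps(payload)
print(url)
# A real call would then be e.g.:
#   requests.post(url, data=body, headers=headers)
# and a successful response returns 201 with the created training,
# including server-set `id`, `date_created`, and `last_modified`.
```

A 201 response echoes the submitted fields alongside the server-generated identifiers and timestamps listed below.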