NAME
OpenAPI::Client::OpenAI::Path::fine_tuning-jobs - Documentation for the /fine_tuning/jobs path.
DESCRIPTION
This document describes the API endpoint at /fine_tuning/jobs.
PATHS
GET /fine_tuning/jobs
List your organization's fine-tuning jobs.
Operation ID
listPaginatedFineTuningJobs
$client->listPaginatedFineTuningJobs( ... );
Parameters
after
(in query) (Optional) - Identifier for the last job from the previous pagination request.
Type: string
limit
(in query) (Optional) - Number of fine-tuning jobs to retrieve.
Type: integer
Default: 20
metadata
(in query) (Optional) - Optional metadata filter. To filter, use the syntax `metadata[k]=v`. Alternatively, set `metadata=null` to indicate no metadata.
Type: object
Responses
Status Code: 200
OK
Content Types:
application/json
Example (See the OpenAI spec for more detail):
{
  "data": [
    {
      "object": "fine_tuning.job",
      "id": "ftjob-abc123",
      "model": "davinci-002",
      "created_at": 1692661014,
      "finished_at": 1692661190,
      "fine_tuned_model": "ft:davinci-002:my-org:custom_suffix:7q8mpxmy",
      "organization_id": "org-123",
      "result_files": [ "file-abc123" ],
      "status": "succeeded",
      "validation_file": null,
      "training_file": "file-abc123",
      "hyperparameters": {
        "n_epochs": 4,
        "batch_size": 1,
        "learning_rate_multiplier": 1.0
      },
      "trained_tokens": 5768,
      "integrations": [],
      "seed": 0,
      "estimated_finish": 0,
      "method": {
        "type": "supervised",
        "supervised": {
          "hyperparameters": {
            "n_epochs": 4,
            "batch_size": 1,
            "learning_rate_multiplier": 1.0
          }
        }
      },
      "metadata": { "key": "value" }
    }
  ]
}
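A minimal sketch of working with this response shape in Perl. The data structure below is a trimmed copy of the documented example; the commented-out client call shows the assumed calling convention from the operation signature above and is not exercised here.

```perl
use strict;
use warnings;

# Trimmed copy of the documented 200 response for GET /fine_tuning/jobs.
# In practice you would obtain this from the client, e.g. (call shape
# assumed from the operation signature documented above):
#   my $tx = $client->listPaginatedFineTuningJobs( { limit => 20 } );
my $response = {
    data => [
        {
            object           => 'fine_tuning.job',
            id               => 'ftjob-abc123',
            model            => 'davinci-002',
            status           => 'succeeded',
            fine_tuned_model => 'ft:davinci-002:my-org:custom_suffix:7q8mpxmy',
            trained_tokens   => 5768,
        },
    ],
};

# Walk the paginated job list and print one line per job.
for my $job ( @{ $response->{data} } ) {
    printf "%s [%s] -> %s\n",
        $job->{id},
        $job->{status},
        $job->{fine_tuned_model} // '(pending)';
}
```

To page through more results, pass the last job's id as the `after` query parameter on the next call.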
POST /fine_tuning/jobs
Creates a fine-tuning job, which begins the process of creating a new model from a given dataset.
The response includes details of the enqueued job, including its status and the name of the fine-tuned model once training is complete.
Operation ID
createFineTuningJob
$client->createFineTuningJob( ... );
Parameters
Request Body
Content Type: application/json
Models
The name of the model to fine-tune. You can select one of the supported models.
babbage-002
davinci-002
gpt-3.5-turbo
gpt-4o-mini
Example:
{
"hyperparameters" : null,
"integrations" : [
{
"wandb" : {
"project" : "my-wandb-project",
"tags" : [
"custom-tag"
]
}
}
],
"method" : {
"dpo" : {
"hyperparameters" : null
},
"supervised" : {
"hyperparameters" : null
}
},
"model" : "gpt-4o-mini",
"seed" : 42,
"training_file" : "file-abc123",
"validation_file" : "file-abc123"
}
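The request body above can be assembled as a plain Perl data structure and serialized with the core JSON::PP module. This sketch mirrors the documented example; the commented-out call is a hypothetical illustration of the content-passing convention, which should be confirmed against the client module's own documentation.

```perl
use strict;
use warnings;
use JSON::PP ();    # core module

# Build the documented example request body for POST /fine_tuning/jobs.
my %body = (
    model           => 'gpt-4o-mini',
    training_file   => 'file-abc123',
    validation_file => 'file-abc123',
    seed            => 42,
    integrations    => [
        {
            wandb => {
                project => 'my-wandb-project',
                tags    => ['custom-tag'],
            },
        },
    ],
);

# Serialize with stable key order for readability.
my $json = JSON::PP->new->canonical->encode( \%body );
print $json, "\n";

# Hypothetical call; the exact way the body is passed follows the
# client's conventions, e.g. something like:
#   my $tx = $client->createFineTuningJob( {}, json => \%body );
```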
Responses
Status Code: 200
OK
Content Types:
application/json
Example (See the OpenAI spec for more detail):
{
  "object": "fine_tuning.job",
  "id": "ftjob-abc123",
  "model": "davinci-002",
  "created_at": 1692661014,
  "finished_at": 1692661190,
  "fine_tuned_model": "ft:davinci-002:my-org:custom_suffix:7q8mpxmy",
  "organization_id": "org-123",
  "result_files": [ "file-abc123" ],
  "status": "succeeded",
  "validation_file": null,
  "training_file": "file-abc123",
  "hyperparameters": {
    "n_epochs": 4,
    "batch_size": 1,
    "learning_rate_multiplier": 1.0
  },
  "trained_tokens": 5768,
  "integrations": [],
  "seed": 0,
  "estimated_finish": 0,
  "method": {
    "type": "supervised",
    "supervised": {
      "hyperparameters": {
        "n_epochs": 4,
        "batch_size": 1,
        "learning_rate_multiplier": 1.0
      }
    }
  },
  "metadata": { "key": "value" }
}
SEE ALSO
OpenAPI::Client::OpenAI
COPYRIGHT AND LICENSE
Copyright (C) 2023-2025 by Nelson Ferraz
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.14.0 or, at your option, any later version of Perl 5 you may have available.