NAME
OpenAPI::Client::OpenAI::Schema - OpenAI API client Schema
DESCRIPTION
This module contains the schema for the OpenAI API client. To aid comprehension, the schema has all references resolved, which makes it considerably larger than the original OpenAPI schema.
SCHEMA
---
components:
schemas:
AssistantObject:
description: Represents an `assistant` that can call the model and use tools.
properties: &1
created_at:
description: The Unix timestamp (in seconds) for when the assistant was
created.
type: integer
description:
description: |
The description of the assistant. The maximum length is 512 characters.
maxLength: 512
nullable: true
type: string
id:
description: The identifier, which can be referenced in API endpoints.
type: string
instructions:
description: |
The system instructions that the assistant uses. The maximum length is 256,000 characters.
maxLength: 256000
nullable: true
type: string
metadata:
description: |
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
nullable: true
type: object
x-oaiTypeLabel: map
model:
description: |
ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them.
type: string
name:
description: |
The name of the assistant. The maximum length is 256 characters.
maxLength: 256
nullable: true
type: string
object:
description: The object type, which is always `assistant`.
enum:
- assistant
type: string
response_format:
description: |
Specifies the format that the model must output. Compatible with [GPT-4o](/docs/models/gpt-4o), [GPT-4 Turbo](/docs/models/gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.
**Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
nullable: true
oneOf: &2
- description: "`auto` is the default value\n"
enum:
- none
- auto
type: string
- description: |
An object describing the expected output of the model. If `json_object` only `function` type `tools` are allowed to be passed to the Run. If `text` the model can return text or any value needed.
properties: &3
type:
default: text
description: Must be one of `text` or `json_object`.
enum:
- text
- json_object
example: json_object
type: string
type: object
x-oaiExpandable: true
temperature:
default: 1
description: |
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
example: 1
maximum: 2
minimum: 0
nullable: true
type: number
tool_resources:
description: |
A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the `code_interpreter` tool requires a list of file IDs, while the `file_search` tool requires a list of vector store IDs.
nullable: true
properties:
code_interpreter:
properties:
file_ids:
default: []
description: |
A list of [file](/docs/api-reference/files) IDs made available to the `code_interpreter` tool. There can be a maximum of 20 files associated with the tool.
items:
type: string
maxItems: 20
type: array
type: object
file_search:
properties:
vector_store_ids:
description: |
The ID of the [vector store](/docs/api-reference/vector-stores/object) attached to this assistant. There can be a maximum of 1 vector store attached to the assistant.
items:
type: string
maxItems: 1
type: array
type: object
type: object
tools:
default: []
description: |
A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types `code_interpreter`, `file_search`, or `function`.
items:
oneOf:
- properties: &4
type:
description: 'The type of tool being defined: `code_interpreter`'
enum:
- code_interpreter
type: string
required: &5
- type
title: Code interpreter tool
type: object
- properties: &6
type:
description: 'The type of tool being defined: `file_search`'
enum:
- file_search
type: string
required: &7
- type
title: FileSearch tool
type: object
- properties: &8
function:
properties: &9
description:
description: A description of what the function does, used by
the model to choose when and how to call the function.
type: string
name:
description: The name of the function to be called. Must be
a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum
length of 64.
type: string
parameters:
additionalProperties: true
description: "The parameters the functions accepts, described
as a JSON Schema object. See the [guide](/docs/guides/function-calling)
for examples, and the [JSON Schema reference](https://json-schema.org/understanding-json-schema/)
for documentation about the format. \n\nOmitting `parameters`
defines a function with an empty parameter list."
type: object
required: &10
- name
type: object
type:
description: 'The type of tool being defined: `function`'
enum:
- function
type: string
required: &11
- type
- function
title: Function tool
type: object
x-oaiExpandable: true
maxItems: 128
type: array
top_p:
default: 1
description: |
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
example: 1
maximum: 1
minimum: 0
nullable: true
type: number
required: &12
- id
- object
- created_at
- name
- description
- model
- instructions
- tools
- metadata
title: Assistant
type: object
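# Illustrative example (not part of the upstream schema): a minimal assistant
# object satisfying the AssistantObject definition above. The ID, timestamps,
# and model name are hypothetical placeholders.
#   {
#     "id": "asst_abc123",
#     "object": "assistant",
#     "created_at": 1699009709,
#     "name": "Math Tutor",
#     "description": null,
#     "model": "gpt-4o",
#     "instructions": "You are a personal math tutor.",
#     "tools": [ { "type": "code_interpreter" } ],
#     "tool_resources": { "code_interpreter": { "file_ids": [] } },
#     "metadata": {},
#     "temperature": 1,
#     "top_p": 1,
#     "response_format": "auto"
#   }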
AssistantStreamEvent:
description: |
Represents an event emitted when streaming a Run.
Each event in a server-sent events stream has an `event` and `data` property:
```
event: thread.created
data: {"id": "thread_123", "object": "thread", ...}
```
We emit events whenever a new object is created, transitions to a new state, or is being
streamed in parts (deltas). For example, we emit `thread.run.created` when a new run
is created, `thread.run.completed` when a run completes, and so on. When an Assistant chooses
to create a message during a run, we emit a `thread.message.created` event, a
`thread.message.in_progress` event, many `thread.message.delta` events, and finally a
`thread.message.completed` event.
We may add additional events over time, so we recommend handling unknown events gracefully
in your code. See the [Assistants API quickstart](/docs/assistants/overview) to learn how to
integrate the Assistants API with streaming.
oneOf:
- oneOf: &13
- description: Occurs when a new [thread](/docs/api-reference/threads/object)
is created.
properties:
data:
description: Represents a thread that contains [messages](/docs/api-reference/messages).
properties: &14
created_at:
description: The Unix timestamp (in seconds) for when the thread
was created.
type: integer
id:
description: The identifier, which can be referenced in API endpoints.
type: string
metadata:
description: |
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
nullable: true
type: object
x-oaiTypeLabel: map
object:
description: The object type, which is always `thread`.
enum:
- thread
type: string
tool_resources:
description: |
A set of resources that are made available to the assistant's tools in this thread. The resources are specific to the type of tool. For example, the `code_interpreter` tool requires a list of file IDs, while the `file_search` tool requires a list of vector store IDs.
nullable: true
properties:
code_interpreter:
properties:
file_ids:
default: []
description: |
A list of [file](/docs/api-reference/files) IDs made available to the `code_interpreter` tool. There can be a maximum of 20 files associated with the tool.
items:
type: string
maxItems: 20
type: array
type: object
file_search:
properties:
vector_store_ids:
description: |
The [vector store](/docs/api-reference/vector-stores/object) attached to this thread. There can be a maximum of 1 vector store attached to the thread.
items:
type: string
maxItems: 1
type: array
type: object
type: object
required: &15
- id
- object
- created_at
- tool_resources
- metadata
title: Thread
type: object
event:
enum:
- thread.created
type: string
required:
- event
- data
type: object
- oneOf: &16
- description: Occurs when a new [run](/docs/api-reference/runs/object) is
created.
properties:
data:
description: Represents an execution run on a [thread](/docs/api-reference/threads).
properties: &17
assistant_id:
description: The ID of the [assistant](/docs/api-reference/assistants)
used for execution of this run.
type: string
cancelled_at:
description: The Unix timestamp (in seconds) for when the run was
cancelled.
nullable: true
type: integer
completed_at:
description: The Unix timestamp (in seconds) for when the run was
completed.
nullable: true
type: integer
created_at:
description: The Unix timestamp (in seconds) for when the run was
created.
type: integer
expires_at:
description: The Unix timestamp (in seconds) for when the run will
expire.
nullable: true
type: integer
failed_at:
description: The Unix timestamp (in seconds) for when the run failed.
nullable: true
type: integer
id:
description: The identifier, which can be referenced in API endpoints.
type: string
incomplete_details:
description: Details on why the run is incomplete. Will be `null`
if the run is not incomplete.
nullable: true
properties:
reason:
description: The reason why the run is incomplete. This will
point to which specific token limit was reached over the course
of the run.
enum:
- max_completion_tokens
- max_prompt_tokens
type: string
type: object
instructions:
description: The instructions that the [assistant](/docs/api-reference/assistants)
used for this run.
type: string
last_error:
description: The last error associated with this run. Will be `null`
if there are no errors.
nullable: true
properties:
code:
description: One of `server_error`, `rate_limit_exceeded`, or
`invalid_prompt`.
enum:
- server_error
- rate_limit_exceeded
- invalid_prompt
type: string
message:
description: A human-readable description of the error.
type: string
required:
- code
- message
type: object
max_completion_tokens:
description: |
The maximum number of completion tokens specified to have been used over the course of the run.
minimum: 256
nullable: true
type: integer
max_prompt_tokens:
description: |
The maximum number of prompt tokens specified to have been used over the course of the run.
minimum: 256
nullable: true
type: integer
metadata:
description: |
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
nullable: true
type: object
x-oaiTypeLabel: map
model:
description: The model that the [assistant](/docs/api-reference/assistants)
used for this run.
type: string
object:
description: The object type, which is always `thread.run`.
enum:
- thread.run
type: string
required_action:
description: Details on the action required to continue the run.
Will be `null` if no action is required.
nullable: true
properties:
submit_tool_outputs:
description: Details on the tool outputs needed for this run
to continue.
properties:
tool_calls:
description: A list of the relevant tool calls.
items:
description: Tool call objects
properties: &18
function:
description: The function definition.
properties:
arguments:
description: The arguments that the model expects
you to pass to the function.
type: string
name:
description: The name of the function.
type: string
required:
- name
- arguments
type: object
id:
description: The ID of the tool call. This ID must
be referenced when you submit the tool outputs in
using the [Submit tool outputs to run](/docs/api-reference/runs/submitToolOutputs)
endpoint.
type: string
type:
description: The type of tool call the output is required
for. For now, this is always `function`.
enum:
- function
type: string
required: &19
- id
- type
- function
type: object
type: array
required:
- tool_calls
type: object
type:
description: For now, this is always `submit_tool_outputs`.
enum:
- submit_tool_outputs
type: string
required:
- type
- submit_tool_outputs
type: object
response_format:
description: |
Specifies the format that the model must output. Compatible with [GPT-4o](/docs/models/gpt-4o), [GPT-4 Turbo](/docs/models/gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.
**Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
nullable: true
oneOf: *2
x-oaiExpandable: true
started_at:
description: The Unix timestamp (in seconds) for when the run was
started.
nullable: true
type: integer
status:
description: The status of the run, which can be either `queued`,
`in_progress`, `requires_action`, `cancelling`, `cancelled`, `failed`,
`completed`, `incomplete`, or `expired`.
enum:
- queued
- in_progress
- requires_action
- cancelling
- cancelled
- failed
- completed
- incomplete
- expired
type: string
temperature:
description: The sampling temperature used for this run. If not
set, defaults to 1.
nullable: true
type: number
thread_id:
description: The ID of the [thread](/docs/api-reference/threads)
that was executed on as a part of this run.
type: string
tool_choice:
description: |
Controls which (if any) tool is called by the model.
`none` means the model will not call any tools and instead generates a message.
`auto` is the default value and means the model can pick between generating a message or calling one or more tools.
`required` means the model must call one or more tools before responding to the user.
Specifying a particular tool like `{"type": "file_search"}` or `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that tool.
nullable: true
oneOf: &20
- description: |
`none` means the model will not call any tools and instead generates a message. `auto` means the model can pick between generating a message or calling one or more tools. `required` means the model must call one or more tools before responding to the user.
enum:
- none
- auto
- required
type: string
- description: Specifies a tool the model should use. Use to force
the model to call a specific tool.
properties: &21
function:
properties:
name:
description: The name of the function to call.
type: string
required:
- name
type: object
type:
description: The type of the tool. If type is `function`,
the function name must be set
enum:
- function
- code_interpreter
- file_search
type: string
required: &22
- type
type: object
x-oaiExpandable: true
tools:
default: []
description: The list of tools that the [assistant](/docs/api-reference/assistants)
used for this run.
items:
oneOf:
- properties: *4
required: *5
title: Code interpreter tool
type: object
- properties: *6
required: *7
title: FileSearch tool
type: object
- properties: *8
required: *11
title: Function tool
type: object
x-oaiExpandable: true
maxItems: 20
type: array
top_p:
description: The nucleus sampling value used for this run. If not
set, defaults to 1.
nullable: true
type: number
truncation_strategy:
description: Controls for how a thread will be truncated prior to
the run. Use this to control the initial context window of the
run.
nullable: true
properties: &23
last_messages:
description: The number of most recent messages from the thread
when constructing the context for the run.
minimum: 1
nullable: true
type: integer
type:
description: The truncation strategy to use for the thread.
The default is `auto`. If set to `last_messages`, the thread
will be truncated to the n most recent messages in the thread.
When set to `auto`, messages in the middle of the thread will
be dropped to fit the context length of the model, `max_prompt_tokens`.
enum:
- auto
- last_messages
type: string
required: &24
- type
title: Thread Truncation Controls
type: object
usage:
description: Usage statistics related to the run. This value will
be `null` if the run is not in a terminal state (i.e. `in_progress`,
`queued`, etc.).
nullable: true
properties: &25
completion_tokens:
description: Number of completion tokens used over the course
of the run.
type: integer
prompt_tokens:
description: Number of prompt tokens used over the course of
the run.
type: integer
total_tokens:
description: Total number of tokens used (prompt + completion).
type: integer
required: &26
- prompt_tokens
- completion_tokens
- total_tokens
type: object
required: &27
- id
- object
- created_at
- thread_id
- assistant_id
- status
- required_action
- last_error
- expires_at
- started_at
- cancelled_at
- failed_at
- completed_at
- model
- instructions
- tools
- metadata
- usage
- incomplete_details
- max_prompt_tokens
- max_completion_tokens
- truncation_strategy
- tool_choice
- response_format
title: A run on a thread
type: object
event:
enum:
- thread.run.created
type: string
required:
- event
- data
type: object
- description: Occurs when a [run](/docs/api-reference/runs/object) moves
to a `queued` status.
properties:
data:
description: Represents an execution run on a [thread](/docs/api-reference/threads).
properties: *17
required: *27
title: A run on a thread
type: object
event:
enum:
- thread.run.queued
type: string
required:
- event
- data
type: object
- description: Occurs when a [run](/docs/api-reference/runs/object) moves
to an `in_progress` status.
properties:
data:
description: Represents an execution run on a [thread](/docs/api-reference/threads).
properties: *17
required: *27
title: A run on a thread
type: object
event:
enum:
- thread.run.in_progress
type: string
required:
- event
- data
type: object
- description: Occurs when a [run](/docs/api-reference/runs/object) moves
to a `requires_action` status.
properties:
data:
description: Represents an execution run on a [thread](/docs/api-reference/threads).
properties: *17
required: *27
title: A run on a thread
type: object
event:
enum:
- thread.run.requires_action
type: string
required:
- event
- data
type: object
- description: Occurs when a [run](/docs/api-reference/runs/object) is completed.
properties:
data:
description: Represents an execution run on a [thread](/docs/api-reference/threads).
properties: *17
required: *27
title: A run on a thread
type: object
event:
enum:
- thread.run.completed
type: string
required:
- event
- data
type: object
- description: Occurs when a [run](/docs/api-reference/runs/object) ends with
status `incomplete`.
properties:
data:
description: Represents an execution run on a [thread](/docs/api-reference/threads).
properties: *17
required: *27
title: A run on a thread
type: object
event:
enum:
- thread.run.incomplete
type: string
required:
- event
- data
type: object
- description: Occurs when a [run](/docs/api-reference/runs/object) fails.
properties:
data:
description: Represents an execution run on a [thread](/docs/api-reference/threads).
properties: *17
required: *27
title: A run on a thread
type: object
event:
enum:
- thread.run.failed
type: string
required:
- event
- data
type: object
- description: Occurs when a [run](/docs/api-reference/runs/object) moves
to a `cancelling` status.
properties:
data:
description: Represents an execution run on a [thread](/docs/api-reference/threads).
properties: *17
required: *27
title: A run on a thread
type: object
event:
enum:
- thread.run.cancelling
type: string
required:
- event
- data
type: object
- description: Occurs when a [run](/docs/api-reference/runs/object) is cancelled.
properties:
data:
description: Represents an execution run on a [thread](/docs/api-reference/threads).
properties: *17
required: *27
title: A run on a thread
type: object
event:
enum:
- thread.run.cancelled
type: string
required:
- event
- data
type: object
- description: Occurs when a [run](/docs/api-reference/runs/object) expires.
properties:
data:
description: Represents an execution run on a [thread](/docs/api-reference/threads).
properties: *17
required: *27
title: A run on a thread
type: object
event:
enum:
- thread.run.expired
type: string
required:
- event
- data
type: object
- oneOf: &28
- description: Occurs when a [run step](/docs/api-reference/runs/step-object)
is created.
properties:
data:
description: |
Represents a step in execution of a run.
properties: &29
assistant_id:
description: The ID of the [assistant](/docs/api-reference/assistants)
associated with the run step.
type: string
cancelled_at:
description: The Unix timestamp (in seconds) for when the run step
was cancelled.
nullable: true
type: integer
completed_at:
description: The Unix timestamp (in seconds) for when the run step
completed.
nullable: true
type: integer
created_at:
description: The Unix timestamp (in seconds) for when the run step
was created.
type: integer
expired_at:
description: The Unix timestamp (in seconds) for when the run step
expired. A step is considered expired if the parent run is expired.
nullable: true
type: integer
failed_at:
description: The Unix timestamp (in seconds) for when the run step
failed.
nullable: true
type: integer
id:
description: The identifier of the run step, which can be referenced
in API endpoints.
type: string
last_error:
description: The last error associated with this run step. Will
be `null` if there are no errors.
nullable: true
properties:
code:
description: One of `server_error` or `rate_limit_exceeded`.
enum:
- server_error
- rate_limit_exceeded
type: string
message:
description: A human-readable description of the error.
type: string
required:
- code
- message
type: object
metadata:
description: |
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
nullable: true
type: object
x-oaiTypeLabel: map
object:
description: The object type, which is always `thread.run.step`.
enum:
- thread.run.step
type: string
run_id:
description: The ID of the [run](/docs/api-reference/runs) that
this run step is a part of.
type: string
status:
description: The status of the run step, which can be either `in_progress`,
`cancelled`, `failed`, `completed`, or `expired`.
enum:
- in_progress
- cancelled
- failed
- completed
- expired
type: string
step_details:
description: The details of the run step.
oneOf:
- description: Details of the message creation by the run step.
properties: &30
message_creation:
properties:
message_id:
description: The ID of the message that was created by
this run step.
type: string
required:
- message_id
type: object
type:
description: Always `message_creation`.
enum:
- message_creation
type: string
required: &31
- type
- message_creation
title: Message creation
type: object
- description: Details of the tool call.
properties: &32
tool_calls:
description: |
An array of tool calls the run step was involved in. These can be associated with one of three types of tools: `code_interpreter`, `file_search`, or `function`.
items:
oneOf:
- description: Details of the Code Interpreter tool call
the run step was involved in.
properties: &33
code_interpreter:
description: The Code Interpreter tool call definition.
properties:
input:
description: The input to the Code Interpreter
tool call.
type: string
outputs:
description: The outputs from the Code Interpreter
tool call. Code Interpreter can output one or
more items, including text (`logs`) or images
(`image`). Each of these are represented by
a different object type.
items:
oneOf:
- description: Text output from the Code Interpreter
tool call as part of a run step.
properties: &34
logs:
description: The text output from the
Code Interpreter tool call.
type: string
type:
description: Always `logs`.
enum:
- logs
type: string
required: &35
- type
- logs
title: Code Interpreter log output
type: object
- properties: &36
image:
properties:
file_id:
description: The [file](/docs/api-reference/files)
ID of the image.
type: string
required:
- file_id
type: object
type:
description: Always `image`.
enum:
- image
type: string
required: &37
- type
- image
title: Code Interpreter image output
type: object
type: object
x-oaiExpandable: true
type: array
required:
- input
- outputs
type: object
id:
description: The ID of the tool call.
type: string
type:
description: The type of tool call. This is always
going to be `code_interpreter` for this type of
tool call.
enum:
- code_interpreter
type: string
required: &38
- id
- type
- code_interpreter
title: Code Interpreter tool call
type: object
- properties: &39
file_search:
description: For now, this is always going to be an
empty object.
type: object
x-oaiTypeLabel: map
id:
description: The ID of the tool call object.
type: string
type:
description: The type of tool call. This is always
going to be `file_search` for this type of tool
call.
enum:
- file_search
type: string
required: &40
- id
- type
- file_search
title: File search tool call
type: object
- properties: &41
function:
description: The definition of the function that was
called.
properties:
arguments:
description: The arguments passed to the function.
type: string
name:
description: The name of the function.
type: string
output:
description: The output of the function. This
will be `null` if the outputs have not been
[submitted](/docs/api-reference/runs/submitToolOutputs)
yet.
nullable: true
type: string
required:
- name
- arguments
- output
type: object
id:
description: The ID of the tool call object.
type: string
type:
description: The type of tool call. This is always
going to be `function` for this type of tool call.
enum:
- function
type: string
required: &42
- id
- type
- function
title: Function tool call
type: object
x-oaiExpandable: true
type: array
type:
description: Always `tool_calls`.
enum:
- tool_calls
type: string
required: &43
- type
- tool_calls
title: Tool calls
type: object
type: object
x-oaiExpandable: true
thread_id:
description: The ID of the [thread](/docs/api-reference/threads)
that was run.
type: string
type:
description: The type of run step, which can be either `message_creation`
or `tool_calls`.
enum:
- message_creation
- tool_calls
type: string
usage:
description: Usage statistics related to the run step. This value
will be `null` while the run step's status is `in_progress`.
nullable: true
properties: &44
completion_tokens:
description: Number of completion tokens used over the course
of the run step.
type: integer
prompt_tokens:
description: Number of prompt tokens used over the course of
the run step.
type: integer
total_tokens:
description: Total number of tokens used (prompt + completion).
type: integer
required: &45
- prompt_tokens
- completion_tokens
- total_tokens
type: object
required: &46
- id
- object
- created_at
- assistant_id
- thread_id
- run_id
- type
- status
- step_details
- last_error
- expired_at
- cancelled_at
- failed_at
- completed_at
- metadata
- usage
title: Run steps
type: object
event:
enum:
- thread.run.step.created
type: string
required:
- event
- data
type: object
- description: Occurs when a [run step](/docs/api-reference/runs/step-object)
moves to an `in_progress` state.
properties:
data:
description: |
Represents a step in execution of a run.
properties: *29
required: *46
title: Run steps
type: object
event:
enum:
- thread.run.step.in_progress
type: string
required:
- event
- data
type: object
- description: Occurs when parts of a [run step](/docs/api-reference/runs/step-object)
are being streamed.
properties:
data:
description: |
Represents a run step delta i.e. any changed fields on a run step during streaming.
properties: &47
delta:
description: The delta containing the fields that have changed on
the run step.
properties:
step_details:
description: The details of the run step.
oneOf:
- description: Details of the message creation by the run step.
properties: &48
message_creation:
properties:
message_id:
description: The ID of the message that was created
by this run step.
type: string
type: object
type:
description: Always `message_creation`.
enum:
- message_creation
type: string
required: &49
- type
title: Message creation
type: object
- description: Details of the tool call.
properties: &50
tool_calls:
description: |
An array of tool calls the run step was involved in. These can be associated with one of three types of tools: `code_interpreter`, `file_search`, or `function`.
items:
oneOf:
- description: Details of the Code Interpreter tool
call the run step was involved in.
properties: &51
code_interpreter:
description: The Code Interpreter tool call definition.
properties:
input:
description: The input to the Code Interpreter
tool call.
type: string
outputs:
description: The outputs from the Code Interpreter
tool call. Code Interpreter can output one
or more items, including text (`logs`) or
images (`image`). Each of these are represented
by a different object type.
items:
oneOf:
- description: Text output from the Code
Interpreter tool call as part of a run
step.
properties: &52
index:
description: The index of the output
in the outputs array.
type: integer
logs:
description: The text output from
the Code Interpreter tool call.
type: string
type:
description: Always `logs`.
enum:
- logs
type: string
required: &53
- index
- type
title: Code interpreter log output
type: object
- properties: &54
image:
properties:
file_id:
description: The [file](/docs/api-reference/files)
ID of the image.
type: string
type: object
index:
description: The index of the output
in the outputs array.
type: integer
type:
description: Always `image`.
enum:
- image
type: string
required: &55
- index
- type
title: Code interpreter image output
type: object
type: object
x-oaiExpandable: true
type: array
type: object
id:
description: The ID of the tool call.
type: string
index:
description: The index of the tool call in the
tool calls array.
type: integer
type:
description: The type of tool call. This is always
going to be `code_interpreter` for this type
of tool call.
enum:
- code_interpreter
type: string
required: &56
- index
- type
title: Code interpreter tool call
type: object
- properties: &57
file_search:
description: For now, this is always going to
be an empty object.
type: object
x-oaiTypeLabel: map
id:
description: The ID of the tool call object.
type: string
index:
description: The index of the tool call in the
tool calls array.
type: integer
type:
description: The type of tool call. This is always
going to be `file_search` for this type of tool
call.
enum:
- file_search
type: string
required: &58
- index
- type
- file_search
title: File search tool call
type: object
- properties: &59
function:
description: The definition of the function that
was called.
properties:
arguments:
description: The arguments passed to the function.
type: string
name:
description: The name of the function.
type: string
output:
description: The output of the function. This
will be `null` if the outputs have not been
[submitted](/docs/api-reference/runs/submitToolOutputs)
yet.
nullable: true
type: string
type: object
id:
description: The ID of the tool call object.
type: string
index:
description: The index of the tool call in the
tool calls array.
type: integer
type:
description: The type of tool call. This is always
going to be `function` for this type of tool
call.
enum:
- function
type: string
required: &60
- index
- type
title: Function tool call
type: object
x-oaiExpandable: true
type: array
type:
description: Always `tool_calls`.
enum:
- tool_calls
type: string
required: &61
- type
title: Tool calls
type: object
type: object
x-oaiExpandable: true
type: object
id:
description: The identifier of the run step, which can be referenced
in API endpoints.
type: string
object:
description: The object type, which is always `thread.run.step.delta`.
enum:
- thread.run.step.delta
type: string
required: &62
- id
- object
- delta
title: Run step delta object
type: object
event:
enum:
- thread.run.step.delta
type: string
required:
- event
- data
type: object
- description: Occurs when a [run step](/docs/api-reference/runs/step-object)
is completed.
properties:
data:
description: |
Represents a step in execution of a run.
properties: *29
required: *46
title: Run steps
type: object
event:
enum:
- thread.run.step.completed
type: string
required:
- event
- data
type: object
- description: Occurs when a [run step](/docs/api-reference/runs/step-object)
fails.
properties:
data:
description: |
Represents a step in execution of a run.
properties: *29
required: *46
title: Run steps
type: object
event:
enum:
- thread.run.step.failed
type: string
required:
- event
- data
type: object
- description: Occurs when a [run step](/docs/api-reference/runs/step-object)
is cancelled.
properties:
data:
description: |
Represents a step in execution of a run.
properties: *29
required: *46
title: Run steps
type: object
event:
enum:
- thread.run.step.cancelled
type: string
required:
- event
- data
type: object
- description: Occurs when a [run step](/docs/api-reference/runs/step-object)
expires.
properties:
data:
description: |
Represents a step in execution of a run.
properties: *29
required: *46
title: Run steps
type: object
event:
enum:
- thread.run.step.expired
type: string
required:
- event
- data
type: object
- oneOf: &63
- description: Occurs when a [message](/docs/api-reference/messages/object)
is created.
properties:
data:
description: Represents a message within a [thread](/docs/api-reference/threads).
properties: &64
assistant_id:
description: If applicable, the ID of the [assistant](/docs/api-reference/assistants)
that authored this message.
nullable: true
type: string
attachments:
description: A list of files attached to the message, and the tools
they were added to.
items:
properties:
file_id:
description: The ID of the file to attach to the message.
type: string
tools:
description: The tools to add this file to.
items:
oneOf:
- properties: *4
required: *5
title: Code interpreter tool
type: object
- properties: *6
required: *7
title: FileSearch tool
type: object
x-oaiExpandable: true
type: array
type: object
nullable: true
type: array
completed_at:
description: The Unix timestamp (in seconds) for when the message
was completed.
nullable: true
type: integer
content:
description: The content of the message in array of text and/or
images.
items:
oneOf:
- description: References an image [File](/docs/api-reference/files)
in the content of a message.
properties: &65
image_file:
properties:
detail:
default: auto
description: Specifies the detail level of the image
if specified by the user. `low` uses fewer tokens,
you can opt in to high resolution using `high`.
enum:
- auto
- low
- high
type: string
file_id:
description: The [File](/docs/api-reference/files) ID
of the image in the message content. Set `purpose="vision"`
when uploading the File if you need to later display
the file content.
type: string
required:
- file_id
type: object
type:
description: Always `image_file`.
enum:
- image_file
type: string
required: &66
- type
- image_file
title: Image file
type: object
- description: References an image URL in the content of a message.
properties: &67
image_url:
properties:
detail:
default: auto
description: Specifies the detail level of the image.
`low` uses fewer tokens, you can opt in to high resolution
using `high`. Default value is `auto`
enum:
- auto
- low
- high
type: string
url:
description: 'The external URL of the image, must be
a supported image type: jpeg, jpg, png, gif, webp.'
format: uri
type: string
required:
- url
type: object
type:
description: The type of the content part.
enum:
- image_url
type: string
required: &68
- type
- image_url
title: Image URL
type: object
- description: The text content that is part of a message.
properties: &69
text:
properties:
annotations:
items:
oneOf:
- description: A citation within the message that
points to a specific quote from a specific File
associated with the assistant or the message.
Generated when the assistant uses the "file_search"
tool to search files.
properties: &70
end_index:
minimum: 0
type: integer
file_citation:
properties:
file_id:
description: The ID of the specific File
the citation is from.
type: string
quote:
description: The specific quote in the file.
type: string
required:
- file_id
- quote
type: object
start_index:
minimum: 0
type: integer
text:
description: The text in the message content
that needs to be replaced.
type: string
type:
description: Always `file_citation`.
enum:
- file_citation
type: string
required: &71
- type
- text
- file_citation
- start_index
- end_index
title: File citation
type: object
- description: A URL for the file that's generated
when the assistant used the `code_interpreter`
tool to generate a file.
properties: &72
end_index:
minimum: 0
type: integer
file_path:
properties:
file_id:
description: The ID of the file that was
generated.
type: string
required:
- file_id
type: object
start_index:
minimum: 0
type: integer
text:
description: The text in the message content
that needs to be replaced.
type: string
type:
description: Always `file_path`.
enum:
- file_path
type: string
required: &73
- type
- text
- file_path
- start_index
- end_index
title: File path
type: object
x-oaiExpandable: true
type: array
value:
description: The data that makes up the text.
type: string
required:
- value
- annotations
type: object
type:
description: Always `text`.
enum:
- text
type: string
required: &74
- type
- text
title: Text
type: object
x-oaiExpandable: true
type: array
created_at:
description: The Unix timestamp (in seconds) for when the message
was created.
type: integer
id:
description: The identifier, which can be referenced in API endpoints.
type: string
incomplete_at:
description: The Unix timestamp (in seconds) for when the message
was marked as incomplete.
nullable: true
type: integer
incomplete_details:
description: On an incomplete message, details about why the message
is incomplete.
nullable: true
properties:
reason:
description: The reason the message is incomplete.
enum:
- content_filter
- max_tokens
- run_cancelled
- run_expired
- run_failed
type: string
required:
- reason
type: object
metadata:
description: |
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
nullable: true
type: object
x-oaiTypeLabel: map
object:
description: The object type, which is always `thread.message`.
enum:
- thread.message
type: string
role:
description: The entity that produced the message. One of `user`
or `assistant`.
enum:
- user
- assistant
type: string
run_id:
description: The ID of the [run](/docs/api-reference/runs) associated
with the creation of this message. Value is `null` when messages
are created manually using the create message or create thread
endpoints.
nullable: true
type: string
status:
description: The status of the message, which can be either `in_progress`,
`incomplete`, or `completed`.
enum:
- in_progress
- incomplete
- completed
type: string
thread_id:
description: The [thread](/docs/api-reference/threads) ID that this
message belongs to.
type: string
required: &75
- id
- object
- created_at
- thread_id
- status
- incomplete_details
- completed_at
- incomplete_at
- role
- content
- assistant_id
- run_id
- attachments
- metadata
title: The message object
type: object
event:
enum:
- thread.message.created
type: string
required:
- event
- data
type: object
- description: Occurs when a [message](/docs/api-reference/messages/object)
moves to an `in_progress` state.
properties:
data:
description: Represents a message within a [thread](/docs/api-reference/threads).
properties: *64
required: *75
title: The message object
type: object
event:
enum:
- thread.message.in_progress
type: string
required:
- event
- data
type: object
- description: Occurs when parts of a [Message](/docs/api-reference/messages/object)
are being streamed.
properties:
data:
description: |
Represents a message delta i.e. any changed fields on a message during streaming.
properties: &76
delta:
description: The delta containing the fields that have changed on
the Message.
properties:
content:
description: The content of the message in array of text and/or
images.
items:
oneOf:
- description: References an image [File](/docs/api-reference/files)
in the content of a message.
properties: &77
image_file:
properties:
detail:
default: auto
description: Specifies the detail level of the image
if specified by the user. `low` uses fewer tokens,
you can opt in to high resolution using `high`.
enum:
- auto
- low
- high
type: string
file_id:
description: The [File](/docs/api-reference/files)
ID of the image in the message content. Set `purpose="vision"`
when uploading the File if you need to later display
the file content.
type: string
type: object
index:
description: The index of the content part in the message.
type: integer
type:
description: Always `image_file`.
enum:
- image_file
type: string
required: &78
- index
- type
title: Image file
type: object
- description: The text content that is part of a message.
properties: &79
index:
description: The index of the content part in the message.
type: integer
text:
properties:
annotations:
items:
oneOf:
- description: A citation within the message that
points to a specific quote from a specific
File associated with the assistant or the
message. Generated when the assistant uses
the "file_search" tool to search files.
properties: &80
end_index:
minimum: 0
type: integer
file_citation:
properties:
file_id:
description: The ID of the specific
File the citation is from.
type: string
quote:
description: The specific quote in the
file.
type: string
type: object
index:
description: The index of the annotation
in the text content part.
type: integer
start_index:
minimum: 0
type: integer
text:
description: The text in the message content
that needs to be replaced.
type: string
type:
description: Always `file_citation`.
enum:
- file_citation
type: string
required: &81
- index
- type
title: File citation
type: object
- description: A URL for the file that's generated
when the assistant used the `code_interpreter`
tool to generate a file.
properties: &82
end_index:
minimum: 0
type: integer
file_path:
properties:
file_id:
description: The ID of the file that
was generated.
type: string
type: object
index:
description: The index of the annotation
in the text content part.
type: integer
start_index:
minimum: 0
type: integer
text:
description: The text in the message content
that needs to be replaced.
type: string
type:
description: Always `file_path`.
enum:
- file_path
type: string
required: &83
- index
- type
title: File path
type: object
x-oaiExpandable: true
type: array
value:
description: The data that makes up the text.
type: string
type: object
type:
description: Always `text`.
enum:
- text
type: string
required: &84
- index
- type
title: Text
type: object
- description: References an image URL in the content of a
message.
properties: &85
image_url:
properties:
detail:
default: auto
description: Specifies the detail level of the image.
`low` uses fewer tokens, you can opt in to high
resolution using `high`.
enum:
- auto
- low
- high
type: string
url:
description: 'The URL of the image, must be a supported
image type: jpeg, jpg, png, gif, webp.'
type: string
type: object
index:
description: The index of the content part in the message.
type: integer
type:
description: Always `image_url`.
enum:
- image_url
type: string
required: &86
- index
- type
title: Image URL
type: object
x-oaiExpandable: true
type: array
role:
description: The entity that produced the message. One of `user`
or `assistant`.
enum:
- user
- assistant
type: string
type: object
id:
description: The identifier of the message, which can be referenced
in API endpoints.
type: string
object:
description: The object type, which is always `thread.message.delta`.
enum:
- thread.message.delta
type: string
required: &87
- id
- object
- delta
title: Message delta object
type: object
event:
enum:
- thread.message.delta
type: string
required:
- event
- data
type: object
- description: Occurs when a [message](/docs/api-reference/messages/object)
is completed.
properties:
data:
description: Represents a message within a [thread](/docs/api-reference/threads).
properties: *64
required: *75
title: The message object
type: object
event:
enum:
- thread.message.completed
type: string
required:
- event
- data
type: object
- description: Occurs when a [message](/docs/api-reference/messages/object)
ends before it is completed.
properties:
data:
description: Represents a message within a [thread](/docs/api-reference/threads).
properties: *64
required: *75
title: The message object
type: object
event:
enum:
- thread.message.incomplete
type: string
required:
- event
- data
type: object
- description: Occurs when an [error](/docs/guides/error-codes/api-errors) occurs.
This can happen due to an internal server error or a timeout.
properties: &88
data:
properties: &89
code:
nullable: true
type: string
message:
nullable: false
type: string
param:
nullable: true
type: string
type:
nullable: false
type: string
required: &90
- type
- message
- param
- code
type: object
event:
enum:
- error
type: string
required: &91
- event
- data
type: object
- description: Occurs when a stream ends.
properties: &92
data:
enum:
- '[DONE]'
type: string
event:
enum:
- done
type: string
required: &93
- event
- data
type: object
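# Illustrative example (not part of the upstream schema): a short server-sent
# event sequence matching AssistantStreamEvent, with hypothetical IDs and
# truncated payloads.
#   event: thread.run.created
#   data: {"id": "run_123", "object": "thread.run", "status": "queued", ...}
#
#   event: thread.message.delta
#   data: {"id": "msg_123", "object": "thread.message.delta", "delta": {"content": [...]}}
#
#   event: done
#   data: [DONE]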
AssistantToolsCode:
properties: *4
required: *5
title: Code interpreter tool
type: object
AssistantToolsFileSearch:
properties: *6
required: *7
title: FileSearch tool
type: object
AssistantToolsFunction:
properties: *8
required: *11
title: Function tool
type: object
AssistantsApiResponseFormat:
description: |
An object describing the expected output of the model. If `json_object` only `function` type `tools` are allowed to be passed to the Run. If `text` the model can return text or any value needed.
properties: *3
type: object
AssistantsApiResponseFormatOption:
description: |
Specifies the format that the model must output. Compatible with [GPT-4o](/docs/models/gpt-4o), [GPT-4 Turbo](/docs/models/gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.
**Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
oneOf: *2
x-oaiExpandable: true
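# Illustrative examples (not part of the upstream schema) of values accepted by
# AssistantsApiResponseFormatOption:
#   response_format: auto
#   response_format:
#     type: json_object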
AssistantsApiToolChoiceOption:
description: |
Controls which (if any) tool is called by the model.
`none` means the model will not call any tools and instead generates a message.
`auto` is the default value and means the model can pick between generating a message or calling one or more tools.
`required` means the model must call one or more tools before responding to the user.
Specifying a particular tool like `{"type": "file_search"}` or `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that tool.
oneOf: *20
x-oaiExpandable: true
AssistantsNamedToolChoice:
description: Specifies a tool the model should use. Use to force the model to
call a specific tool.
properties: *21
required: *22
type: object
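# Illustrative examples (not part of the upstream schema) of tool_choice values
# accepted by AssistantsApiToolChoiceOption / AssistantsNamedToolChoice; the
# function name is a hypothetical placeholder.
#   tool_choice: auto
#   tool_choice:
#     type: file_search
#   tool_choice:
#     type: function
#     function:
#       name: my_function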
Batch:
properties: &94
cancelled_at:
description: The Unix timestamp (in seconds) for when the batch was cancelled.
type: integer
cancelling_at:
description: The Unix timestamp (in seconds) for when the batch started
cancelling.
type: integer
completed_at:
description: The Unix timestamp (in seconds) for when the batch was completed.
type: integer
completion_window:
description: The time frame within which the batch should be processed.
type: string
created_at:
description: The Unix timestamp (in seconds) for when the batch was created.
type: integer
endpoint:
description: The OpenAI API endpoint used by the batch.
type: string
error_file_id:
description: The ID of the file containing the outputs of requests with
errors.
type: string
errors:
properties:
data:
items:
properties:
code:
description: An error code identifying the error type.
type: string
line:
description: The line number of the input file where the error
occurred, if applicable.
nullable: true
type: integer
message:
description: A human-readable message providing more details about
the error.
type: string
param:
description: The name of the parameter that caused the error,
if applicable.
nullable: true
type: string
type: object
type: array
object:
description: The object type, which is always `list`.
type: string
type: object
expired_at:
description: The Unix timestamp (in seconds) for when the batch expired.
type: integer
expires_at:
description: The Unix timestamp (in seconds) for when the batch will expire.
type: integer
failed_at:
description: The Unix timestamp (in seconds) for when the batch failed.
type: integer
finalizing_at:
description: The Unix timestamp (in seconds) for when the batch started
finalizing.
type: integer
id:
type: string
in_progress_at:
description: The Unix timestamp (in seconds) for when the batch started
processing.
type: integer
input_file_id:
description: The ID of the input file for the batch.
type: string
metadata:
description: |
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
nullable: true
type: object
x-oaiTypeLabel: map
object:
description: The object type, which is always `batch`.
enum:
- batch
type: string
output_file_id:
description: The ID of the file containing the outputs of successfully executed
requests.
type: string
request_counts:
description: The request counts for different statuses within the batch.
properties:
completed:
description: Number of requests that have been completed successfully.
type: integer
failed:
description: Number of requests that have failed.
type: integer
total:
description: Total number of requests in the batch.
type: integer
required:
- total
- completed
- failed
type: object
status:
description: The current status of the batch.
enum:
- validating
- failed
- in_progress
- finalizing
- completed
- expired
- cancelling
- cancelled
type: string
required: &95
- id
- object
- endpoint
- input_file_id
- completion_window
- status
- created_at
type: object
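# Illustrative example (not part of the upstream schema): a minimal batch
# object containing only its required fields; the IDs and timestamp are
# hypothetical placeholders.
#   {
#     "id": "batch_abc123",
#     "object": "batch",
#     "endpoint": "/v1/chat/completions",
#     "input_file_id": "file-abc123",
#     "completion_window": "24h",
#     "status": "validating",
#     "created_at": 1711471533
#   }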
BatchRequestInput:
description: The per-line object of the batch input file
properties:
custom_id:
description: A developer-provided per-request id that will be used to match
outputs to inputs. Must be unique for each request in a batch.
type: string
method:
description: The HTTP method to be used for the request. Currently only
`POST` is supported.
enum:
- POST
type: string
url:
description: The OpenAI API relative URL to be used for the request. Currently
`/v1/chat/completions`, `/v1/embeddings`, and `/v1/completions` are supported.
type: string
type: object
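# Illustrative example (not part of the upstream schema): one line of a batch
# input file with the fields defined by BatchRequestInput; the custom_id is a
# hypothetical placeholder.
#   {"custom_id": "request-1", "method": "POST", "url": "/v1/chat/completions"}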
BatchRequestOutput:
description: The per-line object of the batch output and error files
properties:
custom_id:
description: A developer-provided per-request id that will be used to match
outputs to inputs.
type: string
error:
description: For requests that failed with a non-HTTP error, this will contain
more information on the cause of the failure.
nullable: true
properties:
code:
description: A machine-readable error code.
type: string
message:
description: A human-readable error message.
type: string
type: object
id:
type: string
response:
nullable: true
properties:
body:
description: The JSON body of the response
type: object
x-oaiTypeLabel: map
request_id:
description: A unique identifier for the OpenAI API request. Please
include this request ID when contacting support.
type: string
status_code:
description: The HTTP status code of the response
type: integer
type: object
type: object
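# Illustrative example (not part of the upstream schema): one line of a batch
# output file conforming to BatchRequestOutput, with hypothetical IDs and an
# elided response body.
#   {"id": "batch_req_123", "custom_id": "request-1",
#    "response": {"status_code": 200, "request_id": "req_abc", "body": { ... }},
#    "error": null}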
ChatCompletionFunctionCallOption:
description: |
Specifying a particular function via `{"name": "my_function"}` forces the model to call that function.
properties: &96
name:
description: The name of the function to call.
type: string
required: &97
- name
type: object
ChatCompletionFunctions:
deprecated: true
properties: &98
description:
description: A description of what the function does, used by the model
to choose when and how to call the function.
type: string
name:
description: The name of the function to be called. Must be a-z, A-Z, 0-9,
or contain underscores and dashes, with a maximum length of 64.
type: string
parameters:
additionalProperties: true
description: "The parameters the functions accepts, described as a JSON
Schema object. See the [guide](/docs/guides/function-calling) for examples,
and the [JSON Schema reference](https://json-schema.org/understanding-json-schema/)
for documentation about the format. \n\nOmitting `parameters` defines
a function with an empty parameter list."
type: object
required: &99
- name
type: object
ChatCompletionMessageToolCall:
properties: &100
function:
description: The function that the model called.
properties:
arguments:
description: The arguments to call the function with, as generated by
the model in JSON format. Note that the model does not always generate
valid JSON, and may hallucinate parameters not defined by your function
schema. Validate the arguments in your code before calling your function.
type: string
name:
description: The name of the function to call.
type: string
required:
- name
- arguments
type: object
id:
description: The ID of the tool call.
type: string
type:
description: The type of the tool. Currently, only `function` is supported.
enum:
- function
type: string
required: &101
- id
- type
- function
type: object
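# Illustrative example (not part of the upstream schema): a tool call object
# conforming to ChatCompletionMessageToolCall; the ID, function name, and
# arguments are hypothetical placeholders.
#   {
#     "id": "call_abc123",
#     "type": "function",
#     "function": {
#       "name": "get_weather",
#       "arguments": "{\"location\": \"Boston, MA\"}"
#     }
#   }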
ChatCompletionMessageToolCallChunk:
properties: &102
function:
properties:
arguments:
description: The arguments to call the function with, as generated by
the model in JSON format. Note that the model does not always generate
valid JSON, and may hallucinate parameters not defined by your function
schema. Validate the arguments in your code before calling your function.
type: string
name:
description: The name of the function to call.
type: string
type: object
id:
description: The ID of the tool call.
type: string
index:
type: integer
type:
description: The type of the tool. Currently, only `function` is supported.
enum:
- function
type: string
required: &103
- index
type: object
ChatCompletionMessageToolCalls:
description: The tool calls generated by the model, such as function calls.
items: &104
properties: *100
required: *101
type: object
type: array
ChatCompletionNamedToolChoice:
description: Specifies a tool the model should use. Use to force the model to
call a specific function.
properties: &105
function:
properties:
name:
description: The name of the function to call.
type: string
required:
- name
type: object
type:
description: The type of the tool. Currently, only `function` is supported.
enum:
- function
type: string
required: &106
- type
- function
type: object
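# Illustrative example (not part of the upstream schema): a named tool choice
# conforming to ChatCompletionNamedToolChoice; the function name is a
# hypothetical placeholder.
#   {"type": "function", "function": {"name": "get_weather"}}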
ChatCompletionRequestAssistantMessage:
properties: &107
content:
description: |
The contents of the assistant message. Required unless `tool_calls` or `function_call` is specified.
nullable: true
type: string
function_call:
deprecated: true
description: Deprecated and replaced by `tool_calls`. The name and arguments
of a function that should be called, as generated by the model.
nullable: true
properties:
arguments:
description: The arguments to call the function with, as generated by
the model in JSON format. Note that the model does not always generate
valid JSON, and may hallucinate parameters not defined by your function
schema. Validate the arguments in your code before calling your function.
type: string
name:
description: The name of the function to call.
type: string
required:
- arguments
- name
type: object
name:
description: An optional name for the participant. Provides the model information
to differentiate between participants of the same role.
type: string
role:
description: The role of the message's author, in this case `assistant`.
enum:
- assistant
type: string
tool_calls:
description: The tool calls generated by the model, such as function calls.
items: *104
type: array
required: &108
- role
title: Assistant message
type: object
ChatCompletionRequestFunctionMessage:
deprecated: true
properties: &109
content:
description: The contents of the function message.
nullable: true
type: string
name:
description: The name of the function to call.
type: string
role:
description: The role of the message's author, in this case `function`.
enum:
- function
type: string
required: &110
- role
- content
- name
title: Function message
type: object
ChatCompletionRequestMessage:
oneOf: &111
- properties: &112
content:
description: The contents of the system message.
type: string
name:
description: An optional name for the participant. Provides the model
information to differentiate between participants of the same role.
type: string
role:
description: The role of the message's author, in this case `system`.
enum:
- system
type: string
required: &113
- content
- role
title: System message
type: object
- properties: &114
content:
description: |
The contents of the user message.
oneOf:
- description: The text contents of the message.
title: Text content
type: string
- description: An array of content parts with a defined type, each of which
can be of type `text` or `image_url` when passing in images. You can pass
multiple images by adding multiple `image_url` content parts. Image
input is only supported when using the `gpt-4-vision-preview` model.
items:
oneOf: &115
- properties: &116
text:
description: The text content.
type: string
type:
description: The type of the content part.
enum:
- text
type: string
required: &117
- type
- text
title: Text content part
type: object
- properties: &118
image_url:
properties:
detail:
default: auto
description: Specifies the detail level of the image. Learn
more in the [Vision guide](/docs/guides/vision/low-or-high-fidelity-image-understanding).
enum:
- auto
- low
- high
type: string
url:
description: Either a URL of the image or the base64 encoded
image data.
format: uri
type: string
required:
- url
type: object
type:
description: The type of the content part.
enum:
- image_url
type: string
required: &119
- type
- image_url
title: Image content part
type: object
x-oaiExpandable: true
minItems: 1
title: Array of content parts
type: array
x-oaiExpandable: true
name:
description: An optional name for the participant. Provides the model
information to differentiate between participants of the same role.
type: string
role:
description: The role of the message's author, in this case `user`.
enum:
- user
type: string
required: &120
- content
- role
title: User message
type: object
- properties: *107
required: *108
title: Assistant message
type: object
- properties: &121
content:
description: The contents of the tool message.
type: string
role:
description: The role of the message's author, in this case `tool`.
enum:
- tool
type: string
tool_call_id:
description: Tool call that this message is responding to.
type: string
required: &122
- role
- content
- tool_call_id
title: Tool message
type: object
- deprecated: true
properties: *109
required: *110
title: Function message
type: object
x-oaiExpandable: true
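As the user-message variant above shows, `content` may be either a plain string or an array of `text` and `image_url` content parts. A minimal sketch of one multimodal message built as a Perl data structure (the URL is a placeholder):

  # One user message combining text with an image, in the
  # "Array of content parts" form of ChatCompletionRequestMessage.
  my $message = {
      role    => 'user',
      content => [
          { type => 'text',      text => 'What is in this image?' },
          { type => 'image_url',
            image_url => { url => 'https://example.com/cat.png', detail => 'low' } },
      ],
  };
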
ChatCompletionRequestMessageContentPart:
oneOf: *115
x-oaiExpandable: true
ChatCompletionRequestMessageContentPartImage:
properties: *118
required: *119
title: Image content part
type: object
ChatCompletionRequestMessageContentPartText:
properties: *116
required: *117
title: Text content part
type: object
ChatCompletionRequestSystemMessage:
properties: *112
required: *113
title: System message
type: object
ChatCompletionRequestToolMessage:
properties: *121
required: *122
title: Tool message
type: object
ChatCompletionRequestUserMessage:
properties: *114
required: *120
title: User message
type: object
ChatCompletionResponseMessage:
description: A chat completion message generated by the model.
properties: &123
content:
description: The contents of the message.
nullable: true
type: string
function_call:
deprecated: true
description: Deprecated and replaced by `tool_calls`. The name and arguments
of a function that should be called, as generated by the model.
properties:
arguments:
description: The arguments to call the function with, as generated by
the model in JSON format. Note that the model does not always generate
valid JSON, and may hallucinate parameters not defined by your function
schema. Validate the arguments in your code before calling your function.
type: string
name:
description: The name of the function to call.
type: string
required:
- name
- arguments
type: object
role:
description: The role of the author of this message.
enum:
- assistant
type: string
tool_calls:
description: The tool calls generated by the model, such as function calls.
items: *104
type: array
required: &124
- role
- content
type: object
ChatCompletionRole:
description: The role of the author of a message
enum:
- system
- user
- assistant
- tool
- function
type: string
ChatCompletionStreamOptions:
default: ~
description: |
Options for streaming response. Only set this when you set `stream: true`.
nullable: true
properties: &125
include_usage:
description: |
If set, an additional chunk will be streamed before the `data: [DONE]` message. The `usage` field on this chunk shows the token usage statistics for the entire request, and the `choices` field will always be an empty array. All other chunks will also include a `usage` field, but with a null value.
type: boolean
type: object
ChatCompletionStreamResponseDelta:
description: A chat completion delta generated by streamed model responses.
properties: &126
content:
description: The contents of the chunk message.
nullable: true
type: string
function_call:
deprecated: true
description: Deprecated and replaced by `tool_calls`. The name and arguments
of a function that should be called, as generated by the model.
properties:
arguments:
description: The arguments to call the function with, as generated by
the model in JSON format. Note that the model does not always generate
valid JSON, and may hallucinate parameters not defined by your function
schema. Validate the arguments in your code before calling your function.
type: string
name:
description: The name of the function to call.
type: string
type: object
role:
description: The role of the author of this message.
enum:
- system
- user
- assistant
- tool
type: string
tool_calls:
items:
properties: *102
required: *103
type: object
type: array
type: object
ChatCompletionTokenLogprob:
properties: &127
bytes: &128
description: A list of integers representing the UTF-8 bytes representation
of the token. Useful in instances where characters are represented by
multiple tokens and their byte representations must be combined to generate
the correct text representation. Can be `null` if there is no bytes representation
for the token.
items:
type: integer
nullable: true
type: array
logprob: &129
description: The log probability of this token, if it is within the top
20 most likely tokens. Otherwise, the value `-9999.0` is used to signify
that the token is very unlikely.
type: number
token: &130
description: The token.
type: string
top_logprobs:
description: List of the most likely tokens and their log probability, at
this token position. In rare cases, there may be fewer than the number
of requested `top_logprobs` returned.
items:
properties:
bytes: *128
logprob: *129
token: *130
required:
- token
- logprob
- bytes
type: object
type: array
required: &131
- token
- logprob
- bytes
- top_logprobs
type: object
ChatCompletionTool:
properties: &132
function:
properties: *9
required: *10
type: object
type:
description: The type of the tool. Currently, only `function` is supported.
enum:
- function
type: string
required: &133
- type
- function
type: object
ChatCompletionToolChoiceOption:
description: |
Controls which (if any) tool is called by the model.
`none` means the model will not call any tool and instead generates a message.
`auto` means the model can pick between generating a message or calling one or more tools.
`required` means the model must call one or more tools.
Specifying a particular tool via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that tool.
`none` is the default when no tools are present. `auto` is the default if tools are present.
oneOf: &134
- description: |
`none` means the model will not call any tool and instead generates a message. `auto` means the model can pick between generating a message or calling one or more tools. `required` means the model must call one or more tools.
enum:
- none
- auto
- required
type: string
- description: Specifies a tool the model should use. Use to force the model
to call a specific function.
properties: *105
required: *106
type: object
x-oaiExpandable: true
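`tool_choice` therefore accepts either one of the string shortcuts or the object form above to force a specific tool. A minimal sketch of the possible values (the `get_weather` function is hypothetical and would also have to appear in the request's `tools` array):

  # Fragments of a CreateChatCompletionRequest body.
  my %tool_choice = (
      let_model_decide => 'auto',
      never_call_tools => 'none',
      must_call_a_tool => 'required',
      force_weather    => { type => 'function', function => { name => 'get_weather' } },
  );
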
CompletionUsage:
description: Usage statistics for the completion request.
properties: &135
completion_tokens:
description: Number of tokens in the generated completion.
type: integer
prompt_tokens:
description: Number of tokens in the prompt.
type: integer
total_tokens:
description: Total number of tokens used in the request (prompt + completion).
type: integer
required: &136
- prompt_tokens
- completion_tokens
- total_tokens
type: object
CreateAssistantRequest:
additionalProperties: false
properties: &137
description:
description: |
The description of the assistant. The maximum length is 512 characters.
maxLength: 512
nullable: true
type: string
instructions:
description: |
The system instructions that the assistant uses. The maximum length is 256,000 characters.
maxLength: 256000
nullable: true
type: string
metadata:
description: |
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
nullable: true
type: object
x-oaiTypeLabel: map
model:
anyOf:
- type: string
- enum:
- gpt-4o
- gpt-4o-2024-05-13
- gpt-4-turbo
- gpt-4-turbo-2024-04-09
- gpt-4-0125-preview
- gpt-4-turbo-preview
- gpt-4-1106-preview
- gpt-4-vision-preview
- gpt-4
- gpt-4-0314
- gpt-4-0613
- gpt-4-32k
- gpt-4-32k-0314
- gpt-4-32k-0613
- gpt-3.5-turbo
- gpt-3.5-turbo-16k
- gpt-3.5-turbo-0613
- gpt-3.5-turbo-1106
- gpt-3.5-turbo-0125
- gpt-3.5-turbo-16k-0613
type: string
description: |
ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them.
example: gpt-4-turbo
x-oaiTypeLabel: string
name:
description: |
The name of the assistant. The maximum length is 256 characters.
maxLength: 256
nullable: true
type: string
response_format:
description: |
Specifies the format that the model must output. Compatible with [GPT-4o](/docs/models/gpt-4o), [GPT-4 Turbo](/docs/models/gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.
**Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
nullable: true
oneOf: *2
x-oaiExpandable: true
temperature:
default: 1
description: |
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
example: 1
maximum: 2
minimum: 0
nullable: true
type: number
tool_resources:
description: |
A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the `code_interpreter` tool requires a list of file IDs, while the `file_search` tool requires a list of vector store IDs.
nullable: true
properties:
code_interpreter:
properties:
file_ids:
default: []
description: |
A list of [file](/docs/api-reference/files) IDs made available to the `code_interpreter` tool. There can be a maximum of 20 files associated with the tool.
items:
type: string
maxItems: 20
type: array
type: object
file_search:
oneOf:
- required:
- vector_store_ids
- required:
- vector_stores
properties:
vector_store_ids:
description: |
The [vector store](/docs/api-reference/vector-stores/object) attached to this assistant. There can be a maximum of 1 vector store attached to the assistant.
items:
type: string
maxItems: 1
type: array
vector_stores:
description: |
A helper to create a [vector store](/docs/api-reference/vector-stores/object) with file_ids and attach it to this assistant. There can be a maximum of 1 vector store attached to the assistant.
items:
properties:
file_ids:
description: |
A list of [file](/docs/api-reference/files) IDs to add to the vector store. There can be a maximum of 10000 files in a vector store.
items:
type: string
maxItems: 10000
type: array
metadata:
description: |
Set of 16 key-value pairs that can be attached to a vector store. This can be useful for storing additional information about the vector store in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
type: object
x-oaiTypeLabel: map
type: object
maxItems: 1
type: array
type: object
type: object
tools:
default: []
description: |
A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types `code_interpreter`, `file_search`, or `function`.
items:
oneOf:
- properties: *4
required: *5
title: Code interpreter tool
type: object
- properties: *6
required: *7
title: FileSearch tool
type: object
- properties: *8
required: *11
title: Function tool
type: object
x-oaiExpandable: true
maxItems: 128
type: array
top_p:
default: 1
description: |
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
example: 1
maximum: 1
minimum: 0
nullable: true
type: number
required: &138
- model
type: object
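A minimal sketch of posting a CreateAssistantRequest body straight to the REST endpoint with Mojo::UserAgent rather than through this module's generated interface; the vector store ID is a placeholder, and the `OpenAI-Beta: assistants=v2` header is assumed to be required for the v2 Assistants API:

  use Mojo::UserAgent;

  my $ua = Mojo::UserAgent->new;
  my $tx = $ua->post(
      'https://api.openai.com/v1/assistants' => {
          Authorization => "Bearer $ENV{OPENAI_API_KEY}",
          'OpenAI-Beta' => 'assistants=v2',
      } => json => {
          model          => 'gpt-4o',
          name           => 'Docs helper',
          instructions   => 'Answer questions using the attached files.',
          tools          => [ { type => 'file_search' } ],
          tool_resources => { file_search => { vector_store_ids => ['vs_abc123'] } },
      },
  );
  die $tx->result->body unless $tx->result->is_success;
  print $tx->result->json->{id}, "\n";   # id of the new AssistantObject
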
CreateChatCompletionFunctionResponse:
description: Represents a chat completion response returned by the model,
based on the provided input.
properties:
choices:
description: A list of chat completion choices. Can be more than one if
`n` is greater than 1.
items:
properties:
finish_reason:
description: |
The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, or `function_call` if the model called a function.
enum:
- stop
- length
- function_call
- content_filter
type: string
index:
description: The index of the choice in the list of choices.
type: integer
message:
description: A chat completion message generated by the model.
properties: *123
required: *124
type: object
required:
- finish_reason
- index
- message
- logprobs
type: object
type: array
created:
description: The Unix timestamp (in seconds) of when the chat completion
was created.
type: integer
id:
description: A unique identifier for the chat completion.
type: string
model:
description: The model used for the chat completion.
type: string
object:
description: The object type, which is always `chat.completion`.
enum:
- chat.completion
type: string
system_fingerprint:
description: |
This fingerprint represents the backend configuration that the model runs with.
Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism.
type: string
usage:
description: Usage statistics for the completion request.
properties: *135
required: *136
type: object
required:
- choices
- created
- id
- model
- object
type: object
CreateChatCompletionImageResponse:
description: Represents a streamed chunk of a chat completion response returned
by the model, based on the provided input.
type: object
CreateChatCompletionRequest:
properties: &139
frequency_penalty:
default: 0
description: |
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
[See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details)
maximum: 2
minimum: -2
nullable: true
type: number
function_call:
deprecated: true
description: |
Deprecated in favor of `tool_choice`.
Controls which (if any) function is called by the model.
`none` means the model will not call a function and instead generates a message.
`auto` means the model can pick between generating a message or calling a function.
Specifying a particular function via `{"name": "my_function"}` forces the model to call that function.
`none` is the default when no functions are present. `auto` is the default if functions are present.
oneOf:
- description: |
`none` means the model will not call a function and instead generates a message. `auto` means the model can pick between generating a message or calling a function.
enum:
- none
- auto
type: string
- description: |
Specifying a particular function via `{"name": "my_function"}` forces the model to call that function.
properties: *96
required: *97
type: object
x-oaiExpandable: true
functions:
deprecated: true
description: |
Deprecated in favor of `tools`.
A list of functions the model may generate JSON inputs for.
items:
deprecated: true
properties: *98
required: *99
type: object
maxItems: 128
minItems: 1
type: array
logit_bias:
additionalProperties:
type: integer
default: ~
description: |
Modify the likelihood of specified tokens appearing in the completion.
Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
nullable: true
type: object
x-oaiTypeLabel: map
logprobs:
default: false
description: Whether to return log probabilities of the output tokens or
not. If true, returns the log probabilities of each output token returned
in the `content` of `message`.
nullable: true
type: boolean
max_tokens:
description: |
The maximum number of [tokens](/tokenizer) that can be generated in the chat completion.
The total length of input tokens and generated tokens is limited by the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
nullable: true
type: integer
messages:
description: A list of messages comprising the conversation so far. [Example
Python code](https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models).
items:
oneOf: *111
x-oaiExpandable: true
minItems: 1
type: array
model:
anyOf:
- type: string
- enum:
- gpt-4o
- gpt-4o-2024-05-13
- gpt-4-turbo
- gpt-4-turbo-2024-04-09
- gpt-4-0125-preview
- gpt-4-turbo-preview
- gpt-4-1106-preview
- gpt-4-vision-preview
- gpt-4
- gpt-4-0314
- gpt-4-0613
- gpt-4-32k
- gpt-4-32k-0314
- gpt-4-32k-0613
- gpt-3.5-turbo
- gpt-3.5-turbo-16k
- gpt-3.5-turbo-0301
- gpt-3.5-turbo-0613
- gpt-3.5-turbo-1106
- gpt-3.5-turbo-0125
- gpt-3.5-turbo-16k-0613
type: string
description: ID of the model to use. See the [model endpoint compatibility](/docs/models/model-endpoint-compatibility)
table for details on which models work with the Chat API.
example: gpt-4-turbo
x-oaiTypeLabel: string
n:
default: 1
description: How many chat completion choices to generate for each input
message. Note that you will be charged based on the number of generated
tokens across all of the choices. Keep `n` as `1` to minimize costs.
example: 1
maximum: 128
minimum: 1
nullable: true
type: integer
presence_penalty:
default: 0
description: |
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
[See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details)
maximum: 2
minimum: -2
nullable: true
type: number
response_format:
description: |
An object specifying the format that the model must output. Compatible with [GPT-4 Turbo](/docs/models/gpt-4-and-gpt-4-turbo) and all GPT-3.5 Turbo models newer than `gpt-3.5-turbo-1106`.
Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.
**Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
properties:
type:
default: text
description: Must be one of `text` or `json_object`.
enum:
- text
- json_object
example: json_object
type: string
type: object
seed:
description: |
This feature is in Beta.
If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result.
Determinism is not guaranteed, and you should refer to the `system_fingerprint` response parameter to monitor changes in the backend.
maximum: 9223372036854775807
minimum: -9223372036854775808
nullable: true
type: integer
stop:
default: ~
description: |
Up to 4 sequences where the API will stop generating further tokens.
oneOf:
- nullable: true
type: string
- items:
type: string
maxItems: 4
minItems: 1
type: array
stream:
default: false
description: |
If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).
nullable: true
type: boolean
stream_options:
default: ~
description: |
Options for streaming response. Only set this when you set `stream: true`.
nullable: true
properties: *125
type: object
temperature:
default: 1
description: |
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or `top_p` but not both.
example: 1
maximum: 2
minimum: 0
nullable: true
type: number
tool_choice:
description: |
Controls which (if any) tool is called by the model.
`none` means the model will not call any tool and instead generates a message.
`auto` means the model can pick between generating a message or calling one or more tools.
`required` means the model must call one or more tools.
Specifying a particular tool via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that tool.
`none` is the default when no tools are present. `auto` is the default if tools are present.
oneOf: *134
x-oaiExpandable: true
tools:
description: |
A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.
items:
properties: *132
required: *133
type: object
type: array
top_logprobs:
description: An integer between 0 and 20 specifying the number of most likely
tokens to return at each token position, each with an associated log probability.
`logprobs` must be set to `true` if this parameter is used.
maximum: 20
minimum: 0
nullable: true
type: integer
top_p:
default: 1
description: |
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or `temperature` but not both.
example: 1
maximum: 1
minimum: 0
nullable: true
type: number
user: &140
description: |
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).
example: user-1234
type: string
required: &141
- model
- messages
type: object
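A minimal sketch of a CreateChatCompletionRequest that uses JSON mode, sent directly to the public endpoint rather than through this module's generated interface. Note the system message explicitly asking for JSON, as the `response_format` description above requires:

  use Mojo::UserAgent;
  use Mojo::JSON qw(decode_json);

  my $ua = Mojo::UserAgent->new;
  my $tx = $ua->post(
      'https://api.openai.com/v1/chat/completions'
          => { Authorization => "Bearer $ENV{OPENAI_API_KEY}" }
          => json => {
          model           => 'gpt-4o',
          response_format => { type => 'json_object' },
          seed            => 42,
          messages        => [
              { role => 'system', content => 'Reply with a JSON object only.' },
              { role => 'user',   content => 'List three primary colors.' },
          ],
      },
  );
  my $reply = $tx->result->json->{choices}[0]{message}{content};
  my $data  = decode_json($reply);   # JSON mode guarantees this is valid JSON
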
CreateChatCompletionResponse:
description: Represents a chat completion response returned by the model,
based on the provided input.
properties: &142
choices:
description: A list of chat completion choices. Can be more than one if
`n` is greater than 1.
items:
properties:
finish_reason:
description: |
The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence,
`length` if the maximum number of tokens specified in the request was reached,
`content_filter` if content was omitted due to a flag from our content filters,
`tool_calls` if the model called a tool, or `function_call` (deprecated) if the model called a function.
enum:
- stop
- length
- tool_calls
- content_filter
- function_call
type: string
index:
description: The index of the choice in the list of choices.
type: integer
logprobs: &143
description: Log probability information for the choice.
nullable: true
properties:
content:
description: A list of message content tokens with log probability
information.
items:
properties: *127
required: *131
type: object
nullable: true
type: array
required:
- content
type: object
message:
description: A chat completion message generated by the model.
properties: *123
required: *124
type: object
required:
- finish_reason
- index
- message
- logprobs
type: object
type: array
created:
description: The Unix timestamp (in seconds) of when the chat completion
was created.
type: integer
id:
description: A unique identifier for the chat completion.
type: string
model:
description: The model used for the chat completion.
type: string
object:
description: The object type, which is always `chat.completion`.
enum:
- chat.completion
type: string
system_fingerprint:
description: |
This fingerprint represents the backend configuration that the model runs with.
Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism.
type: string
usage:
description: Usage statistics for the completion request.
properties: *135
required: *136
type: object
required: &144
- choices
- created
- id
- model
- object
type: object
CreateChatCompletionStreamResponse:
description: Represents a streamed chunk of a chat completion response returned
by the model, based on the provided input.
properties:
choices:
description: |
A list of chat completion choices. Can contain more than one element if `n` is greater than 1. Can also be empty for the
last chunk if you set `stream_options: {"include_usage": true}`.
items:
properties:
delta:
description: A chat completion delta generated by streamed model responses.
properties: *126
type: object
finish_reason:
description: |
The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence,
`length` if the maximum number of tokens specified in the request was reached,
`content_filter` if content was omitted due to a flag from our content filters,
`tool_calls` if the model called a tool, or `function_call` (deprecated) if the model called a function.
enum:
- stop
- length
- tool_calls
- content_filter
- function_call
nullable: true
type: string
index:
description: The index of the choice in the list of choices.
type: integer
logprobs: *143
required:
- delta
- finish_reason
- index
type: object
type: array
created:
description: The Unix timestamp (in seconds) of when the chat completion
was created. Each chunk has the same timestamp.
type: integer
id:
description: A unique identifier for the chat completion. Each chunk has
the same ID.
type: string
model:
description: The model used to generate the completion.
type: string
object:
description: The object type, which is always `chat.completion.chunk`.
enum:
- chat.completion.chunk
type: string
system_fingerprint:
description: |
This fingerprint represents the backend configuration that the model runs with.
Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism.
type: string
usage:
description: |
An optional field that will only be present when you set `stream_options: {"include_usage": true}` in your request.
When present, it contains a null value except for the last chunk which contains the token usage statistics for the entire request.
properties:
completion_tokens:
description: Number of tokens in the generated completion.
type: integer
prompt_tokens:
description: Number of tokens in the prompt.
type: integer
total_tokens:
description: Total number of tokens used in the request (prompt + completion).
type: integer
required:
- prompt_tokens
- completion_tokens
- total_tokens
type: object
required:
- choices
- created
- id
- model
- object
type: object
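A simplified, non-incremental sketch of consuming the stream described above: it lets Mojo::UserAgent buffer the entire SSE body, then walks the `data:` lines and picks the usage statistics off the final chunk requested via `stream_options`. A real client would process chunks as they arrive instead of buffering:

  use Mojo::UserAgent;
  use Mojo::JSON qw(decode_json);

  my $ua = Mojo::UserAgent->new;
  my $tx = $ua->post(
      'https://api.openai.com/v1/chat/completions'
          => { Authorization => "Bearer $ENV{OPENAI_API_KEY}" }
          => json => {
          model          => 'gpt-4o',
          stream         => \1,
          stream_options => { include_usage => \1 },
          messages       => [ { role => 'user', content => 'Say hello.' } ],
      },
  );
  for my $line (split /\n/, $tx->result->body) {
      next unless $line =~ s/^data: //;
      last if $line eq '[DONE]';
      my $chunk = decode_json($line);
      print $chunk->{choices}[0]{delta}{content} // '';
      print "\ntotal tokens: $chunk->{usage}{total_tokens}\n" if $chunk->{usage};
  }
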
CreateCompletionRequest:
properties: &145
best_of:
default: 1
description: |
Generates `best_of` completions server-side and returns the "best" (the one with the highest log probability per token). Results cannot be streamed.
When used with `n`, `best_of` controls the number of candidate completions and `n` specifies how many to return – `best_of` must be greater than `n`.
**Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for `max_tokens` and `stop`.
maximum: 20
minimum: 0
nullable: true
type: integer
echo:
default: false
description: |
Echo back the prompt in addition to the completion
nullable: true
type: boolean
frequency_penalty:
default: 0
description: |
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
[See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details)
maximum: 2
minimum: -2
nullable: true
type: number
logit_bias:
additionalProperties:
type: integer
default: ~
description: |
Modify the likelihood of specified tokens appearing in the completion.
Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this [tokenizer tool](/tokenizer?view=bpe) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
As an example, you can pass `{"50256": -100}` to prevent the <|endoftext|> token from being generated.
nullable: true
type: object
x-oaiTypeLabel: map
logprobs:
default: ~
description: |
Include the log probabilities on the `logprobs` most likely output tokens, as well as the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response.
The maximum value for `logprobs` is 5.
maximum: 5
minimum: 0
nullable: true
type: integer
max_tokens:
default: 16
description: |
The maximum number of [tokens](/tokenizer) that can be generated in the completion.
The token count of your prompt plus `max_tokens` cannot exceed the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
example: 16
minimum: 0
nullable: true
type: integer
model:
anyOf:
- type: string
- enum:
- gpt-3.5-turbo-instruct
- davinci-002
- babbage-002
type: string
description: |
ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them.
x-oaiTypeLabel: string
n:
default: 1
description: |
How many completions to generate for each prompt.
**Note:** Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for `max_tokens` and `stop`.
example: 1
maximum: 128
minimum: 1
nullable: true
type: integer
presence_penalty:
default: 0
description: |
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
[See more information about frequency and presence penalties.](/docs/guides/text-generation/parameter-details)
maximum: 2
minimum: -2
nullable: true
type: number
prompt:
default: <|endoftext|>
description: |
The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays.
Note that <|endoftext|> is the document separator that the model sees during training, so if a prompt is not specified the model will generate as if from the beginning of a new document.
nullable: true
oneOf:
- default: ''
example: This is a test.
type: string
- items:
default: ''
example: This is a test.
type: string
type: array
- example: '[1212, 318, 257, 1332, 13]'
items:
type: integer
minItems: 1
type: array
- example: '[[1212, 318, 257, 1332, 13]]'
items:
items:
type: integer
minItems: 1
type: array
minItems: 1
type: array
seed:
description: |
If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result.
Determinism is not guaranteed, and you should refer to the `system_fingerprint` response parameter to monitor changes in the backend.
maximum: 9223372036854775807
minimum: -9223372036854775808
nullable: true
type: integer
stop:
default: ~
description: |
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
nullable: true
oneOf:
- default: <|endoftext|>
example: "\n"
nullable: true
type: string
- items:
example: '["\n"]'
type: string
maxItems: 4
minItems: 1
type: array
stream:
default: false
description: |
Whether to stream back partial progress. If set, tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).
nullable: true
type: boolean
stream_options:
default: ~
description: |
Options for streaming response. Only set this when you set `stream: true`.
nullable: true
properties: *125
type: object
suffix:
default: ~
description: |
The suffix that comes after a completion of inserted text.
This parameter is only supported for `gpt-3.5-turbo-instruct`.
example: test.
nullable: true
type: string
temperature:
default: 1
description: |
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or `top_p` but not both.
example: 1
maximum: 2
minimum: 0
nullable: true
type: number
top_p:
default: 1
description: |
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or `temperature` but not both.
example: 1
maximum: 1
minimum: 0
nullable: true
type: number
user: *140
required: &146
- model
- prompt
type: object
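`logit_bias` keys are token IDs, not strings. A minimal sketch of the legacy completions endpoint that bans the <|endoftext|> token exactly as the description above suggests:

  use Mojo::UserAgent;

  my $ua = Mojo::UserAgent->new;
  my $tx = $ua->post(
      'https://api.openai.com/v1/completions'
          => { Authorization => "Bearer $ENV{OPENAI_API_KEY}" }
          => json => {
          model      => 'gpt-3.5-turbo-instruct',
          prompt     => 'Say this is a test.',
          max_tokens => 16,
          logit_bias => { '50256' => -100 },   # never emit <|endoftext|>
      },
  );
  print $tx->result->json->{choices}[0]{text}, "\n";
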
CreateCompletionResponse:
description: |
Represents a completion response from the API. Note: both the streamed and non-streamed response objects share the same shape (unlike the chat endpoint).
properties: &147
choices:
description: The list of completion choices the model generated for the
input prompt.
items:
properties:
finish_reason:
description: |
The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence,
`length` if the maximum number of tokens specified in the request was reached,
or `content_filter` if content was omitted due to a flag from our content filters.
enum:
- stop
- length
- content_filter
type: string
index:
type: integer
logprobs:
nullable: true
properties:
text_offset:
items:
type: integer
type: array
token_logprobs:
items:
type: number
type: array
tokens:
items:
type: string
type: array
top_logprobs:
items:
additionalProperties:
type: number
type: object
type: array
type: object
text:
type: string
required:
- finish_reason
- index
- logprobs
- text
type: object
type: array
created:
description: The Unix timestamp (in seconds) of when the completion was
created.
type: integer
id:
description: A unique identifier for the completion.
type: string
model:
description: The model used for completion.
type: string
object:
description: The object type, which is always "text_completion"
enum:
- text_completion
type: string
system_fingerprint:
description: |
This fingerprint represents the backend configuration that the model runs with.
Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism.
type: string
usage:
description: Usage statistics for the completion request.
properties: *135
required: *136
type: object
required: &148
- id
- object
- created
- model
- choices
type: object
CreateEmbeddingRequest:
additionalProperties: false
properties: &149
dimensions:
description: |
The number of dimensions the resulting output embeddings should have. Only supported in `text-embedding-3` and later models.
minimum: 1
type: integer
encoding_format:
default: float
description: The format to return the embeddings in. Can be either `float`
or [`base64`](https://pypi.org/project/pybase64/).
enum:
- float
- base64
example: float
type: string
input:
description: |
Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. The input must not exceed the max input tokens for the model (8192 tokens for `text-embedding-ada-002`), cannot be an empty string, and any array must be 2048 dimensions or less. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
example: The quick brown fox jumped over the lazy dog
oneOf:
- default: ''
description: The string that will be turned into an embedding.
example: This is a test.
title: string
type: string
- description: The array of strings that will be turned into an embedding.
items:
default: ''
example: '[''This is a test.'']'
type: string
maxItems: 2048
minItems: 1
title: array
type: array
- description: The array of integers that will be turned into an embedding.
example: '[1212, 318, 257, 1332, 13]'
items:
type: integer
maxItems: 2048
minItems: 1
title: array
type: array
- description: The array of arrays containing integers that will be turned
into an embedding.
example: '[[1212, 318, 257, 1332, 13]]'
items:
items:
type: integer
minItems: 1
type: array
maxItems: 2048
minItems: 1
title: array
type: array
x-oaiExpandable: true
model:
anyOf:
- type: string
- enum:
- text-embedding-ada-002
- text-embedding-3-small
- text-embedding-3-large
type: string
description: |
ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them.
example: text-embedding-3-small
x-oaiTypeLabel: string
user: *140
required: &150
- model
- input
type: object
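A minimal sketch of a CreateEmbeddingRequest sent straight to the REST endpoint; `input` could equally be a single string, and the response shape is CreateEmbeddingResponse below:

  use Mojo::UserAgent;

  my $ua = Mojo::UserAgent->new;
  my $tx = $ua->post(
      'https://api.openai.com/v1/embeddings'
          => { Authorization => "Bearer $ENV{OPENAI_API_KEY}" }
          => json => {
          model           => 'text-embedding-3-small',
          input           => [ 'first document', 'second document' ],
          encoding_format => 'float',
      },
  );
  my $data = $tx->result->json->{data};
  printf "%d vectors, %d dimensions each\n",
      scalar @$data, scalar @{ $data->[0]{embedding} };
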
CreateEmbeddingResponse:
properties: &151
data:
description: The list of embeddings generated by the model.
items:
description: |
Represents an embedding vector returned by the embedding endpoint.
properties: &152
embedding:
description: |
The embedding vector, which is a list of floats. The length of vector depends on the model as listed in the [embedding guide](/docs/guides/embeddings).
items:
type: number
type: array
index:
description: The index of the embedding in the list of embeddings.
type: integer
object:
description: The object type, which is always "embedding".
enum:
- embedding
type: string
required: &153
- index
- object
- embedding
type: object
type: array
model:
description: The name of the model used to generate the embedding.
type: string
object:
description: The object type, which is always "list".
enum:
- list
type: string
usage:
description: The usage information for the request.
properties:
prompt_tokens:
description: The number of tokens used by the prompt.
type: integer
total_tokens:
description: The total number of tokens used by the request.
type: integer
required:
- prompt_tokens
- total_tokens
type: object
required: &154
- object
- model
- data
- usage
type: object
CreateFileRequest:
additionalProperties: false
properties: &155
file:
description: |
The File object (not file name) to be uploaded.
format: binary
type: string
purpose:
description: |
The intended purpose of the uploaded file.
Use "assistants" for [Assistants](/docs/api-reference/assistants) and [Message](/docs/api-reference/messages) files, "vision" for Assistants image file inputs, "batch" for [Batch API](/docs/guides/batch), and "fine-tune" for [Fine-tuning](/docs/api-reference/fine-tuning).
enum:
- assistants
- batch
- fine-tune
- vision
type: string
required: &156
- file
- purpose
type: object
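File uploads are multipart/form-data rather than JSON, so the `file` part carries the raw bytes. A minimal sketch using Mojo::UserAgent's form generator (the file path is a placeholder):

  use Mojo::UserAgent;

  my $ua = Mojo::UserAgent->new;
  my $tx = $ua->post(
      'https://api.openai.com/v1/files'
          => { Authorization => "Bearer $ENV{OPENAI_API_KEY}" }
          => form => {
          purpose => 'fine-tune',
          file    => { file => 'training_data.jsonl' },   # read from disk
      },
  );
  print 'Uploaded as ', $tx->result->json->{id}, "\n";
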
CreateFineTuningJobRequest:
properties: &157
hyperparameters:
description: The hyperparameters used for the fine-tuning job.
properties:
batch_size:
default: auto
description: |
Number of examples in each batch. A larger batch size means that model parameters
are updated less frequently, but with lower variance.
oneOf:
- enum:
- auto
type: string
- maximum: 256
minimum: 1
type: integer
learning_rate_multiplier:
default: auto
description: |
Scaling factor for the learning rate. A smaller learning rate may be useful to avoid
overfitting.
oneOf:
- enum:
- auto
type: string
- exclusiveMinimum: true
minimum: 0
type: number
n_epochs:
default: auto
description: |
The number of epochs to train the model for. An epoch refers to one full cycle
through the training dataset.
oneOf:
- enum:
- auto
type: string
- maximum: 50
minimum: 1
type: integer
type: object
integrations:
description: A list of integrations to enable for your fine-tuning job.
items:
properties:
type:
description: |
The type of integration to enable. Currently, only "wandb" (Weights and Biases) is supported.
oneOf:
- enum:
- wandb
type: string
wandb:
description: |
The settings for your integration with Weights and Biases. This payload specifies the project that
metrics will be sent to. Optionally, you can set an explicit display name for your run, add tags
to your run, and set a default entity (team, username, etc.) to be associated with your run.
properties:
entity:
description: |
The entity to use for the run. This allows you to set the team or username of the WandB user that you would
like associated with the run. If not set, the default entity for the registered WandB API key is used.
nullable: true
type: string
name:
description: |
A display name to set for the run. If not set, we will use the Job ID as the name.
nullable: true
type: string
project:
description: |
The name of the project that the new run will be created under.
example: my-wandb-project
type: string
tags:
description: |
A list of tags to be attached to the newly created run. These tags are passed through directly to WandB. Some
default tags are generated by OpenAI: "openai/finetune", "openai/{base-model}", "openai/{ftjob-abcdef}".
items:
example: custom-tag
type: string
type: array
required:
- project
type: object
required:
- type
- wandb
type: object
nullable: true
type: array
model:
anyOf:
- type: string
- enum:
- babbage-002
- davinci-002
- gpt-3.5-turbo
type: string
description: |
The name of the model to fine-tune. You can select one of the
[supported models](/docs/guides/fine-tuning/what-models-can-be-fine-tuned).
example: gpt-3.5-turbo
x-oaiTypeLabel: string
seed:
description: |
The seed controls the reproducibility of the job. Passing in the same seed and job parameters should produce the same results, but may differ in rare cases.
If a seed is not specified, one will be generated for you.
example: 42
maximum: 2147483647
minimum: 0
nullable: true
type: integer
suffix:
default: ~
description: |
A string of up to 18 characters that will be added to your fine-tuned model name.
For example, a `suffix` of "custom-model-name" would produce a model name like `ft:gpt-3.5-turbo:openai:custom-model-name:7p4lURel`.
maxLength: 40
minLength: 1
nullable: true
type: string
training_file:
description: |
The ID of an uploaded file that contains training data.
See [upload file](/docs/api-reference/files/create) for how to upload a file.
Your dataset must be formatted as a JSONL file. Additionally, you must upload your file with the purpose `fine-tune`.
See the [fine-tuning guide](/docs/guides/fine-tuning) for more details.
example: file-abc123
type: string
validation_file:
description: |
The ID of an uploaded file that contains validation data.
If you provide this file, the data is used to generate validation
metrics periodically during fine-tuning. These metrics can be viewed in
the fine-tuning results file.
The same data should not be present in both train and validation files.
Your dataset must be formatted as a JSONL file. You must upload your file with the purpose `fine-tune`.
See the [fine-tuning guide](/docs/guides/fine-tuning) for more details.
example: file-abc123
nullable: true
type: string
required: &158
- model
- training_file
type: object
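A minimal sketch of starting a fine-tuning job from a previously uploaded file ID; each hyperparameter accepts either the string `auto` or an explicit value, as described above (the file ID is a placeholder):

  use Mojo::UserAgent;

  my $ua = Mojo::UserAgent->new;
  my $tx = $ua->post(
      'https://api.openai.com/v1/fine_tuning/jobs'
          => { Authorization => "Bearer $ENV{OPENAI_API_KEY}" }
          => json => {
          model           => 'gpt-3.5-turbo',
          training_file   => 'file-abc123',   # id returned by the files endpoint
          suffix          => 'custom-model-name',
          hyperparameters => { n_epochs => 3, batch_size => 'auto' },
      },
  );
  print $tx->result->json->{id}, "\n";
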
CreateImageEditRequest:
properties: &159
image:
description: The image to edit. Must be a valid PNG file, less than 4MB,
and square. If mask is not provided, image must have transparency, which
will be used as the mask.
format: binary
type: string
mask:
description: An additional image whose fully transparent areas (e.g. where
alpha is zero) indicate where `image` should be edited. Must be a valid
PNG file, less than 4MB, and have the same dimensions as `image`.
format: binary
type: string
model:
anyOf:
- type: string
- enum:
- dall-e-2
type: string
default: dall-e-2
description: The model to use for image generation. Only `dall-e-2` is supported
at this time.
example: dall-e-2
nullable: true
x-oaiTypeLabel: string
n:
default: 1
description: The number of images to generate. Must be between 1 and 10.
example: 1
maximum: 10
minimum: 1
nullable: true
type: integer
prompt:
description: A text description of the desired image(s). The maximum length
is 1000 characters.
example: A cute baby sea otter wearing a beret
type: string
response_format: &160
default: url
description: The format in which the generated images are returned. Must
be one of `url` or `b64_json`. URLs are only valid for 60 minutes after
the image has been generated.
enum:
- url
- b64_json
example: url
nullable: true
type: string
size: &161
default: 1024x1024
description: The size of the generated images. Must be one of `256x256`,
`512x512`, or `1024x1024`.
enum:
- 256x256
- 512x512
- 1024x1024
example: 1024x1024
nullable: true
type: string
user: *140
required: &162
- prompt
- image
type: object
CreateImageRequest:
properties: &163
model:
anyOf:
- type: string
- enum:
- dall-e-2
- dall-e-3
type: string
default: dall-e-2
description: The model to use for image generation.
example: dall-e-3
nullable: true
x-oaiTypeLabel: string
n: &164
default: 1
description: The number of images to generate. Must be between 1 and 10.
For `dall-e-3`, only `n=1` is supported.
example: 1
maximum: 10
minimum: 1
nullable: true
type: integer
prompt:
description: A text description of the desired image(s). The maximum length
is 1000 characters for `dall-e-2` and 4000 characters for `dall-e-3`.
example: A cute baby sea otter
type: string
quality:
default: standard
description: The quality of the image that will be generated. `hd` creates
images with finer details and greater consistency across the image. This
param is only supported for `dall-e-3`.
enum:
- standard
- hd
example: standard
type: string
response_format: *160
size:
default: 1024x1024
description: The size of the generated images. Must be one of `256x256`,
`512x512`, or `1024x1024` for `dall-e-2`. Must be one of `1024x1024`,
`1792x1024`, or `1024x1792` for `dall-e-3` models.
enum:
- 256x256
- 512x512
- 1024x1024
- 1792x1024
- 1024x1792
example: 1024x1024
nullable: true
type: string
style:
default: vivid
description: The style of the generated images. Must be one of `vivid` or
`natural`. Vivid causes the model to lean towards generating hyper-real
and dramatic images. Natural causes the model to produce more natural,
less hyper-real looking images. This param is only supported for `dall-e-3`.
enum:
- vivid
- natural
example: vivid
nullable: true
type: string
user: *140
required: &165
- prompt
type: object
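A minimal sketch of a CreateImageRequest that asks for base64 output and writes the decoded image to disk; the output file name is arbitrary:

  use Mojo::UserAgent;
  use Mojo::File qw(path);
  use MIME::Base64 qw(decode_base64);

  my $ua = Mojo::UserAgent->new;
  my $tx = $ua->post(
      'https://api.openai.com/v1/images/generations'
          => { Authorization => "Bearer $ENV{OPENAI_API_KEY}" }
          => json => {
          model           => 'dall-e-3',
          prompt          => 'A cute baby sea otter wearing a beret',
          size            => '1024x1024',
          response_format => 'b64_json',
      },
  );
  path('otter.png')->spew(decode_base64($tx->result->json->{data}[0]{b64_json}));
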
CreateImageVariationRequest:
properties: &166
image:
description: The image to use as the basis for the variation(s). Must be
a valid PNG file, less than 4MB, and square.
format: binary
type: string
model:
anyOf:
- type: string
- enum:
- dall-e-2
type: string
default: dall-e-2
description: The model to use for image generation. Only `dall-e-2` is supported
at this time.
example: dall-e-2
nullable: true
x-oaiTypeLabel: string
n: *164
response_format: *160
size: *161
user: *140
required: &167
- image
type: object
CreateMessageRequest:
additionalProperties: false
properties: &168
attachments:
description: A list of files attached to the message, and the tools they
should be added to.
items:
properties:
file_id:
description: The ID of the file to attach to the message.
type: string
tools:
description: The tools to add this file to.
items:
oneOf:
- properties: *4
required: *5
title: Code interpreter tool
type: object
- properties: *6
required: *7
title: FileSearch tool
type: object
x-oaiExpandable: true
type: array
type: object
nullable: true
required:
- file_id
- tools
type: array
content:
oneOf:
- description: The text contents of the message.
title: Text content
type: string
- description: An array of content parts with a defined type, each of which can
be of type `text`, or images can be passed with `image_url` or `image_file`.
Image types are only supported on [Vision-compatible models](/docs/models/overview).
items:
oneOf:
- description: References an image [File](/docs/api-reference/files)
in the content of a message.
properties: *65
required: *66
title: Image file
type: object
- description: References an image URL in the content of a message.
properties: *67
required: *68
title: Image URL
type: object
- description: The text content that is part of a message.
properties: &169
text:
description: Text content to be sent to the model
type: string
type:
description: Always `text`.
enum:
- text
type: string
required: &170
- type
- text
title: Text
type: object
x-oaiExpandable: true
minItems: 1
title: Array of content parts
type: array
x-oaiExpandable: true
metadata:
description: |
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
nullable: true
type: object
x-oaiTypeLabel: map
role:
description: |
The role of the entity that is creating the message. Allowed values include:
- `user`: Indicates the message is sent by an actual user and should be used in most cases to represent user-generated messages.
- `assistant`: Indicates the message is generated by the assistant. Use this value to insert messages from the assistant into the conversation.
enum:
- user
- assistant
type: string
required: &171
- role
- content
type: object
CreateModerationRequest:
properties: &172
input:
description: The input text to classify
oneOf:
- default: ''
example: I want to kill them.
type: string
- items:
default: ''
example: I want to kill them.
type: string
type: array
model:
anyOf:
- type: string
- enum:
- text-moderation-latest
- text-moderation-stable
type: string
default: text-moderation-latest
description: |
Two content moderation models are available: `text-moderation-stable` and `text-moderation-latest`.
The default is `text-moderation-latest` which will be automatically upgraded over time. This ensures you are always using our most accurate model. If you use `text-moderation-stable`, we will provide advanced notice before updating the model. Accuracy of `text-moderation-stable` may be slightly lower than for `text-moderation-latest`.
example: text-moderation-stable
nullable: false
x-oaiTypeLabel: string
required: &173
- input
type: object
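A minimal sketch of a moderation call; each element of `input` produces one entry in `results`, whose shape is given by CreateModerationResponse below:

  use Mojo::UserAgent;

  my $ua = Mojo::UserAgent->new;
  my $tx = $ua->post(
      'https://api.openai.com/v1/moderations'
          => { Authorization => "Bearer $ENV{OPENAI_API_KEY}" }
          => json => {
          model => 'text-moderation-latest',
          input => [ 'some text to screen' ],
      },
  );
  for my $result (@{ $tx->result->json->{results} }) {
      print $result->{flagged} ? "flagged\n" : "ok\n";
  }
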
CreateModerationResponse:
description: Represents if a given text input is potentially harmful.
properties: &174
id:
description: The unique identifier for the moderation request.
type: string
model:
description: The model used to generate the moderation results.
type: string
results:
description: A list of moderation objects.
items:
properties:
categories:
description: A list of the categories, and whether they are flagged
or not.
properties:
harassment:
description: Content that expresses, incites, or promotes harassing
language towards any target.
type: boolean
harassment/threatening:
description: Harassment content that also includes violence or
serious harm towards any target.
type: boolean
hate:
description: Content that expresses, incites, or promotes hate
based on race, gender, ethnicity, religion, nationality, sexual
orientation, disability status, or caste. Hateful content aimed
at non-protected groups (e.g., chess players) is harassment.
type: boolean
hate/threatening:
description: Hateful content that also includes violence or serious
harm towards the targeted group based on race, gender, ethnicity,
religion, nationality, sexual orientation, disability status,
or caste.
type: boolean
self-harm:
description: Content that promotes, encourages, or depicts acts
of self-harm, such as suicide, cutting, and eating disorders.
type: boolean
self-harm/instructions:
description: Content that encourages performing acts of self-harm,
such as suicide, cutting, and eating disorders, or that gives
instructions or advice on how to commit such acts.
type: boolean
self-harm/intent:
description: Content where the speaker expresses that they are
engaging or intend to engage in acts of self-harm, such as suicide,
cutting, and eating disorders.
type: boolean
sexual:
description: Content meant to arouse sexual excitement, such as
the description of sexual activity, or that promotes sexual
services (excluding sex education and wellness).
type: boolean
sexual/minors:
description: Sexual content that includes an individual who is
under 18 years old.
type: boolean
violence:
description: Content that depicts death, violence, or physical
injury.
type: boolean
violence/graphic:
description: Content that depicts death, violence, or physical
injury in graphic detail.
type: boolean
required:
- hate
- hate/threatening
- harassment
- harassment/threatening
- self-harm
- self-harm/intent
- self-harm/instructions
- sexual
- sexual/minors
- violence
- violence/graphic
type: object
category_scores:
description: A list of the categories along with their scores as predicted
by the model.
properties:
harassment:
description: The score for the category 'harassment'.
type: number
harassment/threatening:
description: The score for the category 'harassment/threatening'.
type: number
hate:
description: The score for the category 'hate'.
type: number
hate/threatening:
description: The score for the category 'hate/threatening'.
type: number
self-harm:
description: The score for the category 'self-harm'.
type: number
self-harm/instructions:
description: The score for the category 'self-harm/instructions'.
type: number
self-harm/intent:
description: The score for the category 'self-harm/intent'.
type: number
sexual:
description: The score for the category 'sexual'.
type: number
sexual/minors:
description: The score for the category 'sexual/minors'.
type: number
violence:
description: The score for the category 'violence'.
type: number
violence/graphic:
description: The score for the category 'violence/graphic'.
type: number
required:
- hate
- hate/threatening
- harassment
- harassment/threatening
- self-harm
- self-harm/intent
- self-harm/instructions
- sexual
- sexual/minors
- violence
- violence/graphic
type: object
flagged:
description: Whether any of the below categories are flagged.
type: boolean
required:
- flagged
- categories
- category_scores
type: object
type: array
required: &175
- id
- model
- results
type: object
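# Illustrative response shape (not part of the schema). The `id` value is
# hypothetical, and only the `violence` keys are shown; a real response
# includes every category listed above in both maps.
#
#   id: modr-abc123
#   model: text-moderation-latest
#   results:
#     - flagged: true
#       categories:
#         violence: true
#       category_scores:
#         violence: 0.97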
CreateRunRequest:
additionalProperties: false
properties: &176
additional_instructions:
description: Appends additional instructions at the end of the instructions
for the run. This is useful for modifying the behavior on a per-run basis
without overriding other instructions.
nullable: true
type: string
additional_messages:
description: Adds additional messages to the thread before creating the
run.
items:
additionalProperties: false
properties: *168
required: *171
type: object
nullable: true
type: array
assistant_id:
description: The ID of the [assistant](/docs/api-reference/assistants) to
use to execute this run.
type: string
instructions:
description: Overrides the [instructions](/docs/api-reference/assistants/createAssistant)
of the assistant. This is useful for modifying the behavior on a per-run
basis.
nullable: true
type: string
max_completion_tokens:
description: |
The maximum number of completion tokens that may be used over the course of the run. The run will make a best effort to use only the number of completion tokens specified, across multiple turns of the run. If the run exceeds the number of completion tokens specified, the run will end with status `incomplete`. See `incomplete_details` for more info.
minimum: 256
nullable: true
type: integer
max_prompt_tokens:
description: |
The maximum number of prompt tokens that may be used over the course of the run. The run will make a best effort to use only the number of prompt tokens specified, across multiple turns of the run. If the run exceeds the number of prompt tokens specified, the run will end with status `incomplete`. See `incomplete_details` for more info.
minimum: 256
nullable: true
type: integer
metadata:
description: |
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
nullable: true
type: object
x-oaiTypeLabel: map
model:
anyOf:
- type: string
- enum:
- gpt-4o
- gpt-4o-2024-05-13
- gpt-4-turbo
- gpt-4-turbo-2024-04-09
- gpt-4-0125-preview
- gpt-4-turbo-preview
- gpt-4-1106-preview
- gpt-4-vision-preview
- gpt-4
- gpt-4-0314
- gpt-4-0613
- gpt-4-32k
- gpt-4-32k-0314
- gpt-4-32k-0613
- gpt-3.5-turbo
- gpt-3.5-turbo-16k
- gpt-3.5-turbo-0613
- gpt-3.5-turbo-1106
- gpt-3.5-turbo-0125
- gpt-3.5-turbo-16k-0613
type: string
description: The ID of the [Model](/docs/api-reference/models) to be used
to execute this run. If a value is provided here, it will override the
model associated with the assistant. If not, the model associated with
the assistant will be used.
example: gpt-4-turbo
nullable: true
x-oaiTypeLabel: string
response_format:
description: |
Specifies the format that the model must output. Compatible with [GPT-4o](/docs/models/gpt-4o), [GPT-4 Turbo](/docs/models/gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.
**Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
nullable: true
oneOf: *2
x-oaiExpandable: true
stream:
description: |
If `true`, returns a stream of events that happen during the Run as server-sent events, terminating when the Run enters a terminal state with a `data: [DONE]` message.
nullable: true
type: boolean
temperature:
default: 1
description: |
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
example: 1
maximum: 2
minimum: 0
nullable: true
type: number
tool_choice:
description: |
Controls which (if any) tool is called by the model.
`none` means the model will not call any tools and instead generates a message.
`auto` is the default value and means the model can pick between generating a message or calling one or more tools.
`required` means the model must call one or more tools before responding to the user.
Specifying a particular tool like `{"type": "file_search"}` or `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that tool.
nullable: true
oneOf: *20
x-oaiExpandable: true
tools:
description: Override the tools the assistant can use for this run. This
is useful for modifying the behavior on a per-run basis.
items:
oneOf:
- properties: *4
required: *5
title: Code interpreter tool
type: object
- properties: *6
required: *7
title: FileSearch tool
type: object
- properties: *8
required: *11
title: Function tool
type: object
x-oaiExpandable: true
maxItems: 20
nullable: true
type: array
top_p:
default: 1
description: |
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
example: 1
maximum: 1
minimum: 0
nullable: true
type: number
truncation_strategy:
description: Controls for how a thread will be truncated prior to the run.
Use this to control the initial context window of the run.
nullable: true
properties: *23
required: *24
title: Thread Truncation Controls
type: object
required: &177
- thread_id
- assistant_id
type: object
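# Illustrative example (not part of the schema): a minimal body for creating a
# run on an existing thread. The assistant ID and instruction text are
# hypothetical.
#
#   assistant_id: asst_abc123
#   additional_instructions: Keep answers under three sentences.
#   stream: true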
CreateSpeechRequest:
additionalProperties: false
properties: &178
input:
description: The text to generate audio for. The maximum length is 4096
characters.
maxLength: 4096
type: string
model:
anyOf:
- type: string
- enum:
- tts-1
- tts-1-hd
type: string
description: |
One of the available [TTS models](/docs/models/tts): `tts-1` or `tts-1-hd`
x-oaiTypeLabel: string
response_format:
default: mp3
description: The format of the generated audio. Supported formats are `mp3`, `opus`,
`aac`, `flac`, `wav`, and `pcm`.
enum:
- mp3
- opus
- aac
- flac
- wav
- pcm
type: string
speed:
default: 1.0
description: The speed of the generated audio. Select a value from `0.25`
to `4.0`. `1.0` is the default.
maximum: 4.0
minimum: 0.25
type: number
voice:
description: The voice to use when generating the audio. Supported voices
are `alloy`, `echo`, `fable`, `onyx`, `nova`, and `shimmer`. Previews
of the voices are available in the [Text to speech guide](/docs/guides/text-to-speech/voice-options).
enum:
- alloy
- echo
- fable
- onyx
- nova
- shimmer
type: string
required: &179
- model
- input
- voice
type: object
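# Illustrative example (not part of the schema): a text-to-speech request body
# built from the enumerated values and defaults above.
#
#   model: tts-1
#   input: The quick brown fox jumped over the lazy dog.
#   voice: alloy
#   response_format: mp3
#   speed: 1.0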
CreateThreadAndRunRequest:
additionalProperties: false
properties: &180
assistant_id:
description: The ID of the [assistant](/docs/api-reference/assistants) to
use to execute this run.
type: string
instructions:
description: Override the default system message of the assistant. This
is useful for modifying the behavior on a per-run basis.
nullable: true
type: string
max_completion_tokens:
description: |
The maximum number of completion tokens that may be used over the course of the run. The run will make a best effort to use only the number of completion tokens specified, across multiple turns of the run. If the run exceeds the number of completion tokens specified, the run will end with status `incomplete`. See `incomplete_details` for more info.
minimum: 256
nullable: true
type: integer
max_prompt_tokens:
description: |
The maximum number of prompt tokens that may be used over the course of the run. The run will make a best effort to use only the number of prompt tokens specified, across multiple turns of the run. If the run exceeds the number of prompt tokens specified, the run will end with status `incomplete`. See `incomplete_details` for more info.
minimum: 256
nullable: true
type: integer
metadata:
description: |
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
nullable: true
type: object
x-oaiTypeLabel: map
model:
anyOf:
- type: string
- enum:
- gpt-4o
- gpt-4o-2024-05-13
- gpt-4-turbo
- gpt-4-turbo-2024-04-09
- gpt-4-0125-preview
- gpt-4-turbo-preview
- gpt-4-1106-preview
- gpt-4-vision-preview
- gpt-4
- gpt-4-0314
- gpt-4-0613
- gpt-4-32k
- gpt-4-32k-0314
- gpt-4-32k-0613
- gpt-3.5-turbo
- gpt-3.5-turbo-16k
- gpt-3.5-turbo-0613
- gpt-3.5-turbo-1106
- gpt-3.5-turbo-0125
- gpt-3.5-turbo-16k-0613
type: string
description: The ID of the [Model](/docs/api-reference/models) to be used
to execute this run. If a value is provided here, it will override the
model associated with the assistant. If not, the model associated with
the assistant will be used.
example: gpt-4-turbo
nullable: true
x-oaiTypeLabel: string
response_format:
description: |
Specifies the format that the model must output. Compatible with [GPT-4o](/docs/models/gpt-4o), [GPT-4 Turbo](/docs/models/gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.
**Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
nullable: true
oneOf: *2
x-oaiExpandable: true
stream:
description: |
If `true`, returns a stream of events that happen during the Run as server-sent events, terminating when the Run enters a terminal state with a `data: [DONE]` message.
nullable: true
type: boolean
temperature:
default: 1
description: |
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
example: 1
maximum: 2
minimum: 0
nullable: true
type: number
thread:
additionalProperties: false
description: If no thread is provided, an empty thread will be created.
properties: &181
messages:
description: A list of [messages](/docs/api-reference/messages) to start
the thread with.
items:
additionalProperties: false
properties: *168
required: *171
type: object
type: array
metadata:
description: |
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
nullable: true
type: object
x-oaiTypeLabel: map
tool_resources:
description: |
A set of resources that are made available to the assistant's tools in this thread. The resources are specific to the type of tool. For example, the `code_interpreter` tool requires a list of file IDs, while the `file_search` tool requires a list of vector store IDs.
nullable: true
properties:
code_interpreter:
properties:
file_ids:
default: []
description: |
A list of [file](/docs/api-reference/files) IDs made available to the `code_interpreter` tool. There can be a maximum of 20 files associated with the tool.
items:
type: string
maxItems: 20
type: array
type: object
file_search:
oneOf:
- required:
- vector_store_ids
- required:
- vector_stores
properties:
vector_store_ids:
description: |
The [vector store](/docs/api-reference/vector-stores/object) attached to this thread. There can be a maximum of 1 vector store attached to the thread.
items:
type: string
maxItems: 1
type: array
vector_stores:
description: |
A helper to create a [vector store](/docs/api-reference/vector-stores/object) with file_ids and attach it to this thread. There can be a maximum of 1 vector store attached to the thread.
items:
properties:
file_ids:
description: |
A list of [file](/docs/api-reference/files) IDs to add to the vector store. There can be a maximum of 10000 files in a vector store.
items:
type: string
maxItems: 10000
type: array
metadata:
description: |
Set of 16 key-value pairs that can be attached to a vector store. This can be useful for storing additional information about the vector store in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
type: object
x-oaiTypeLabel: map
type: object
maxItems: 1
type: array
type: object
type: object
type: object
tool_choice:
description: |
Controls which (if any) tool is called by the model.
`none` means the model will not call any tools and instead generates a message.
`auto` is the default value and means the model can pick between generating a message or calling one or more tools.
`required` means the model must call one or more tools before responding to the user.
Specifying a particular tool like `{"type": "file_search"}` or `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that tool.
nullable: true
oneOf: *20
x-oaiExpandable: true
tool_resources:
description: |
A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the `code_interpreter` tool requires a list of file IDs, while the `file_search` tool requires a list of vector store IDs.
nullable: true
properties:
code_interpreter:
properties:
file_ids:
default: []
description: |
A list of [file](/docs/api-reference/files) IDs made available to the `code_interpreter` tool. There can be a maximum of 20 files associated with the tool.
items:
type: string
maxItems: 20
type: array
type: object
file_search:
properties:
vector_store_ids:
description: |
The ID of the [vector store](/docs/api-reference/vector-stores/object) attached to this assistant. There can be a maximum of 1 vector store attached to the assistant.
items:
type: string
maxItems: 1
type: array
type: object
type: object
tools:
description: Override the tools the assistant can use for this run. This
is useful for modifying the behavior on a per-run basis.
items:
oneOf:
- properties: *4
required: *5
title: Code interpreter tool
type: object
- properties: *6
required: *7
title: FileSearch tool
type: object
- properties: *8
required: *11
title: Function tool
type: object
maxItems: 20
nullable: true
type: array
top_p:
default: 1
description: |
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
example: 1
maximum: 1
minimum: 0
nullable: true
type: number
truncation_strategy:
description: Controls for how a thread will be truncated prior to the run.
Use this to control the initial context window of the run.
nullable: true
properties: *23
required: *24
title: Thread Truncation Controls
type: object
required: &182
- thread_id
- assistant_id
type: object
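# Illustrative example (not part of the schema): creating a thread and running
# it in a single request. The assistant ID and message text are hypothetical.
#
#   assistant_id: asst_abc123
#   thread:
#     messages:
#       - role: user
#         content: Explain deep learning to a 5 year old.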
CreateThreadRequest:
additionalProperties: false
properties: *181
type: object
CreateTranscriptionRequest:
additionalProperties: false
properties: &183
file:
description: |
The audio file object (not file name) to transcribe, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
format: binary
type: string
x-oaiTypeLabel: file
language:
description: |
The language of the input audio. Supplying the input language in [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) format will improve accuracy and latency.
type: string
model:
anyOf:
- type: string
- enum:
- whisper-1
type: string
description: |
ID of the model to use. Only `whisper-1` (which is powered by our open source Whisper V2 model) is currently available.
example: whisper-1
x-oaiTypeLabel: string
prompt:
description: |
An optional text to guide the model's style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should match the audio language.
type: string
response_format:
default: json
description: |
The format of the transcript output, in one of these options: `json`, `text`, `srt`, `verbose_json`, or `vtt`.
enum:
- json
- text
- srt
- verbose_json
- vtt
type: string
temperature:
default: 0
description: |
The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit.
type: number
timestamp_granularities[]:
default:
- segment
description: |
The timestamp granularities to populate for this transcription. `response_format` must be set to `verbose_json` to use timestamp granularities. Either or both of these options are supported: `word` or `segment`. Note: There is no additional latency for segment timestamps, but generating word timestamps incurs additional latency.
items:
enum:
- word
- segment
type: string
type: array
required: &184
- file
- model
type: object
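# Illustrative example (not part of the schema): the fields of a transcription
# request, which is typically sent as multipart form data. The placeholder for
# `file` stands in for the binary audio content.
#
#   file: <binary audio data>
#   model: whisper-1
#   response_format: verbose_json
#   timestamp_granularities[]:
#     - segment
#     - word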
CreateTranscriptionResponseJson:
description: Represents a transcription response returned by the model, based on
the provided input.
properties: &185
text:
description: The transcribed text.
type: string
required: &186
- text
type: object
CreateTranscriptionResponseVerboseJson:
description: Represents a verbose JSON transcription response returned by the model,
based on the provided input.
properties: &187
duration:
description: The duration of the input audio.
type: string
language:
description: The language of the input audio.
type: string
segments:
description: Segments of the transcribed text and their corresponding details.
items:
properties: &188
avg_logprob:
description: Average logprob of the segment. If the value is lower
than -1, consider the logprobs failed.
format: float
type: number
compression_ratio:
description: Compression ratio of the segment. If the value is greater
than 2.4, consider the compression failed.
format: float
type: number
end:
description: End time of the segment in seconds.
format: float
type: number
id:
description: Unique identifier of the segment.
type: integer
no_speech_prob:
description: Probability of no speech in the segment. If the value
is higher than 1.0 and the `avg_logprob` is below -1, consider this
segment silent.
format: float
type: number
seek:
description: Seek offset of the segment.
type: integer
start:
description: Start time of the segment in seconds.
format: float
type: number
temperature:
description: Temperature parameter used for generating the segment.
format: float
type: number
text:
description: Text content of the segment.
type: string
tokens:
description: Array of token IDs for the text content.
items:
type: integer
type: array
required: &189
- id
- seek
- start
- end
- text
- tokens
- temperature
- avg_logprob
- compression_ratio
- no_speech_prob
type: object
type: array
text:
description: The transcribed text.
type: string
words:
description: Extracted words and their corresponding timestamps.
items:
properties: &190
end:
description: End time of the word in seconds.
format: float
type: number
start:
description: Start time of the word in seconds.
format: float
type: number
word:
description: The text content of the word.
type: string
required: &191
- word
- start
- end
type: object
type: array
required: &192
- language
- duration
- text
type: object
CreateTranslationRequest:
additionalProperties: false
properties: &193
file:
description: |
The audio file object (not file name) to translate, in one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
format: binary
type: string
x-oaiTypeLabel: file
model:
anyOf:
- type: string
- enum:
- whisper-1
type: string
description: |
ID of the model to use. Only `whisper-1` (which is powered by our open source Whisper V2 model) is currently available.
example: whisper-1
x-oaiTypeLabel: string
prompt:
description: |
An optional text to guide the model's style or continue a previous audio segment. The [prompt](/docs/guides/speech-to-text/prompting) should be in English.
type: string
response_format:
default: json
description: |
The format of the transcript output, in one of these options: `json`, `text`, `srt`, `verbose_json`, or `vtt`.
type: string
temperature:
default: 0
description: |
The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use [log probability](https://en.wikipedia.org/wiki/Log_probability) to automatically increase the temperature until certain thresholds are hit.
type: number
required: &194
- file
- model
type: object
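# Illustrative example (not part of the schema): a translation request mirrors
# the transcription request above, minus the language and timestamp options.
#
#   file: <binary audio data>
#   model: whisper-1
#   response_format: json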
CreateTranslationResponseJson:
properties: &195
text:
type: string
required: &196
- text
type: object
CreateTranslationResponseVerboseJson:
properties: &197
duration:
description: The duration of the input audio.
type: string
language:
description: The language of the output translation (always `english`).
type: string
segments:
description: Segments of the translated text and their corresponding details.
items:
properties: *188
required: *189
type: object
type: array
text:
description: The translated text.
type: string
required: &198
- language
- duration
- text
type: object
CreateVectorStoreFileBatchRequest:
additionalProperties: false
properties: &199
file_ids:
description: A list of [File](/docs/api-reference/files) IDs that the vector
store should use. Useful for tools like `file_search` that can access
files.
items:
type: string
maxItems: 500
minItems: 1
type: array
required: &200
- file_ids
type: object
CreateVectorStoreFileRequest:
additionalProperties: false
properties: &201
file_id:
description: A [File](/docs/api-reference/files) ID that the vector store
should use. Useful for tools like `file_search` that can access files.
type: string
required: &202
- file_id
type: object
CreateVectorStoreRequest:
additionalProperties: false
properties: &203
expires_after:
description: The expiration policy for a vector store.
properties: &204
anchor:
description: 'Anchor timestamp after which the expiration policy applies.
Supported anchors: `last_active_at`.'
enum:
- last_active_at
type: string
days:
description: The number of days after the anchor time that the vector
store will expire.
maximum: 365
minimum: 1
type: integer
required: &205
- anchor
- days
title: Vector store expiration policy
type: object
file_ids:
description: A list of [File](/docs/api-reference/files) IDs that the vector
store should use. Useful for tools like `file_search` that can access
files.
items:
type: string
maxItems: 500
type: array
metadata:
description: |
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
nullable: true
type: object
x-oaiTypeLabel: map
name:
description: The name of the vector store.
type: string
type: object
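# Illustrative example (not part of the schema): creating a named vector store
# from existing files with an expiration policy. The file ID is hypothetical.
#
#   name: Support FAQ
#   file_ids:
#     - file-abc123
#   expires_after:
#     anchor: last_active_at
#     days: 7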
DeleteAssistantResponse:
properties: &206
deleted:
type: boolean
id:
type: string
object:
enum:
- assistant.deleted
type: string
required: &207
- id
- object
- deleted
type: object
DeleteFileResponse:
properties: &208
deleted:
type: boolean
id:
type: string
object:
enum:
- file
type: string
required: &209
- id
- object
- deleted
type: object
DeleteMessageResponse:
properties: &210
deleted:
type: boolean
id:
type: string
object:
enum:
- thread.message.deleted
type: string
required: &211
- id
- object
- deleted
type: object
DeleteModelResponse:
properties: &212
deleted:
type: boolean
id:
type: string
object:
type: string
required: &213
- id
- object
- deleted
type: object
DeleteThreadResponse:
properties: &214
deleted:
type: boolean
id:
type: string
object:
enum:
- thread.deleted
type: string
required: &215
- id
- object
- deleted
type: object
DeleteVectorStoreFileResponse:
properties: &216
deleted:
type: boolean
id:
type: string
object:
enum:
- vector_store.file.deleted
type: string
required: &217
- id
- object
- deleted
type: object
DeleteVectorStoreResponse:
properties: &218
deleted:
type: boolean
id:
type: string
object:
enum:
- vector_store.deleted
type: string
required: &219
- id
- object
- deleted
type: object
DoneEvent:
description: Occurs when a stream ends.
properties: *92
required: *93
type: object
Embedding:
description: |
Represents an embedding vector returned by embedding endpoint.
properties: *152
required: *153
type: object
Error:
properties: *89
required: *90
type: object
ErrorEvent:
description: Occurs when an [error](/docs/guides/error-codes/api-errors) occurs.
This can happen due to an internal server error or a timeout.
properties: *88
required: *91
type: object
ErrorResponse:
properties:
error:
properties: *89
required: *90
type: object
required:
- error
type: object
FineTuningIntegration:
properties: &220
type:
description: The type of the integration being enabled for the fine-tuning
job
enum:
- wandb
type: string
wandb:
description: |
The settings for your integration with Weights and Biases. This payload specifies the project that
metrics will be sent to. Optionally, you can set an explicit display name for your run, add tags
to your run, and set a default entity (team, username, etc) to be associated with your run.
properties:
entity:
description: |
The entity to use for the run. This allows you to set the team or username of the WandB user that you would
like associated with the run. If not set, the default entity for the registered WandB API key is used.
nullable: true
type: string
name:
description: |
A display name to set for the run. If not set, we will use the Job ID as the name.
nullable: true
type: string
project:
description: |
The name of the project that the new run will be created under.
example: my-wandb-project
type: string
tags:
description: |
A list of tags to be attached to the newly created run. These tags are passed through directly to WandB. Some
default tags are generated by OpenAI: "openai/finetune", "openai/{base-model}", "openai/{ftjob-abcdef}".
items:
example: custom-tag
type: string
type: array
required:
- project
type: object
required: &221
- type
- wandb
title: Fine-Tuning Job Integration
type: object
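# Illustrative example (not part of the schema): enabling a Weights and Biases
# integration for a fine-tuning job, reusing the example project and tag
# values shown above.
#
#   type: wandb
#   wandb:
#     project: my-wandb-project
#     tags:
#       - custom-tag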
FineTuningJob:
description: |
The `fine_tuning.job` object represents a fine-tuning job that has been created through the API.
properties: &222
created_at:
description: The Unix timestamp (in seconds) for when the fine-tuning job
was created.
type: integer
error:
description: For fine-tuning jobs that have `failed`, this will contain
more information on the cause of the failure.
nullable: true
properties:
code:
description: A machine-readable error code.
type: string
message:
description: A human-readable error message.
type: string
param:
description: The parameter that was invalid, usually `training_file`
or `validation_file`. This field will be null if the failure was not
parameter-specific.
nullable: true
type: string
required:
- code
- message
- param
type: object
estimated_finish:
description: The Unix timestamp (in seconds) for when the fine-tuning job
is estimated to finish. The value will be null if the fine-tuning job
is not running.
nullable: true
type: integer
fine_tuned_model:
description: The name of the fine-tuned model that is being created. The
value will be null if the fine-tuning job is still running.
nullable: true
type: string
finished_at:
description: The Unix timestamp (in seconds) for when the fine-tuning job
was finished. The value will be null if the fine-tuning job is still running.
nullable: true
type: integer
hyperparameters:
description: The hyperparameters used for the fine-tuning job. See the [fine-tuning
guide](/docs/guides/fine-tuning) for more details.
properties:
n_epochs:
default: auto
description: |-
The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset.
"auto" decides the optimal number of epochs based on the size of the dataset. If setting the number manually, we support any number between 1 and 50 epochs.
oneOf:
- enum:
- auto
type: string
- maximum: 50
minimum: 1
type: integer
required:
- n_epochs
type: object
id:
description: The object identifier, which can be referenced in the API endpoints.
type: string
integrations:
description: A list of integrations to enable for this fine-tuning job.
items:
oneOf:
- properties: *220
required: *221
title: Fine-Tuning Job Integration
type: object
x-oaiExpandable: true
maxItems: 5
nullable: true
type: array
model:
description: The base model that is being fine-tuned.
type: string
object:
description: The object type, which is always "fine_tuning.job".
enum:
- fine_tuning.job
type: string
organization_id:
description: The organization that owns the fine-tuning job.
type: string
result_files:
description: The compiled results file ID(s) for the fine-tuning job. You
can retrieve the results with the [Files API](/docs/api-reference/files/retrieve-contents).
items:
example: file-abc123
type: string
type: array
seed:
description: The seed used for the fine-tuning job.
type: integer
status:
description: The current status of the fine-tuning job, which can be either
`validating_files`, `queued`, `running`, `succeeded`, `failed`, or `cancelled`.
enum:
- validating_files
- queued
- running
- succeeded
- failed
- cancelled
type: string
trained_tokens:
description: The total number of billable tokens processed by this fine-tuning
job. The value will be null if the fine-tuning job is still running.
nullable: true
type: integer
training_file:
description: The file ID used for training. You can retrieve the training
data with the [Files API](/docs/api-reference/files/retrieve-contents).
type: string
validation_file:
description: The file ID used for validation. You can retrieve the validation
results with the [Files API](/docs/api-reference/files/retrieve-contents).
nullable: true
type: string
required: &223
- created_at
- error
- finished_at
- fine_tuned_model
- hyperparameters
- id
- model
- object
- organization_id
- result_files
- status
- trained_tokens
- training_file
- validation_file
- seed
title: FineTuningJob
type: object
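# Illustrative example (not part of the schema): the two accepted shapes of the
# `hyperparameters.n_epochs` value described above.
#
#   hyperparameters:
#     n_epochs: auto    # let the service pick the number of epochs
#
#   hyperparameters:
#     n_epochs: 3       # or any integer between 1 and 50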
FineTuningJobCheckpoint:
description: |
The `fine_tuning.job.checkpoint` object represents a model checkpoint for a fine-tuning job that is ready to use.
properties: &224
created_at:
description: The Unix timestamp (in seconds) for when the checkpoint was
created.
type: integer
fine_tuned_model_checkpoint:
description: The name of the fine-tuned checkpoint model that is created.
type: string
fine_tuning_job_id:
description: The name of the fine-tuning job that this checkpoint was created
from.
type: string
id:
description: The checkpoint identifier, which can be referenced in the API
endpoints.
type: string
metrics:
description: Metrics at the step number during the fine-tuning job.
properties:
full_valid_loss:
type: number
full_valid_mean_token_accuracy:
type: number
step:
type: number
train_loss:
type: number
train_mean_token_accuracy:
type: number
valid_loss:
type: number
valid_mean_token_accuracy:
type: number
type: object
object:
description: The object type, which is always "fine_tuning.job.checkpoint".
enum:
- fine_tuning.job.checkpoint
type: string
step_number:
description: The step number that the checkpoint was created at.
type: integer
required: &225
- created_at
- fine_tuning_job_id
- fine_tuned_model_checkpoint
- id
- metrics
- object
- step_number
title: FineTuningJobCheckpoint
type: object
FineTuningJobEvent:
description: Fine-tuning job event object
properties: &226
created_at:
type: integer
id:
type: string
level:
enum:
- info
- warn
- error
type: string
message:
type: string
object:
enum:
- fine_tuning.job.event
type: string
required: &227
- id
- object
- created_at
- level
- message
type: object
FunctionObject:
properties: *9
required: *10
type: object
FunctionParameters:
additionalProperties: true
description: "The parameters the functions accepts, described as a JSON Schema
object. See the [guide](/docs/guides/function-calling) for examples, and the
[JSON Schema reference](https://json-schema.org/understanding-json-schema/)
for documentation about the format. \n\nOmitting `parameters` defines a function
with an empty parameter list."
type: object
Image:
description: Represents the URL or the content of an image generated by the
OpenAI API.
properties: &228
b64_json:
description: The base64-encoded JSON of the generated image, if `response_format`
is `b64_json`.
type: string
revised_prompt:
description: The prompt that was used to generate the image, if there was
any revision to the prompt.
type: string
url:
description: The URL of the generated image, if `response_format` is `url`
(default).
type: string
type: object
ImagesResponse:
properties: &229
created:
type: integer
data:
items:
description: Represents the URL or the content of an image generated by
the OpenAI API.
properties: *228
type: object
type: array
required: &230
- created
- data
ListAssistantsResponse:
properties: &231
data:
items:
description: Represents an `assistant` that can call the model and use
tools.
properties: *1
required: *12
title: Assistant
type: object
type: array
first_id:
example: asst_abc123
type: string
has_more:
example: false
type: boolean
last_id:
example: asst_abc456
type: string
object:
example: list
type: string
required: &232
- object
- data
- first_id
- last_id
- has_more
type: object
ListBatchesResponse:
properties: &233
data:
items:
properties: *94
required: *95
type: object
type: array
first_id:
example: batch_abc123
type: string
has_more:
type: boolean
last_id:
example: batch_abc456
type: string
object:
enum:
- list
type: string
required: &234
- object
- data
- has_more
type: object
ListFilesResponse:
properties: &235
data:
items:
description: The `File` object represents a document that has been uploaded
to OpenAI.
properties: &236
bytes:
description: The size of the file, in bytes.
type: integer
created_at:
description: The Unix timestamp (in seconds) for when the file was
created.
type: integer
filename:
description: The name of the file.
type: string
id:
description: The file identifier, which can be referenced in the API
endpoints.
type: string
object:
description: The object type, which is always `file`.
enum:
- file
type: string
purpose:
description: The intended purpose of the file. Supported values are
`assistants`, `assistants_output`, `batch`, `batch_output`, `fine-tune`,
`fine-tune-results` and `vision`.
enum:
- assistants
- assistants_output
- batch
- batch_output
- fine-tune
- fine-tune-results
- vision
type: string
status:
deprecated: true
description: Deprecated. The current status of the file, which can
be either `uploaded`, `processed`, or `error`.
enum:
- uploaded
- processed
- error
type: string
status_details:
deprecated: true
description: Deprecated. For details on why a fine-tuning training
file failed validation, see the `error` field on `fine_tuning.job`.
type: string
required: &237
- id
- object
- bytes
- created_at
- filename
- purpose
- status
title: OpenAIFile
type: array
object:
enum:
- list
type: string
required: &238
- object
- data
type: object
ListFineTuningJobCheckpointsResponse:
properties: &239
data:
items:
description: |
The `fine_tuning.job.checkpoint` object represents a model checkpoint for a fine-tuning job that is ready to use.
properties: *224
required: *225
title: FineTuningJobCheckpoint
type: object
type: array
first_id:
nullable: true
type: string
has_more:
type: boolean
last_id:
nullable: true
type: string
object:
enum:
- list
type: string
required: &240
- object
- data
- has_more
type: object
ListFineTuningJobEventsResponse:
properties: &241
data:
items:
description: Fine-tuning job event object
properties: *226
required: *227
type: object
type: array
object:
enum:
- list
type: string
required: &242
- object
- data
type: object
ListMessagesResponse:
properties: &243
data:
items:
description: Represents a message within a [thread](/docs/api-reference/threads).
properties: *64
required: *75
title: The message object
type: object
type: array
first_id:
example: msg_abc123
type: string
has_more:
example: false
type: boolean
last_id:
example: msg_abc123
type: string
object:
example: list
type: string
required: &244
- object
- data
- first_id
- last_id
- has_more
ListModelsResponse:
properties: &245
data:
items:
description: Describes an OpenAI model offering that can be used with
the API.
properties: &246
created:
description: The Unix timestamp (in seconds) when the model was created.
type: integer
id:
description: The model identifier, which can be referenced in the
API endpoints.
type: string
object:
description: The object type, which is always "model".
enum:
- model
type: string
owned_by:
description: The organization that owns the model.
type: string
required: &247
- id
- object
- created
- owned_by
title: Model
type: array
object:
enum:
- list
type: string
required: &248
- object
- data
type: object
ListPaginatedFineTuningJobsResponse:
properties: &249
data:
items:
description: |
The `fine_tuning.job` object represents a fine-tuning job that has been created through the API.
properties: *222
required: *223
title: FineTuningJob
type: object
type: array
has_more:
type: boolean
object:
enum:
- list
type: string
required: &250
- object
- data
- has_more
type: object
ListRunStepsResponse:
properties: &251
data:
items:
description: |
Represents a step in the execution of a run.
properties: *29
required: *46
title: Run steps
type: object
type: array
first_id:
example: step_abc123
type: string
has_more:
example: false
type: boolean
last_id:
example: step_abc456
type: string
object:
example: list
type: string
required: &252
- object
- data
- first_id
- last_id
- has_more
ListRunsResponse:
properties: &253
data:
items:
description: Represents an execution run on a [thread](/docs/api-reference/threads).
properties: *17
required: *27
title: A run on a thread
type: object
type: array
first_id:
example: run_abc123
type: string
has_more:
example: false
type: boolean
last_id:
example: run_abc456
type: string
object:
example: list
type: string
required: &254
- object
- data
- first_id
- last_id
- has_more
type: object
ListThreadsResponse:
properties:
data:
items:
description: Represents a thread that contains [messages](/docs/api-reference/messages).
properties: *14
required: *15
title: Thread
type: object
type: array
first_id:
example: asst_abc123
type: string
has_more:
example: false
type: boolean
last_id:
example: asst_abc456
type: string
object:
example: list
type: string
required:
- object
- data
- first_id
- last_id
- has_more
ListVectorStoreFilesResponse:
properties: &255
data:
items:
description: A list of files attached to a vector store.
properties: &256
created_at:
description: The Unix timestamp (in seconds) for when the vector store
file was created.
type: integer
id:
description: The identifier, which can be referenced in API endpoints.
type: string
last_error:
description: The last error associated with this vector store file.
Will be `null` if there are no errors.
nullable: true
properties:
code:
description: One of `internal_error`, `file_not_found`, `parsing_error`, or `unhandled_mime_type`.
enum:
- internal_error
- file_not_found
- parsing_error
- unhandled_mime_type
type: string
message:
description: A human-readable description of the error.
type: string
required:
- code
- message
type: object
object:
description: The object type, which is always `vector_store.file`.
enum:
- vector_store.file
type: string
status:
description: The status of the vector store file, which can be either
`in_progress`, `completed`, `cancelled`, or `failed`. The status
`completed` indicates that the vector store file is ready for use.
enum:
- in_progress
- completed
- cancelled
- failed
type: string
usage_bytes:
description: The total vector store usage in bytes. Note that this
may be different from the original file size.
type: integer
vector_store_id:
description: The ID of the [vector store](/docs/api-reference/vector-stores/object)
that the [File](/docs/api-reference/files) is attached to.
type: string
required: &257
- id
- object
- usage_bytes
- created_at
- vector_store_id
- status
- last_error
title: Vector store files
type: object
type: array
first_id:
example: file-abc123
type: string
has_more:
example: false
type: boolean
last_id:
example: file-abc456
type: string
object:
example: list
type: string
required: &258
- object
- data
- first_id
- last_id
- has_more
ListVectorStoresResponse:
properties: &259
data:
items:
description: A vector store is a collection of processed files that can be
used by the `file_search` tool.
properties: &260
created_at:
description: The Unix timestamp (in seconds) for when the vector store
was created.
type: integer
expires_after:
description: The expiration policy for a vector store.
properties: *204
required: *205
title: Vector store expiration policy
type: object
expires_at:
description: The Unix timestamp (in seconds) for when the vector store
will expire.
nullable: true
type: integer
file_counts:
properties:
cancelled:
description: The number of files that were cancelled.
type: integer
completed:
description: The number of files that have been successfully processed.
type: integer
failed:
description: The number of files that have failed to process.
type: integer
in_progress:
description: The number of files that are currently being processed.
type: integer
total:
description: The total number of files.
type: integer
required:
- in_progress
- completed
- failed
- cancelled
- total
type: object
id:
description: The identifier, which can be referenced in API endpoints.
type: string
last_active_at:
description: The Unix timestamp (in seconds) for when the vector store
was last active.
nullable: true
type: integer
metadata:
description: |
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
nullable: true
type: object
x-oaiTypeLabel: map
name:
description: The name of the vector store.
type: string
object:
description: The object type, which is always `vector_store`.
enum:
- vector_store
type: string
status:
description: The status of the vector store, which can be either `expired`,
`in_progress`, or `completed`. A status of `completed` indicates
that the vector store is ready for use.
enum:
- expired
- in_progress
- completed
type: string
usage_bytes:
description: The total number of bytes used by the files in the vector
store.
type: integer
required: &261
- id
- object
- usage_bytes
- created_at
- status
- last_active_at
- name
- file_counts
- metadata
title: Vector store
type: object
type: array
first_id:
example: vs_abc123
type: string
has_more:
example: false
type: boolean
last_id:
example: vs_abc456
type: string
object:
example: list
type: string
required: &262
- object
- data
- first_id
- last_id
- has_more
MessageContentImageFileObject:
description: References an image [File](/docs/api-reference/files) in the content
of a message.
properties: *65
required: *66
title: Image file
type: object
MessageContentImageUrlObject:
description: References an image URL in the content of a message.
properties: *67
required: *68
title: Image URL
type: object
MessageContentTextAnnotationsFileCitationObject:
description: A citation within the message that points to a specific quote from
a specific File associated with the assistant or the message. Generated when
the assistant uses the "file_search" tool to search files.
properties: *70
required: *71
title: File citation
type: object
MessageContentTextAnnotationsFilePathObject:
description: A URL for the file that's generated when the assistant used the
`code_interpreter` tool to generate a file.
properties: *72
required: *73
title: File path
type: object
MessageContentTextObject:
description: The text content that is part of a message.
properties: *69
required: *74
title: Text
type: object
MessageDeltaContentImageFileObject:
description: References an image [File](/docs/api-reference/files) in the content
of a message.
properties: *77
required: *78
title: Image file
type: object
MessageDeltaContentImageUrlObject:
description: References an image URL in the content of a message.
properties: *85
required: *86
title: Image URL
type: object
MessageDeltaContentTextAnnotationsFileCitationObject:
description: A citation within the message that points to a specific quote from
a specific File associated with the assistant or the message. Generated when
the assistant uses the "file_search" tool to search files.
properties: *80
required: *81
title: File citation
type: object
MessageDeltaContentTextAnnotationsFilePathObject:
description: A URL for the file that's generated when the assistant used the
`code_interpreter` tool to generate a file.
properties: *82
required: *83
title: File path
type: object
MessageDeltaContentTextObject:
description: The text content that is part of a message.
properties: *79
required: *84
title: Text
type: object
MessageDeltaObject:
description: |
Represents a message delta, i.e. any changed fields on a message during streaming.
properties: *76
required: *87
title: Message delta object
type: object
MessageObject:
description: Represents a message within a [thread](/docs/api-reference/threads).
properties: *64
required: *75
title: The message object
type: object
MessageRequestContentTextObject:
description: The text content that is part of a message.
properties: *169
required: *170
title: Text
type: object
MessageStreamEvent:
oneOf: *63
Model:
description: Describes an OpenAI model offering that can be used with the API.
properties: *246
required: *247
title: Model
ModifyAssistantRequest:
additionalProperties: false
properties: &263
description:
description: |
The description of the assistant. The maximum length is 512 characters.
maxLength: 512
nullable: true
type: string
instructions:
description: |
The system instructions that the assistant uses. The maximum length is 256,000 characters.
maxLength: 256000
nullable: true
type: string
metadata:
description: |
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
nullable: true
type: object
x-oaiTypeLabel: map
model:
anyOf:
- type: string
description: |
ID of the model to use. You can use the [List models](/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](/docs/models/overview) for descriptions of them.
name:
description: |
The name of the assistant. The maximum length is 256 characters.
maxLength: 256
nullable: true
type: string
response_format:
description: |
Specifies the format that the model must output. Compatible with [GPT-4o](/docs/models/gpt-4o), [GPT-4 Turbo](/docs/models/gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
Setting to `{ "type": "json_object" }` enables JSON mode, which guarantees the message the model generates is valid JSON.
**Important:** when using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
nullable: true
oneOf: *2
x-oaiExpandable: true
temperature:
default: 1
description: |
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
example: 1
maximum: 2
minimum: 0
nullable: true
type: number
tool_resources:
description: |
A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the `code_interpreter` tool requires a list of file IDs, while the `file_search` tool requires a list of vector store IDs.
nullable: true
properties:
code_interpreter:
properties:
file_ids:
default: []
description: |
Overrides the list of [file](/docs/api-reference/files) IDs made available to the `code_interpreter` tool. There can be a maximum of 20 files associated with the tool.
items:
type: string
maxItems: 20
type: array
type: object
file_search:
properties:
vector_store_ids:
description: |
Overrides the [vector store](/docs/api-reference/vector-stores/object) attached to this assistant. There can be a maximum of 1 vector store attached to the assistant.
items:
type: string
maxItems: 1
type: array
type: object
type: object
tools:
default: []
description: |
A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types `code_interpreter`, `file_search`, or `function`.
items:
oneOf:
- properties: *4
required: *5
title: Code interpreter tool
type: object
- properties: *6
required: *7
title: FileSearch tool
type: object
- properties: *8
required: *11
title: Function tool
type: object
x-oaiExpandable: true
maxItems: 128
type: array
top_p:
default: 1
description: |
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
example: 1
maximum: 1
minimum: 0
nullable: true
type: number
type: object
ModifyMessageRequest:
additionalProperties: false
properties: &264
metadata:
description: |
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
nullable: true
type: object
x-oaiTypeLabel: map
type: object
ModifyRunRequest:
additionalProperties: false
properties: &265
metadata:
description: |
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
nullable: true
type: object
x-oaiTypeLabel: map
type: object
ModifyThreadRequest:
additionalProperties: false
properties: &266
metadata:
description: |
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
nullable: true
type: object
x-oaiTypeLabel: map
tool_resources:
description: |
A set of resources that are made available to the assistant's tools in this thread. The resources are specific to the type of tool. For example, the `code_interpreter` tool requires a list of file IDs, while the `file_search` tool requires a list of vector store IDs.
nullable: true
properties:
code_interpreter:
properties:
file_ids:
default: []
description: |
A list of [file](/docs/api-reference/files) IDs made available to the `code_interpreter` tool. There can be a maximum of 20 files associated with the tool.
items:
type: string
maxItems: 20
type: array
type: object
file_search:
properties:
vector_store_ids:
description: |
The [vector store](/docs/api-reference/vector-stores/object) attached to this thread. There can be a maximum of 1 vector store attached to the thread.
items:
type: string
maxItems: 1
type: array
type: object
type: object
type: object
OpenAIFile:
description: The `File` object represents a document that has been uploaded
to OpenAI.
properties: *236
required: *237
title: OpenAIFile
RunCompletionUsage:
description: Usage statistics related to the run. This value will be `null`
if the run is not in a terminal state (i.e. `in_progress`, `queued`, etc.).
nullable: true
properties: *25
required: *26
type: object
RunObject:
description: Represents an execution run on a [thread](/docs/api-reference/threads).
properties: *17
required: *27
title: A run on a thread
type: object
RunStepCompletionUsage:
description: Usage statistics related to the run step. This value will be `null`
while the run step's status is `in_progress`.
nullable: true
properties: *44
required: *45
type: object
RunStepDeltaObject:
description: |
Represents a run step delta, i.e. any changed fields on a run step during streaming.
properties: *47
required: *62
title: Run step delta object
type: object
RunStepDeltaStepDetailsMessageCreationObject:
description: Details of the message creation by the run step.
properties: *48
required: *49
title: Message creation
type: object
RunStepDeltaStepDetailsToolCallsCodeObject:
description: Details of the Code Interpreter tool call the run step was involved
in.
properties: *51
required: *56
title: Code interpreter tool call
type: object
RunStepDeltaStepDetailsToolCallsCodeOutputImageObject:
properties: *54
required: *55
title: Code interpreter image output
type: object
RunStepDeltaStepDetailsToolCallsCodeOutputLogsObject:
description: Text output from the Code Interpreter tool call as part of a run
step.
properties: *52
required: *53
title: Code interpreter log output
type: object
RunStepDeltaStepDetailsToolCallsFileSearchObject:
properties: *57
required: *58
title: File search tool call
type: object
RunStepDeltaStepDetailsToolCallsFunctionObject:
properties: *59
required: *60
title: Function tool call
type: object
RunStepDeltaStepDetailsToolCallsObject:
description: Details of the tool call.
properties: *50
required: *61
title: Tool calls
type: object
RunStepDetailsMessageCreationObject:
description: Details of the message creation by the run step.
properties: *30
required: *31
title: Message creation
type: object
RunStepDetailsToolCallsCodeObject:
description: Details of the Code Interpreter tool call the run step was involved
in.
properties: *33
required: *38
title: Code Interpreter tool call
type: object
RunStepDetailsToolCallsCodeOutputImageObject:
properties: *36
required: *37
title: Code Interpreter image output
type: object
RunStepDetailsToolCallsCodeOutputLogsObject:
description: Text output from the Code Interpreter tool call as part of a run
step.
properties: *34
required: *35
title: Code Interpreter log output
type: object
RunStepDetailsToolCallsFileSearchObject:
properties: *39
required: *40
title: File search tool call
type: object
RunStepDetailsToolCallsFunctionObject:
properties: *41
required: *42
title: Function tool call
type: object
RunStepDetailsToolCallsObject:
description: Details of the tool call.
properties: *32
required: *43
title: Tool calls
type: object
RunStepObject:
description: |
Represents a step in execution of a run.
properties: *29
required: *46
title: Run steps
type: object
RunStepStreamEvent:
oneOf: *28
RunStreamEvent:
oneOf: *16
RunToolCallObject:
description: Tool call objects
properties: *18
required: *19
type: object
SubmitToolOutputsRunRequest:
additionalProperties: false
properties: &267
stream:
description: |
If `true`, returns a stream of events that happen during the Run as server-sent events, terminating when the Run enters a terminal state with a `data: [DONE]` message.
nullable: true
type: boolean
tool_outputs:
description: A list of tools for which the outputs are being submitted.
items:
properties:
output:
description: The output of the tool call to be submitted to continue
the run.
type: string
tool_call_id:
description: The ID of the tool call in the `required_action` object
within the run object the output is being submitted for.
type: string
type: object
type: array
required: &268
- tool_outputs
type: object
ThreadObject:
description: Represents a thread that contains [messages](/docs/api-reference/messages).
properties: *14
required: *15
title: Thread
type: object
ThreadStreamEvent:
oneOf: *13
TranscriptionSegment:
properties: *188
required: *189
type: object
TranscriptionWord:
properties: *190
required: *191
type: object
TruncationObject:
description: Controls for how a thread will be truncated prior to the run. Use
this to control the initial context window of the run.
properties: *23
required: *24
title: Thread Truncation Controls
type: object
UpdateVectorStoreRequest:
additionalProperties: false
properties: &269
expires_after:
description: The expiration policy for a vector store.
nullable: true
properties: *204
required: *205
title: Vector store expiration policy
type: object
metadata:
description: |
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
nullable: true
type: object
x-oaiTypeLabel: map
name:
description: The name of the vector store.
nullable: true
type: string
type: object
VectorStoreExpirationAfter:
description: The expiration policy for a vector store.
properties: *204
required: *205
title: Vector store expiration policy
type: object
VectorStoreFileBatchObject:
description: A batch of files attached to a vector store.
properties: &270
created_at:
description: The Unix timestamp (in seconds) for when the vector store files
batch was created.
type: integer
file_counts:
properties:
cancelled:
description: The number of files that were cancelled.
type: integer
completed:
description: The number of files that have been processed.
type: integer
failed:
description: The number of files that have failed to process.
type: integer
in_progress:
description: The number of files that are currently being processed.
type: integer
total:
description: The total number of files.
type: integer
required:
- in_progress
- completed
- cancelled
- failed
- total
type: object
id:
description: The identifier, which can be referenced in API endpoints.
type: string
object:
description: The object type, which is always `vector_store.files_batch`.
enum:
- vector_store.files_batch
type: string
status:
description: The status of the vector store files batch, which can be either
`in_progress`, `completed`, `cancelled` or `failed`.
enum:
- in_progress
- completed
- cancelled
- failed
type: string
vector_store_id:
description: The ID of the [vector store](/docs/api-reference/vector-stores/object)
that the [File](/docs/api-reference/files) is attached to.
type: string
required: &271
- id
- object
- created_at
- vector_store_id
- status
- file_counts
title: Vector store file batch
type: object
VectorStoreFileObject:
description: A list of files attached to a vector store.
properties: *256
required: *257
title: Vector store files
type: object
VectorStoreObject:
description: A vector store is a collection of processed files that can be used by
the `file_search` tool.
properties: *260
required: *261
title: Vector store
type: object
securitySchemes:
ApiKeyAuth:
scheme: bearer
type: http
info:
contact:
name: OpenAI Support
url: https://help.openai.com/
description: The OpenAI REST API. Please see https://platform.openai.com/docs/api-reference
for more details.
license:
name: MIT
url: https://github.com/openai/openai-openapi/blob/master/LICENSE
termsOfService: https://openai.com/policies/terms-of-use
title: OpenAI API
version: 2.0.0
openapi: 3.0.0
paths:
/assistants:
get:
operationId: listAssistants
parameters:
- description: |
A limit on the number of objects to be returned. Limit can range
between 1 and 100, and the default is 20.
in: query
name: limit
required: false
schema:
default: 20
type: integer
- description: |
Sort order by the C<created_at> timestamp of the objects. C<asc> for
ascending order and C<desc> for descending order.
in: query
name: order
schema:
default: desc
enum:
- asc
- desc
type: string
- description: |
A cursor for use in pagination. C<after> is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include after=obj_foo in order to fetch the next page of the list.
in: query
name: after
schema:
type: string
- description: |
A cursor for use in pagination. C<before> is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include before=obj_foo in order to fetch the previous page of the list.
in: query
name: before
schema:
type: string
responses:
'200':
content:
application/json:
schema:
properties: *231
required: *232
type: object
description: OK
summary: Returns a list of assistants.
tags:
- Assistants
post:
operationId: createAssistant
requestBody:
content:
application/json:
schema:
additionalProperties: false
properties: *137
required: *138
type: object
required: true
responses:
'200':
content:
application/json:
schema:
description: Represents an `assistant` that can call the model and
use tools.
properties: *1
required: *12
title: Assistant
type: object
description: OK
summary: Create an assistant with a model and instructions.
tags:
- Assistants
/assistants/{assistant_id}:
delete:
operationId: deleteAssistant
parameters:
- description: |
The ID of the assistant to delete.
in: path
name: assistant_id
required: true
schema:
type: string
responses:
'200':
content:
application/json:
schema:
properties: *206
required: *207
type: object
description: OK
summary: Delete an assistant.
tags:
- Assistants
get:
operationId: getAssistant
parameters:
- description: |
The ID of the assistant to retrieve.
in: path
name: assistant_id
required: true
schema:
type: string
responses:
'200':
content:
application/json:
schema:
description: Represents an `assistant` that can call the model and
use tools.
properties: *1
required: *12
title: Assistant
type: object
description: OK
summary: Retrieves an assistant.
tags:
- Assistants
post:
operationId: modifyAssistant
parameters:
- description: |
The ID of the assistant to modify.
in: path
name: assistant_id
required: true
schema:
type: string
requestBody:
content:
application/json:
schema:
additionalProperties: false
properties: *263
type: object
required: true
responses:
'200':
content:
application/json:
schema:
description: Represents an `assistant` that can call the model and
use tools.
properties: *1
required: *12
title: Assistant
type: object
description: OK
summary: Modifies an assistant.
tags:
- Assistants
/audio/speech:
post:
operationId: createSpeech
requestBody:
content:
application/json:
schema:
additionalProperties: false
properties: *178
required: *179
type: object
required: true
responses:
'200':
content:
application/octet-stream:
schema:
format: binary
type: string
description: OK
headers:
Transfer-Encoding:
description: chunked
schema:
type: string
summary: Generates audio from the input text.
tags:
- Audio
/audio/transcriptions:
post:
operationId: createTranscription
requestBody:
content:
multipart/form-data:
schema:
additionalProperties: false
properties: *183
required: *184
type: object
required: true
responses:
'200':
content:
application/json:
schema:
oneOf:
- description: Represents a transcription response returned by model,
based on the provided input.
properties: *185
required: *186
type: object
- description: Represents a verbose json transcription response returned
by model, based on the provided input.
properties: *187
required: *192
type: object
description: OK
summary: Transcribes audio into the input language.
tags:
- Audio
/audio/translations:
post:
operationId: createTranslation
requestBody:
content:
multipart/form-data:
schema:
additionalProperties: false
properties: *193
required: *194
type: object
required: true
responses:
'200':
content:
application/json:
schema:
oneOf:
- properties: *195
required: *196
type: object
- properties: *197
required: *198
type: object
description: OK
summary: Translates audio into English.
tags:
- Audio
/batches:
get:
operationId: listBatches
parameters:
- description: |
A cursor for use in pagination. C<after> is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include after=obj_foo in order to fetch the next page of the list.
in: query
name: after
required: false
schema:
type: string
- description: |
A limit on the number of objects to be returned. Limit can range
between 1 and 100, and the default is 20.
in: query
name: limit
required: false
schema:
default: 20
type: integer
responses:
'200':
content:
application/json:
schema:
properties: *233
required: *234
type: object
description: Batch listed successfully.
summary: List your organization's batches.
tags:
- Batch
post:
operationId: createBatch
requestBody:
content:
application/json:
schema:
properties:
completion_window:
description: The time frame within which the batch should be processed.
Currently only `24h` is supported.
enum:
- 24h
type: string
endpoint:
description: The endpoint to be used for all requests in the batch.
Currently `/v1/chat/completions`, `/v1/embeddings`, and `/v1/completions`
are supported. Note that `/v1/embeddings` batches are also restricted
to a maximum of 50,000 embedding inputs across all requests in
the batch.
enum:
- /v1/chat/completions
- /v1/embeddings
- /v1/completions
type: string
input_file_id:
description: |
The ID of an uploaded file that contains requests for the new batch.
See [upload file](/docs/api-reference/files/create) for how to upload a file.
Your input file must be formatted as a [JSONL file](/docs/api-reference/batch/requestInput), and must be uploaded with the purpose `batch`. The file can contain up to 50,000 requests, and can be up to 100 MB in size.
type: string
metadata:
additionalProperties:
type: string
description: Optional custom metadata for the batch.
nullable: true
type: object
required:
- input_file_id
- endpoint
- completion_window
type: object
required: true
responses:
'200':
content:
application/json:
schema:
properties: *94
required: *95
type: object
description: Batch created successfully.
summary: Creates and executes a batch from an uploaded file of requests
tags:
- Batch
/batches/{batch_id}:
get:
operationId: retrieveBatch
parameters:
- description: |
The ID of the batch to retrieve.
in: path
name: batch_id
required: true
schema:
type: string
responses:
'200':
content:
application/json:
schema:
properties: *94
required: *95
type: object
description: Batch retrieved successfully.
summary: Retrieves a batch.
tags:
- Batch
/batches/{batch_id}/cancel:
post:
operationId: cancelBatch
parameters:
- description: |
The ID of the batch to cancel.
in: path
name: batch_id
required: true
schema:
type: string
responses:
'200':
content:
application/json:
schema:
properties: *94
required: *95
type: object
description: Batch is cancelling. Returns the cancelling batch's details.
summary: Cancels an in-progress batch.
tags:
- Batch
/chat/completions:
post:
operationId: createChatCompletion
requestBody:
content:
application/json:
schema:
properties: *139
required: *141
type: object
required: true
responses:
'200':
content:
application/json:
schema:
description: Represents a chat completion response returned by model,
based on the provided input.
properties: *142
required: *144
type: object
description: OK
summary: Creates a model response for the given chat conversation.
tags:
- Chat
/completions:
post:
operationId: createCompletion
requestBody:
content:
application/json:
schema:
properties: *145
required: *146
type: object
required: true
responses:
'200':
content:
application/json:
schema:
description: |
Represents a completion response from the API. Note: both the streamed and non-streamed response objects share the same shape (unlike the chat endpoint).
properties: *147
required: *148
type: object
description: OK
summary: Creates a completion for the provided prompt and parameters.
tags:
- Completions
/embeddings:
post:
operationId: createEmbedding
requestBody:
content:
application/json:
schema:
additionalProperties: false
properties: *149
required: *150
type: object
required: true
responses:
'200':
content:
application/json:
schema:
properties: *151
required: *154
type: object
description: OK
summary: Creates an embedding vector representing the input text.
tags:
- Embeddings
/files:
get:
operationId: listFiles
parameters:
- description: |
Only return files with the given purpose.
in: query
name: purpose
required: false
schema:
type: string
responses:
'200':
content:
application/json:
schema:
properties: *235
required: *238
type: object
description: OK
summary: Returns a list of files that belong to the user's organization.
tags:
- Files
post:
operationId: createFile
requestBody:
content:
multipart/form-data:
schema:
additionalProperties: false
properties: *155
required: *156
type: object
required: true
responses:
'200':
content:
application/json:
schema:
description: The `File` object represents a document that has been
uploaded to OpenAI.
properties: *236
required: *237
title: OpenAIFile
description: OK
summary: |
Upload a file that can be used across various endpoints. Individual files can be up to 512 MB, and the size of all files uploaded by one organization can be up to 100 GB.
The Assistants API supports files up to 2 million tokens and of specific file types. See the [Assistants Tools guide](/docs/assistants/tools) for details.
The Fine-tuning API only supports `.jsonl` files.
The Batch API only supports `.jsonl` files up to 100 MB in size.
Please [contact us](https://help.openai.com/) if you need to increase these storage limits.
tags:
- Files
/files/{file_id}:
delete:
operationId: deleteFile
parameters:
- description: |
The ID of the file to use for this request.
in: path
name: file_id
required: true
schema:
type: string
responses:
'200':
content:
application/json:
schema:
properties: *208
required: *209
type: object
description: OK
summary: Delete a file.
tags:
- Files
get:
operationId: retrieveFile
parameters:
- description: |
The ID of the file to use for this request.
in: path
name: file_id
required: true
schema:
type: string
responses:
'200':
content:
application/json:
schema:
description: The `File` object represents a document that has been
uploaded to OpenAI.
properties: *236
required: *237
title: OpenAIFile
description: OK
summary: Returns information about a specific file.
tags:
- Files
/files/{file_id}/content:
get:
operationId: downloadFile
parameters:
- description: |
The ID of the file to use for this request.
in: path
name: file_id
required: true
schema:
type: string
responses:
'200':
content:
application/json:
schema:
type: string
description: OK
summary: Returns the contents of the specified file.
tags:
- Files
/fine_tuning/jobs:
get:
operationId: listPaginatedFineTuningJobs
parameters:
- description: |
Identifier for the last job from the previous pagination request.
in: query
name: after
required: false
schema:
type: string
- description: |
Number of fine-tuning jobs to retrieve.
in: query
name: limit
required: false
schema:
default: 20
type: integer
responses:
'200':
content:
application/json:
schema:
properties: *249
required: *250
type: object
description: OK
summary: |
List your organization's fine-tuning jobs
tags:
- Fine-tuning
post:
operationId: createFineTuningJob
requestBody:
content:
application/json:
schema:
properties: *157
required: *158
type: object
required: true
responses:
'200':
content:
application/json:
schema:
description: |
The `fine_tuning.job` object represents a fine-tuning job that has been created through the API.
properties: *222
required: *223
title: FineTuningJob
type: object
description: OK
summary: |
Creates a fine-tuning job which begins the process of creating a new model from a given dataset.
Response includes details of the enqueued job including job status and the name of the fine-tuned models once complete.
[Learn more about fine-tuning](/docs/guides/fine-tuning)
tags:
- Fine-tuning
/fine_tuning/jobs/{fine_tuning_job_id}:
get:
operationId: retrieveFineTuningJob
parameters:
- description: |
The ID of the fine-tuning job.
in: path
name: fine_tuning_job_id
required: true
schema:
example: ft-AF1WoRqd3aJAHsqc9NY7iL8F
type: string
responses:
'200':
content:
application/json:
schema:
description: |
The `fine_tuning.job` object represents a fine-tuning job that has been created through the API.
properties: *222
required: *223
title: FineTuningJob
type: object
description: OK
summary: |
Get info about a fine-tuning job.
[Learn more about fine-tuning](/docs/guides/fine-tuning)
tags:
- Fine-tuning
/fine_tuning/jobs/{fine_tuning_job_id}/cancel:
post:
operationId: cancelFineTuningJob
parameters:
- description: |
The ID of the fine-tuning job to cancel.
in: path
name: fine_tuning_job_id
required: true
schema:
example: ft-AF1WoRqd3aJAHsqc9NY7iL8F
type: string
responses:
'200':
content:
application/json:
schema:
description: |
The `fine_tuning.job` object represents a fine-tuning job that has been created through the API.
properties: *222
required: *223
title: FineTuningJob
type: object
description: OK
summary: |
Immediately cancel a fine-tune job.
tags:
- Fine-tuning
/fine_tuning/jobs/{fine_tuning_job_id}/checkpoints:
get:
operationId: listFineTuningJobCheckpoints
parameters:
- description: |
The ID of the fine-tuning job to get checkpoints for.
in: path
name: fine_tuning_job_id
required: true
schema:
example: ft-AF1WoRqd3aJAHsqc9NY7iL8F
type: string
- description: |
Identifier for the last checkpoint ID from the previous pagination
request.
in: query
name: after
required: false
schema:
type: string
- description: |
Number of checkpoints to retrieve.
in: query
name: limit
required: false
schema:
default: 10
type: integer
responses:
'200':
content:
application/json:
schema:
properties: *239
required: *240
type: object
description: OK
summary: |
List checkpoints for a fine-tuning job.
tags:
- Fine-tuning
/fine_tuning/jobs/{fine_tuning_job_id}/events:
get:
operationId: listFineTuningEvents
parameters:
- description: |
The ID of the fine-tuning job to get events for.
in: path
name: fine_tuning_job_id
required: true
schema:
example: ft-AF1WoRqd3aJAHsqc9NY7iL8F
type: string
- description: |
Identifier for the last event from the previous pagination request.
in: query
name: after
required: false
schema:
type: string
- description: "Number of events to retrieve.\n"
in: query
name: limit
required: false
schema:
default: 20
type: integer
responses:
'200':
content:
application/json:
schema:
properties: *241
required: *242
type: object
description: OK
summary: |
Get status updates for a fine-tuning job.
tags:
- Fine-tuning
/images/edits:
post:
operationId: createImageEdit
requestBody:
content:
multipart/form-data:
schema:
properties: *159
required: *162
type: object
required: true
responses:
'200':
content:
application/json:
schema:
properties: *229
required: *230
description: OK
summary: Creates an edited or extended image given an original image and a prompt.
tags:
- Images
/images/generations:
post:
operationId: createImage
requestBody:
content:
application/json:
schema:
properties: *163
required: *165
type: object
required: true
responses:
'200':
content:
application/json:
schema:
properties: *229
required: *230
description: OK
summary: Creates an image given a prompt.
tags:
- Images
/images/variations:
post:
operationId: createImageVariation
requestBody:
content:
multipart/form-data:
schema:
properties: *166
required: *167
type: object
required: true
responses:
'200':
content:
application/json:
schema:
properties: *229
required: *230
description: OK
summary: Creates a variation of a given image.
tags:
- Images
/models:
get:
operationId: listModels
responses:
'200':
content:
application/json:
schema:
properties: *245
required: *248
type: object
description: OK
summary: Lists the currently available models, and provides basic information
about each one such as the owner and availability.
tags:
- Models
/models/{model}:
delete:
operationId: deleteModel
parameters:
- description: "The model to delete\n"
in: path
name: model
required: true
schema:
example: ft:gpt-3.5-turbo:acemeco:suffix:abc123
type: string
responses:
'200':
content:
application/json:
schema:
properties: *212
required: *213
type: object
description: OK
summary: Delete a fine-tuned model. You must have the Owner role in your organization
to delete a model.
tags:
- Models
get:
operationId: retrieveModel
parameters:
- description: |
The ID of the model to use for this request
in: path
name: model
required: true
schema:
example: gpt-3.5-turbo
type: string
responses:
'200':
content:
application/json:
schema:
description: Describes an OpenAI model offering that can be used with
the API.
properties: *246
required: *247
title: Model
description: OK
summary: Retrieves a model instance, providing basic information about the model
such as the owner and permissioning.
tags:
- Models
/moderations:
post:
operationId: createModeration
requestBody:
content:
application/json:
schema:
properties: *172
required: *173
type: object
required: true
responses:
'200':
content:
application/json:
schema:
description: Represents if a given text input is potentially harmful.
properties: *174
required: *175
type: object
description: OK
summary: Classifies if text is potentially harmful.
tags:
- Moderations
/threads:
post:
operationId: createThread
requestBody:
content:
application/json:
schema:
additionalProperties: false
properties: *181
type: object
responses:
'200':
content:
application/json:
schema:
description: Represents a thread that contains [messages](/docs/api-reference/messages).
properties: *14
required: *15
title: Thread
type: object
description: OK
summary: Create a thread.
tags:
- Assistants
/threads/runs:
post:
operationId: createThreadAndRun
requestBody:
content:
application/json:
schema:
additionalProperties: false
properties: *180
required: *182
type: object
required: true
responses:
'200':
content:
application/json:
schema:
description: Represents an execution run on a [thread](/docs/api-reference/threads).
properties: *17
required: *27
title: A run on a thread
type: object
description: OK
summary: Create a thread and run it in one request.
tags:
- Assistants
/threads/{thread_id}:
delete:
operationId: deleteThread
parameters:
- description: |
The ID of the thread to delete.
in: path
name: thread_id
required: true
schema:
type: string
responses:
'200':
content:
application/json:
schema:
properties: *214
required: *215
type: object
description: OK
summary: Delete a thread.
tags:
- Assistants
get:
operationId: getThread
parameters:
- description: |
The ID of the thread to retrieve.
in: path
name: thread_id
required: true
schema:
type: string
responses:
'200':
content:
application/json:
schema:
description: Represents a thread that contains [messages](/docs/api-reference/messages).
properties: *14
required: *15
title: Thread
type: object
description: OK
summary: Retrieves a thread.
tags:
- Assistants
post:
operationId: modifyThread
parameters:
- description: |
The ID of the thread to modify. Only the C<metadata> can be modified.
in: path
name: thread_id
required: true
schema:
type: string
requestBody:
content:
application/json:
schema:
additionalProperties: false
properties: *266
type: object
required: true
responses:
'200':
content:
application/json:
schema:
description: Represents a thread that contains [messages](/docs/api-reference/messages).
properties: *14
required: *15
title: Thread
type: object
description: OK
summary: Modifies a thread.
tags:
- Assistants
/threads/{thread_id}/messages:
get:
operationId: listMessages
parameters:
- description: |
The ID of the
L<thread|https://platform.openai.com/docs/api-reference/threads> the
messages belong to.
in: path
name: thread_id
required: true
schema:
type: string
- description: |
A limit on the number of objects to be returned. Limit can range
between 1 and 100, and the default is 20.
in: query
name: limit
required: false
schema:
default: 20
type: integer
- description: |
Sort order by the C<created_at> timestamp of the objects. C<asc> for
ascending order and C<desc> for descending order.
in: query
name: order
schema:
default: desc
enum:
- asc
- desc
type: string
- description: |
A cursor for use in pagination. C<after> is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include after=obj_foo in order to fetch the next page of the list.
in: query
name: after
schema:
type: string
- description: |
A cursor for use in pagination. C<before> is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include before=obj_foo in order to fetch the previous page of the list.
in: query
name: before
schema:
type: string
- description: |
Filter messages by the run ID that generated them.
in: query
name: run_id
schema:
type: string
responses:
'200':
content:
application/json:
schema:
properties: *243
required: *244
description: OK
summary: Returns a list of messages for a given thread.
tags:
- Assistants
post:
operationId: createMessage
parameters:
- description: |
The ID of the
L<thread|https://platform.openai.com/docs/api-reference/threads> to
create a message for.
in: path
name: thread_id
required: true
schema:
type: string
requestBody:
content:
application/json:
schema:
additionalProperties: false
properties: *168
required: *171
type: object
required: true
responses:
'200':
content:
application/json:
schema:
description: Represents a message within a [thread](/docs/api-reference/threads).
properties: *64
required: *75
title: The message object
type: object
description: OK
summary: Create a message.
tags:
- Assistants
/threads/{thread_id}/messages/{message_id}:
delete:
operationId: deleteMessage
parameters:
- description: |
The ID of the thread to which this message belongs.
in: path
name: thread_id
required: true
schema:
type: string
- description: |
The ID of the message to delete.
in: path
name: message_id
required: true
schema:
type: string
responses:
'200':
content:
application/json:
schema:
properties: *210
required: *211
type: object
description: OK
summary: Deletes a message.
tags:
- Assistants
get:
operationId: getMessage
parameters:
- description: |
The ID of the
L<thread|https://platform.openai.com/docs/api-reference/threads> to
which this message belongs.
in: path
name: thread_id
required: true
schema:
type: string
- description: |
The ID of the message to retrieve.
in: path
name: message_id
required: true
schema:
type: string
responses:
'200':
content:
application/json:
schema:
description: Represents a message within a [thread](/docs/api-reference/threads).
properties: *64
required: *75
title: The message object
type: object
description: OK
summary: Retrieve a message.
tags:
- Assistants
post:
operationId: modifyMessage
parameters:
- description: |
The ID of the thread to which this message belongs.
in: path
name: thread_id
required: true
schema:
type: string
- description: |
The ID of the message to modify.
in: path
name: message_id
required: true
schema:
type: string
requestBody:
content:
application/json:
schema:
additionalProperties: false
properties: *264
type: object
required: true
responses:
'200':
content:
application/json:
schema:
description: Represents a message within a [thread](/docs/api-reference/threads).
properties: *64
required: *75
title: The message object
type: object
description: OK
summary: Modifies a message.
tags:
- Assistants
/threads/{thread_id}/runs:
get:
operationId: listRuns
parameters:
- description: |
The ID of the thread the run belongs to.
in: path
name: thread_id
required: true
schema:
type: string
- description: |
A limit on the number of objects to be returned. Limit can range
between 1 and 100, and the default is 20.
in: query
name: limit
required: false
schema:
default: 20
type: integer
- description: |
Sort order by the C<created_at> timestamp of the objects. C<asc> for
ascending order and C<desc> for descending order.
in: query
name: order
schema:
default: desc
enum:
- asc
- desc
type: string
- description: |
A cursor for use in pagination. C<after> is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include after=obj_foo in order to fetch the next page of the list.
in: query
name: after
schema:
type: string
- description: |
A cursor for use in pagination. C<before> is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include before=obj_foo in order to fetch the previous page of the list.
in: query
name: before
schema:
type: string
responses:
'200':
content:
application/json:
schema:
properties: *253
required: *254
type: object
description: OK
summary: Returns a list of runs belonging to a thread.
tags:
- Assistants
post:
operationId: createRun
parameters:
- description: "The ID of the thread to run.\n"
in: path
name: thread_id
required: true
schema:
type: string
requestBody:
content:
application/json:
schema:
additionalProperties: false
properties: *176
required: *177
type: object
required: true
responses:
'200':
content:
application/json:
schema:
description: Represents an execution run on a [thread](/docs/api-reference/threads).
properties: *17
required: *27
title: A run on a thread
type: object
description: OK
summary: Create a run.
tags:
- Assistants
/threads/{thread_id}/runs/{run_id}:
get:
operationId: getRun
parameters:
- description: |
The ID of the
L<thread|https://platform.openai.com/docs/api-reference/threads> that
was run.
in: path
name: thread_id
required: true
schema:
type: string
- description: |
The ID of the run to retrieve.
in: path
name: run_id
required: true
schema:
type: string
responses:
'200':
content:
application/json:
schema:
description: Represents an execution run on a [thread](/docs/api-reference/threads).
properties: *17
required: *27
title: A run on a thread
type: object
description: OK
summary: Retrieves a run.
tags:
- Assistants
post:
operationId: modifyRun
parameters:
- description: |
The ID of the
L<thread|https://platform.openai.com/docs/api-reference/threads> that
was run.
in: path
name: thread_id
required: true
schema:
type: string
- description: "The ID of the run to modify.\n"
in: path
name: run_id
required: true
schema:
type: string
requestBody:
content:
application/json:
schema:
additionalProperties: false
properties: *265
type: object
required: true
responses:
'200':
content:
application/json:
schema:
description: Represents an execution run on a [thread](/docs/api-reference/threads).
properties: *17
required: *27
title: A run on a thread
type: object
description: OK
summary: Modifies a run.
tags:
- Assistants
/threads/{thread_id}/runs/{run_id}/cancel:
post:
operationId: cancelRun
parameters:
- description: |
The ID of the thread to which this run belongs.
in: path
name: thread_id
required: true
schema:
type: string
- description: "The ID of the run to cancel.\n"
in: path
name: run_id
required: true
schema:
type: string
responses:
'200':
content:
application/json:
schema:
description: Represents an execution run on a [thread](/docs/api-reference/threads).
properties: *17
required: *27
title: A run on a thread
type: object
description: OK
summary: Cancels a run that is `in_progress`.
tags:
- Assistants
/threads/{thread_id}/runs/{run_id}/steps:
get:
operationId: listRunSteps
parameters:
- description: |
The ID of the thread the run and run steps belong to.
in: path
name: thread_id
required: true
schema:
type: string
- description: |
The ID of the run the run steps belong to.
in: path
name: run_id
required: true
schema:
type: string
- description: |
A limit on the number of objects to be returned. Limit can range
between 1 and 100, and the default is 20.
in: query
name: limit
required: false
schema:
default: 20
type: integer
- description: |
Sort order by the C<created_at> timestamp of the objects. C<asc> for
ascending order and C<desc> for descending order.
in: query
name: order
schema:
default: desc
enum:
- asc
- desc
type: string
- description: |
A cursor for use in pagination. C<after> is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include after=obj_foo in order to fetch the next page of the list.
in: query
name: after
schema:
type: string
- description: |
A cursor for use in pagination. C<before> is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include before=obj_foo in order to fetch the previous page of the list.
in: query
name: before
schema:
type: string
responses:
'200':
content:
application/json:
schema:
properties: *251
required: *252
description: OK
summary: Returns a list of run steps belonging to a run.
tags:
- Assistants
/threads/{thread_id}/runs/{run_id}/steps/{step_id}:
get:
operationId: getRunStep
parameters:
- description: |
The ID of the thread to which the run and run step belongs.
in: path
name: thread_id
required: true
schema:
type: string
- description: |
The ID of the run to which the run step belongs.
in: path
name: run_id
required: true
schema:
type: string
- description: |
The ID of the run step to retrieve.
in: path
name: step_id
required: true
schema:
type: string
responses:
'200':
content:
application/json:
schema:
description: |
Represents a step in execution of a run.
properties: *29
required: *46
title: Run steps
type: object
description: OK
summary: Retrieves a run step.
tags:
- Assistants
/threads/{thread_id}/runs/{run_id}/submit_tool_outputs:
post:
operationId: submitToolOuputsToRun
parameters:
- description: |
The ID of the
L<thread|https://platform.openai.com/docs/api-reference/threads> to
which this run belongs.
in: path
name: thread_id
required: true
schema:
type: string
- description: |
The ID of the run that requires the tool output submission.
in: path
name: run_id
required: true
schema:
type: string
requestBody:
content:
application/json:
schema:
additionalProperties: false
properties: *267
required: *268
type: object
required: true
responses:
'200':
content:
application/json:
schema:
description: Represents an execution run on a [thread](/docs/api-reference/threads).
properties: *17
required: *27
title: A run on a thread
type: object
description: OK
summary: |
When a run has the `status: "requires_action"` and `required_action.type` is `submit_tool_outputs`, this endpoint can be used to submit the outputs from the tool calls once they're all completed. All outputs must be submitted in a single request.
tags:
- Assistants
/vector_stores:
get:
operationId: listVectorStores
parameters:
- description: |
A limit on the number of objects to be returned. Limit can range
between 1 and 100, and the default is 20.
in: query
name: limit
required: false
schema:
default: 20
type: integer
- description: |
Sort order by the C<created_at> timestamp of the objects. C<asc> for
ascending order and C<desc> for descending order.
in: query
name: order
schema:
default: desc
enum:
- asc
- desc
type: string
- description: |
A cursor for use in pagination. C<after> is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include after=obj_foo in order to fetch the next page of the list.
in: query
name: after
schema:
type: string
- description: |
A cursor for use in pagination. C<before> is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include before=obj_foo in order to fetch the previous page of the list.
in: query
name: before
schema:
type: string
responses:
'200':
content:
application/json:
schema:
properties: *259
required: *262
description: OK
summary: Returns a list of vector stores.
tags:
- Vector Stores
post:
operationId: createVectorStore
requestBody:
content:
application/json:
schema:
additionalProperties: false
properties: *203
type: object
required: true
responses:
'200':
content:
application/json:
schema:
description: A vector store is a collection of processed files that can
be used by the `file_search` tool.
properties: *260
required: *261
title: Vector store
type: object
description: OK
summary: Create a vector store.
tags:
- Vector Stores
/vector_stores/{vector_store_id}:
delete:
operationId: deleteVectorStore
parameters:
- description: |
The ID of the vector store to delete.
in: path
name: vector_store_id
required: true
schema:
type: string
responses:
'200':
content:
application/json:
schema:
properties: *218
required: *219
type: object
description: OK
summary: Delete a vector store.
tags:
- Vector Stores
get:
operationId: getVectorStore
parameters:
- description: |
The ID of the vector store to retrieve.
in: path
name: vector_store_id
required: true
schema:
type: string
responses:
'200':
content:
application/json:
schema:
description: A vector store is a collection of processed files that can
be used by the `file_search` tool.
properties: *260
required: *261
title: Vector store
type: object
description: OK
summary: Retrieves a vector store.
tags:
- Vector Stores
post:
operationId: modifyVectorStore
parameters:
- description: |
The ID of the vector store to modify.
in: path
name: vector_store_id
required: true
schema:
type: string
requestBody:
content:
application/json:
schema:
additionalProperties: false
properties: *269
type: object
required: true
responses:
'200':
content:
application/json:
schema:
description: A vector store is a collection of processed files that can
be used by the `file_search` tool.
properties: *260
required: *261
title: Vector store
type: object
description: OK
summary: Modifies a vector store.
tags:
- Vector Stores
/vector_stores/{vector_store_id}/file_batches:
post:
operationId: createVectorStoreFileBatch
parameters:
- description: |
The ID of the vector store for which to create a File Batch.
in: path
name: vector_store_id
required: true
schema:
example: vs_abc123
type: string
requestBody:
content:
application/json:
schema:
additionalProperties: false
properties: *199
required: *200
type: object
required: true
responses:
'200':
content:
application/json:
schema:
description: A batch of files attached to a vector store.
properties: *270
required: *271
title: Vector store file batch
type: object
description: OK
summary: Create a vector store file batch.
tags:
- Vector Stores
/vector_stores/{vector_store_id}/file_batches/{batch_id}:
get:
operationId: getVectorStoreFileBatch
parameters:
- description: |
The ID of the vector store that the file batch belongs to.
in: path
name: vector_store_id
required: true
schema:
example: vs_abc123
type: string
- description: |
The ID of the file batch being retrieved.
in: path
name: batch_id
required: true
schema:
example: vsfb_abc123
type: string
responses:
'200':
content:
application/json:
schema:
description: A batch of files attached to a vector store.
properties: *270
required: *271
title: Vector store file batch
type: object
description: OK
summary: Retrieves a vector store file batch.
tags:
- Vector Stores
/vector_stores/{vector_store_id}/file_batches/{batch_id}/cancel:
post:
operationId: cancelVectorStoreFileBatch
parameters:
- description: |
The ID of the vector store that the file batch belongs to.
in: path
name: vector_store_id
required: true
schema:
type: string
- description: |
The ID of the file batch to cancel.
in: path
name: batch_id
required: true
schema:
type: string
responses:
'200':
content:
application/json:
schema:
description: A batch of files attached to a vector store.
properties: *270
required: *271
title: Vector store file batch
type: object
description: OK
summary: Cancel a vector store file batch. This attempts to cancel the processing
of files in this batch as soon as possible.
tags:
- Vector Stores
/vector_stores/{vector_store_id}/file_batches/{batch_id}/files:
get:
operationId: listFilesInVectorStoreBatch
parameters:
- description: |
The ID of the vector store that the files belong to.
in: path
name: vector_store_id
required: true
schema:
type: string
- description: |
The ID of the file batch that the files belong to.
in: path
name: batch_id
required: true
schema:
type: string
- description: |
A limit on the number of objects to be returned. Limit can range
between 1 and 100, and the default is 20.
in: query
name: limit
required: false
schema:
default: 20
type: integer
- description: |
Sort order by the C<created_at> timestamp of the objects. C<asc> for
ascending order and C<desc> for descending order.
in: query
name: order
schema:
default: desc
enum:
- asc
- desc
type: string
- description: |
A cursor for use in pagination. C<after> is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include after=obj_foo in order to fetch the next page of the list.
in: query
name: after
schema:
type: string
- description: |
A cursor for use in pagination. C<before> is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include before=obj_foo in order to fetch the previous page of the list.
in: query
name: before
schema:
type: string
- description: |
Filter by file status. One of C<in_progress>, C<completed>, C<failed>,
C<cancelled>.
in: query
name: filter
schema:
enum:
- in_progress
- completed
- failed
- cancelled
type: string
responses:
'200':
content:
application/json:
schema:
properties: *255
required: *258
description: OK
summary: Returns a list of vector store files in a batch.
tags:
- Vector Stores
/vector_stores/{vector_store_id}/files:
get:
operationId: listVectorStoreFiles
parameters:
- description: |
The ID of the vector store that the files belong to.
in: path
name: vector_store_id
required: true
schema:
type: string
- description: |
A limit on the number of objects to be returned. Limit can range
between 1 and 100, and the default is 20.
in: query
name: limit
required: false
schema:
default: 20
type: integer
- description: |
Sort order by the C<created_at> timestamp of the objects. C<asc> for
ascending order and C<desc> for descending order.
in: query
name: order
schema:
default: desc
enum:
- asc
- desc
type: string
- description: |
A cursor for use in pagination. C<after> is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include after=obj_foo in order to fetch the next page of the list.
in: query
name: after
schema:
type: string
- description: |
A cursor for use in pagination. C<before> is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include before=obj_foo in order to fetch the previous page of the list.
in: query
name: before
schema:
type: string
- description: |
Filter by file status. One of C<in_progress>, C<completed>, C<failed>,
C<cancelled>.
in: query
name: filter
schema:
enum:
- in_progress
- completed
- failed
- cancelled
type: string
responses:
'200':
content:
application/json:
schema:
properties: *255
required: *258
description: OK
summary: Returns a list of vector store files.
tags:
- Vector Stores
post:
operationId: createVectorStoreFile
parameters:
- description: |
The ID of the vector store for which to create a File.
in: path
name: vector_store_id
required: true
schema:
example: vs_abc123
type: string
requestBody:
content:
application/json:
schema:
additionalProperties: false
properties: *201
required: *202
type: object
required: true
responses:
'200':
content:
application/json:
schema:
description: A list of files attached to a vector store.
properties: *256
required: *257
title: Vector store files
type: object
description: OK
summary: Create a vector store file by attaching a [File](/docs/api-reference/files)
to a [vector store](/docs/api-reference/vector-stores/object).
tags:
- Vector Stores
/vector_stores/{vector_store_id}/files/{file_id}:
delete:
operationId: deleteVectorStoreFile
parameters:
- description: |
The ID of the vector store that the file belongs to.
in: path
name: vector_store_id
required: true
schema:
type: string
- description: "The ID of the file to delete.\n"
in: path
name: file_id
required: true
schema:
type: string
responses:
'200':
content:
application/json:
schema:
properties: *216
required: *217
type: object
description: OK
summary: Delete a vector store file. This will remove the file from the vector
store but the file itself will not be deleted. To delete the file, use the
[delete file](/docs/api-reference/files/delete) endpoint.
tags:
- Vector Stores
get:
operationId: getVectorStoreFile
parameters:
- description: |
The ID of the vector store that the file belongs to.
in: path
name: vector_store_id
required: true
schema:
example: vs_abc123
type: string
- description: |
The ID of the file being retrieved.
in: path
name: file_id
required: true
schema:
example: file-abc123
type: string
responses:
'200':
content:
application/json:
schema:
description: A list of files attached to a vector store.
properties: *256
required: *257
title: Vector store files
type: object
description: OK
summary: Retrieves a vector store file.
tags:
- Vector Stores
security:
- ApiKeyAuth: []
servers:
- url: https://api.openai.com/v1
tags:
- description: Build Assistants that can call models and use tools.
name: Assistants
- description: Learn how to turn audio into text or text into audio.
name: Audio
- description: Given a list of messages comprising a conversation, the model will
return a response.
name: Chat
- description: Given a prompt, the model will return one or more predicted completions,
and can also return the probabilities of alternative tokens at each position.
name: Completions
- description: Get a vector representation of a given input that can be easily consumed
by machine learning models and algorithms.
name: Embeddings
- description: Manage fine-tuning jobs to tailor a model to your specific training
data.
name: Fine-tuning
- description: Create large batches of API requests to run asynchronously.
name: Batch
- description: Files are used to upload documents that can be used with features like
Assistants and Fine-tuning.
name: Files
- description: Given a prompt and/or an input image, the model will generate a new
image.
name: Images
- description: List and describe the various models available in the API.
name: Models
- description: Given an input text, outputs if the model classifies it as potentially
harmful.
name: Moderations
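Each path above declares an C<operationId> (for example C<listModels> or C<createChatCompletion>). As a rough, non-normative sketch of how such an operation can be invoked through this client, assuming the C<OPENAI_API_KEY> environment variable is set and the usual OpenAPI::Client convention of one method per C<operationId> with the request body passed under a C<body> key, a call might look like:

    use strict;
    use warnings;
    use OpenAPI::Client::OpenAI;

    # Assumes OPENAI_API_KEY is set in the environment (an assumption,
    # not part of the schema above).
    my $client = OpenAPI::Client::OpenAI->new();

    # Method names follow the operationIds in the paths section,
    # e.g. listModels, listAssistants, createChatCompletion, ...
    my $tx = $client->createChatCompletion({
        body => {
            model    => 'gpt-3.5-turbo',
            messages => [ { role => 'user', content => 'Hello!' } ],
        },
    });

    # The decoded JSON follows the chat completion response schema above.
    my $reply = $tx->res->json->{choices}[0]{message}{content};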
COPYRIGHT AND LICENSE
Copyright (C) 2023-2024 by Nelson Ferraz
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself, either Perl version 5.14.0 or, at your option, any later version of Perl 5 you may have available.