NAME
AI::Ollama::GenerateChatCompletionRequest - a request to generate the next message in a chat with a provided model
SYNOPSIS
my $obj = AI::Ollama::GenerateChatCompletionRequest->new();
...
PROPERTIES
format
The format to return a response in. Currently the only accepted value is `json`.
Enable JSON mode by setting the `format` parameter to `json`. This will structure the response as valid JSON.
Note: it's important to instruct the model to use JSON in the prompt. Otherwise, the model may generate large amounts of whitespace.
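For instance, a minimal sketch of enabling JSON mode (assuming the constructor takes the named parameters documented here, and that plain hashrefs are accepted for messages):

    my $req = AI::Ollama::GenerateChatCompletionRequest->new(
        model    => 'llama2',
        format   => 'json',
        messages => [
            # Per the note above, the prompt itself should ask for JSON
            { role => 'user', content => 'List three colors as a JSON array.' },
        ],
    );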
keep_alive
How long (in minutes) to keep the model loaded in memory.
- If set to a positive duration (e.g. 20), the model will stay loaded for the provided duration.
- If set to a negative duration (e.g. -1), the model will stay loaded indefinitely.
- If set to 0, the model will be unloaded immediately once finished.
- If not set, the model will stay loaded for 5 minutes by default.
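A sketch of keeping the model resident indefinitely (hashref messages assumed, as above):

    my $req = AI::Ollama::GenerateChatCompletionRequest->new(
        model      => 'llama2',
        keep_alive => -1,    # negative duration: stay loaded indefinitely
        messages   => [ { role => 'user', content => 'Hello' } ],
    );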
messages
The messages of the chat. This can be used to keep a chat memory.
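Passing the prior turns back in with each request is what provides that memory. A sketch, again assuming plain hashrefs with `role` and `content` keys:

    my @history = (
        { role => 'user',      content => 'Why is the sky blue?' },
        { role => 'assistant', content => 'Because of Rayleigh scattering.' },
        { role => 'user',      content => 'How does that change at sunset?' },
    );
    my $req = AI::Ollama::GenerateChatCompletionRequest->new(
        model    => 'llama2',
        messages => \@history,
    );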
model
The model name.
Model names follow a `model:tag` format. Some examples are `orca-mini:3b-q4_1` and `llama2:70b`. The tag is optional and, if not provided, will default to `latest`. The tag is used to identify a specific version.
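A short illustration of the tag rules (values for the `model` attribute shown in the other examples):

    my $explicit = 'orca-mini:3b-q4_1';  # model with an explicit tag
    my $implicit = 'llama2';             # no tag given, resolves to llama2:latest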
options
Additional model parameters, as listed in the documentation for the Modelfile, such as `temperature`.
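The `options` hashref is passed through to the model. A sketch using `temperature` together with `num_predict`, another parameter from the Modelfile documentation:

    my $req = AI::Ollama::GenerateChatCompletionRequest->new(
        model    => 'llama2',
        options  => {
            temperature => 0.7,   # sampling temperature (Modelfile parameter)
            num_predict => 128,   # maximum tokens to generate (Modelfile parameter)
        },
        messages => [ { role => 'user', content => 'Hello' } ],
    );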
stream
If `false`, the response will be returned as a single response object; otherwise the response will be streamed as a series of objects.
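How a boolean is coerced here depends on the module's type constraints; a plain `0` is a reasonable sketch for requesting a single response object:

    my $req = AI::Ollama::GenerateChatCompletionRequest->new(
        model    => 'llama2',
        stream   => 0,    # single response object instead of a stream (coercion assumed)
        messages => [ { role => 'user', content => 'Hello' } ],
    );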