ollama.generate(request)
- request <Object>: The request object containing generate parameters.
  - model <string>: The name of the model to use for the generation.
  - prompt <string>: The prompt to send to the model.
  - suffix <string>: (Optional) Suffix is the text that comes after the inserted text.
  - system <string>: (Optional) Override the model system prompt.
  - template <string>: (Optional) Override the model template.
  - raw <boolean>: (Optional) Bypass the prompt template and pass the prompt directly to the model.
  - images <Uint8Array[] | string[]>: (Optional) Images to be included, either as Uint8Array or base64 encoded strings.
  - format <string>: (Optional) Set the expected format of the response (json).
  - stream <boolean>: (Optional) When true an AsyncGenerator is returned.
  - keep_alive <string | number>: (Optional) How long to keep the model loaded.
  - options <Options>: (Optional) Options to configure the runtime.
- Returns: <GenerateResponse>
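As a rough usage sketch, the parameters above can be assembled into a plain request object and passed to ollama.generate. The model name, prompt text, and option values here are illustrative examples, not part of the API itself, and the call assumes the ollama package is installed and a local Ollama server is running:

```javascript
// Build a generate request (all field values below are examples).
const request = {
  model: 'llama3.2',                  // example model name
  prompt: 'Why is the sky blue?',     // the prompt to send
  system: 'Answer in one sentence.',  // optional system prompt override
  stream: false,                      // false: resolve to a single GenerateResponse
  keep_alive: '5m',                   // keep the model loaded for five minutes
  options: { temperature: 0.7 },      // optional runtime options
}

// With the ollama package installed and a server running:
//   import ollama from 'ollama'
//   const response = await ollama.generate(request)
//   console.log(response.response)
console.log(request.model)
```

When stream is set to true, the call instead returns an AsyncGenerator, so each partial response can be consumed with a `for await...of` loop rather than awaiting a single result.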