Optional apiKey
The Groq API key to use for requests.
Optional maxTokens
The maximum number of tokens that the model can process in a single response. This limit ensures computational efficiency and resource management.
Optional model
The name of the model to use.
Optional modelName
The name of the model to use.
Alias for model
Optional stop
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
Alias for stopSequences
Optional stopSequences
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
Optional streaming
Whether or not to stream responses.
Optional temperature
The temperature to use for sampling.
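Taken together, these options are passed when constructing the chat model. The sketch below is a minimal, illustrative configuration assuming the ChatGroq class from the @langchain/groq package and a GROQ_API_KEY environment variable; the model name, stop sequence, and sampling values are placeholder choices, not recommendations.

```typescript
import { ChatGroq } from "@langchain/groq";

// Minimal configuration sketch: every field shown here is optional and maps to
// a property documented above. All values are placeholders.
const model = new ChatGroq({
  apiKey: process.env.GROQ_API_KEY, // The Groq API key to use for requests.
  model: "llama-3.3-70b-versatile", // Placeholder model name.
  maxTokens: 1024,                  // Cap on tokens in a single response.
  temperature: 0.7,                 // Sampling temperature.
  stopSequences: ["\n\n"],          // Up to 4 stop sequences.
  streaming: false,                 // Whether or not to stream responses.
});

async function main() {
  const response = await model.invoke("Say hello in one short sentence.");
  console.log(response.content);
}

main().catch(console.error);
```

For incremental output, streaming can be set to true, or chunks can be consumed directly through the stream() method that LangChain chat models expose.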