Groq Llama 3.3 70B Versatile

The 70B-parameter version of Meta's Llama 3.3 model delivers state-of-the-art performance, served on Groq.

Model Specifications

Context window: 131,072 tokens
Max output: 8,000 tokens
Knowledge cutoff: February 2024
Multipart messages: No
Vision capabilities: No

Default Parameters

Temperature: 0.3
Top P: 1.0
Frequency penalty: 0.0
Presence penalty: 0.0
Top K: 50
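As an illustration of how these defaults might be supplied in a request, the sketch below assembles a chat-completions payload using the values above. The model ID `llama-3.3-70b-versatile` and the payload shape are assumptions based on Groq's OpenAI-compatible API; confirm both against the provider's current documentation before use.

```python
# Sketch: build a chat-completions request payload from the documented defaults.
# The model ID and field names are assumed from Groq's OpenAI-compatible API.

DEFAULTS = {
    "temperature": 0.3,
    "top_p": 1.0,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
}

def build_payload(messages, max_tokens=8000, **overrides):
    """Merge the documented defaults with any caller overrides."""
    params = {**DEFAULTS, **overrides}
    return {
        "model": "llama-3.3-70b-versatile",  # assumed Groq model ID
        "messages": messages,
        "max_tokens": max_tokens,  # the model caps output at 8,000 tokens
        **params,
    }

payload = build_payload([{"role": "user", "content": "Hello"}])
print(payload["temperature"])  # 0.3
```

Callers can override any default per request, e.g. `build_payload(msgs, temperature=0.9)` for more varied output.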

Supported Parameters

Temperature

Supported

Controls randomness: lower values make the output more deterministic; higher values make it more creative.

Range: 0.0 - 1.0
Default: 0.3
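Since requests outside the supported range are typically rejected, a small hypothetical helper (not part of any Groq SDK) can clamp a caller-supplied temperature into the documented 0.0 - 1.0 range before sending:

```python
def clamp_temperature(value, lo=0.0, hi=1.0, default=0.3):
    """Return a temperature inside the supported range.

    None falls back to the documented default of 0.3; out-of-range
    values are clamped to the nearest bound.
    """
    if value is None:
        return default
    return max(lo, min(hi, value))

print(clamp_temperature(1.7))  # 1.0
```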
