Replicate

Llama 2 13B Chat

A 13-billion-parameter language model from Meta, fine-tuned for chat completions.

Inputs
Prompt
Prompt to send to the model.
Debug
Provide debugging output in logs.
Max New Tokens
Maximum number of tokens to generate. A word is generally 2-3 tokens.
Min New Tokens
Minimum number of tokens to generate. To disable, set to -1. A word is generally 2-3 tokens.
Replicate Weights
Path to fine-tuned weights produced by a Replicate fine-tune job.
Seed
Random seed. Leave blank to randomize the seed.
Stop Sequences
A comma-separated list of sequences at which to stop generation. For example, '<end>,<stop>' will stop generation at the first instance of '<end>' or '<stop>'.
System Prompt
System prompt to send to the model. This is prepended to the prompt and helps guide the model's behavior.
Temperature
Adjusts the randomness of outputs: values greater than 1 are more random, 0 is deterministic, and 0.75 is a good starting value.
Top K
When decoding text, samples from the k most likely tokens; lower values ignore less likely tokens.
Top P
When decoding text, samples from the smallest set of tokens whose cumulative probability reaches p; lower values ignore less likely tokens.
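How Temperature, Top K, and Top P interact during decoding can be sketched in a few lines. This is an illustrative toy, not the model's actual sampler: the token names and probabilities are made up, and the filtering order (top-k first, then top-p) is one common convention.

```python
import math
import random

def filter_logits(logits, temperature=0.75, top_k=50, top_p=0.9):
    """Toy sketch of temperature scaling plus top-k/top-p filtering.

    `logits` maps token -> raw score. Returns a renormalized
    token -> probability dict over the surviving tokens.
    """
    # Temperature scaling: lower temperature sharpens the distribution.
    scaled = {t: s / temperature for t, s in logits.items()}
    total = sum(math.exp(s) for s in scaled.values())
    probs = {t: math.exp(s) / total for t, s in scaled.items()}

    # Top-k: keep only the k most likely tokens.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

    # Top-p: keep the smallest prefix whose cumulative probability reaches p.
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break

    # Renormalize over the surviving tokens.
    norm = sum(p for _, p in kept)
    return {token: p / norm for token, p in kept}

def sample(logits, rng=random.random, **kwargs):
    """Draw one token from the filtered distribution."""
    probs = filter_logits(logits, **kwargs)
    r, acc = rng(), 0.0
    for token, p in probs.items():
        acc += p
        if r <= acc:
            return token
    return token  # fallback for floating-point edge cases
```

For example, with `top_k=2` only the two most likely tokens survive regardless of `top_p`, while `top_p=0.5` can trim the candidate set further to just the single most likely token when it already carries half the probability mass.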
Outputs
Collection
The output strings
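Putting the inputs above together, a call to the model can be sketched as below. The helper that assembles the payload is hypothetical (the parameter names mirror this page; the defaults shown are illustrative, not necessarily the model's actual defaults), and the `replicate.run` usage in the comment assumes the `replicate` Python client and a `REPLICATE_API_TOKEN` in the environment.

```python
def build_input(prompt,
                system_prompt="You are a helpful assistant.",
                temperature=0.75, top_p=0.9, top_k=50,
                max_new_tokens=128, min_new_tokens=-1,
                stop_sequences=None, seed=None, debug=False):
    """Assemble an input payload from the parameters documented above.

    Hypothetical helper for illustration; defaults are placeholders.
    """
    payload = {
        "prompt": prompt,
        "system_prompt": system_prompt,
        "temperature": temperature,
        "top_p": top_p,
        "top_k": top_k,
        "max_new_tokens": max_new_tokens,
        "min_new_tokens": min_new_tokens,  # -1 disables the minimum
        "debug": debug,
    }
    if stop_sequences:
        # The API expects a single comma-separated string.
        payload["stop_sequences"] = ",".join(stop_sequences)
    if seed is not None:  # omit to randomize the seed
        payload["seed"] = seed
    return payload

# Usage sketch (requires the `replicate` package and an API token):
#   import replicate
#   output = replicate.run("meta/llama-2-13b-chat",
#                          input=build_input("Tell me a joke."))
#   print("".join(output))  # the output is a collection of strings
```

Because the output is a collection of strings rather than one string, callers typically join (or stream) the pieces as shown in the usage comment.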