Evaluate Prompt
POST
/evaluate_prompt/predict
Evaluate an AI Prompt
Request Body
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| messages[role] | array&lt;string&gt; | Yes | | |
| messages[content] | array&lt;string&gt; | Yes | | |
| max_tokens | integer (int32) | No | 300 | Maximum number of output tokens; at most 400 |
| temperature | number (float) | No | | How creative the response should be, between 0 and 2; the lower, the less creative |
| model_kind | string | No | "openai" | Which model provider should be used: "openai" or "anthropic" |

| Status code | Description |
|---|---|
| 200 | Evaluate an AI Prompt |
| 401 | Unauthorized |
| 404 | Not found |
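
A minimal sketch of calling this endpoint with the standard library. The base URL is a placeholder for your deployment's host, and the payload shape (a `messages` object holding parallel `role` and `content` string arrays) is an assumption inferred from the field names above; adjust both to match your actual schema.

```python
import json
import urllib.request

# Hypothetical base URL -- substitute your deployment's host.
BASE_URL = "http://localhost:8000"

# Assumed payload shape: messages[role] and messages[content] as
# parallel string arrays, per the request-body fields documented above.
payload = {
    "messages": {
        "role": ["user"],
        "content": ["Summarize the benefits of unit testing."],
    },
    "max_tokens": 300,       # default; the API caps this at 400
    "temperature": 0.7,      # between 0 and 2; lower is less creative
    "model_kind": "openai",  # or "anthropic"
}

req = urllib.request.Request(
    BASE_URL + "/evaluate_prompt/predict",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# Uncomment against a live server; expect 401 without valid credentials.
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read()))

print(json.dumps(payload, indent=2))
```

The request itself is left commented out so the sketch can be read and adapted offline; in production you would also attach whatever authentication header your deployment requires, since the endpoint returns 401 for unauthorized calls.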