Endpoints
Evaluate LLM Task
Evaluates LLM outputs using specified metrics.
Endpoint
POST /sdk/evaluate_llm

Request Structure
{
"request_body": {
"project_id": "your-project-id",
"secret_key": "your-secret-key",
"task_name": "Task Description",
"metrics": [
"RESPONSE_TONE",
"BIAS",
"READABILITY",
"COHERENCE",
"ANSWER_RELEVANCE",
"FACTUAL_CONSISTENCY",
"BLEU_SCORE"
],
"input_data": [
{
"prompt": "Your prompt text",
"context": "Additional context",
"response": "LLM response to evaluate"
}
]
}
}

Optional Configuration
You can customize metric thresholds and labels:
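The customization payload itself is not shown here, so the following is only a hedged sketch of what threshold and label overrides might look like. The field names `metric_config`, `threshold`, and `custom_labels` are assumptions for illustration, not confirmed API fields:

```json
{
  "request_body": {
    "project_id": "your-project-id",
    "secret_key": "your-secret-key",
    "task_name": "Task Description",
    "metrics": ["RESPONSE_TONE", "BIAS"],
    "metric_config": {
      "RESPONSE_TONE": {
        "threshold": 0.7,
        "custom_labels": ["negative", "neutral", "positive"]
      }
    },
    "input_data": [
      {
        "prompt": "Your prompt text",
        "context": "Additional context",
        "response": "LLM response to evaluate"
      }
    ]
  }
}
```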
Response Example
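The original response body is not reproduced in this page, so the shape below is purely a hypothetical illustration of a per-metric result list. The field names `status`, `results`, `metric`, `score`, and `label` are assumptions:

```json
{
  "status": "success",
  "results": [
    {
      "metric": "RESPONSE_TONE",
      "score": 0.82,
      "label": "positive"
    },
    {
      "metric": "BIAS",
      "score": 0.05,
      "label": "unbiased"
    }
  ]
}
```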
Error Responses
Status Code: Description
400: Bad Request - Invalid parameters
401: Unauthorized - Invalid credentials
429: Too Many Requests - Rate limit exceeded
500: Internal Server Error
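A minimal Python client sketch tying the request structure and the error table together. The payload fields come from the Request Structure section above; the base URL is a placeholder, and treating only 429 and 500 as retryable is an assumption, not documented behavior:

```python
import json
import urllib.request

BASE_URL = "https://api.example.com"  # placeholder host; substitute your own


def build_request_body(project_id, secret_key, task_name, metrics, input_data):
    """Assemble the payload in the shape shown under Request Structure."""
    return {
        "request_body": {
            "project_id": project_id,
            "secret_key": secret_key,
            "task_name": task_name,
            "metrics": metrics,
            "input_data": input_data,
        }
    }


def is_retryable(status_code):
    """Per the error table: 429 (rate limit) and 500 (server error) may
    succeed on retry; 400 and 401 indicate client-side problems."""
    return status_code in (429, 500)


def evaluate_llm(payload):
    """POST the payload to /sdk/evaluate_llm and return the parsed JSON."""
    req = urllib.request.Request(
        BASE_URL + "/sdk/evaluate_llm",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Usage might look like `evaluate_llm(build_request_body("your-project-id", "your-secret-key", "Task Description", ["RESPONSE_TONE", "BIAS"], [{"prompt": "Your prompt text", "context": "Additional context", "response": "LLM response to evaluate"}]))`, wrapped in error handling keyed on `is_retryable`.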
Metric Types
For a complete list of available metrics and their descriptions, see our Metrics Documentation.