Annotation Interface OutputGuardrails
Applies guardrails to the output of the model when using the AiServices approach.
An output guardrail is a rule that is applied to the output of the model to ensure that the output is safe and meets certain expectations.
When a validation fails, the result can indicate whether the request should be retried as-is, or provide a reprompt message to append to the prompt. In the case of re-prompting, the reprompt message is added to the LLM context and the request is then retried.
If the annotation is present on a class, the guardrails are applied to all the methods of that class.
When several guardrails are applied, their order is important: the guardrails are applied in the order they are listed.
If any guardrail forces a retry or reprompt, then all of the OutputGuardrails are re-applied to the new response.
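The retry semantics above can be sketched with a small self-contained model. Note that `Guardrail`, `Verdict`, and `applyGuardrails` below are hypothetical stand-ins for illustration, not the real LangChain4j types; the sketch only demonstrates the documented behavior that all guardrails are re-applied to each new response, up to `maxRetries` attempts, and that `maxRetries = 0` disables retries.

```java
import java.util.List;
import java.util.function.Function;

public class GuardrailLoopSketch {
    // Hypothetical outcome of a single guardrail check (not the real LangChain4j type).
    enum Verdict { SUCCESS, RETRY, REPROMPT }

    // Hypothetical guardrail: inspects the model output and returns a verdict.
    interface Guardrail extends Function<String, Verdict> {}

    /**
     * Applies every guardrail in order. If any guardrail forces a retry or
     * reprompt, a new response is produced and ALL guardrails are re-applied
     * to it, up to maxRetries additional attempts. With maxRetries = 0,
     * retries are disabled and the first failure is fatal.
     */
    static String applyGuardrails(List<Guardrail> guardrails,
                                  Function<Integer, String> model, // attempt number -> response
                                  int maxRetries) {
        String response = model.apply(0);
        for (int attempt = 0; ; attempt++) {
            boolean allPassed = true;
            for (Guardrail g : guardrails) {
                if (g.apply(response) != Verdict.SUCCESS) {
                    allPassed = false;
                    break;
                }
            }
            if (allPassed) {
                return response;
            }
            if (attempt >= maxRetries) {
                throw new IllegalStateException(
                        "Output guardrails still failing after " + maxRetries + " retries");
            }
            // Retried as-is, or with a reprompt message appended to the LLM context.
            response = model.apply(attempt + 1);
        }
    }

    public static void main(String[] args) {
        // A guardrail that rejects empty responses.
        Guardrail nonEmpty = s -> s.isEmpty() ? Verdict.RETRY : Verdict.SUCCESS;
        // Fake model: fails on the first attempt, succeeds on the second.
        Function<Integer, String> model = attempt -> attempt == 0 ? "" : "ok";
        System.out.println(applyGuardrails(List.of(nonEmpty), model, 2));
    }
}
```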
-
Required Element Summary
Class<? extends OutputGuardrail>[] value - The ordered list of guardrails to apply to the output of the model.
-
Optional Element Summary
int maxRetries - The maximum number of retries to perform when an output guardrail forces a retry or reprompt.
Element Details
-
value
Class<? extends OutputGuardrail>[] value
The ordered list of guardrails to apply to the output of the model. The order of the classes is important, as the guardrails are applied in the order they are listed. A guardrail cannot appear twice in the list.
-
maxRetries
int maxRetries
The maximum number of retries to perform when an output guardrail forces a retry or reprompt. Set to 0 to disable retries.
Default: 2
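The elements above can be illustrated with a self-contained sketch. The `OutputGuardrail` interface and `OutputGuardrails` annotation below are stand-ins declared locally to mirror the documented elements (an ordered `value` array and `maxRetries` defaulting to 2); `LengthGuardrail`, `JsonGuardrail`, and `Assistant` are hypothetical names, not part of any real API.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.Arrays;

public class OutputGuardrailsSketch {
    // Stand-in for the guardrail contract (hypothetical, not the real type).
    interface OutputGuardrail {}

    // Stand-in mirroring the documented annotation elements.
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.TYPE, ElementType.METHOD})
    @interface OutputGuardrails {
        Class<? extends OutputGuardrail>[] value(); // ordered; no duplicates allowed

        int maxRetries() default 2; // set to 0 to disable retries
    }

    // Hypothetical guardrail implementations.
    static class LengthGuardrail implements OutputGuardrail {}

    static class JsonGuardrail implements OutputGuardrail {}

    // Placed on the type, the guardrails apply to every method of the service.
    @OutputGuardrails(value = {LengthGuardrail.class, JsonGuardrail.class}, maxRetries = 3)
    interface Assistant {
        String chat(String userMessage);
    }

    public static void main(String[] args) {
        OutputGuardrails cfg = Assistant.class.getAnnotation(OutputGuardrails.class);
        System.out.println(Arrays.toString(cfg.value()) + " maxRetries=" + cfg.maxRetries());
    }
}
```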