Class MessageModeratorInputGuardrail

java.lang.Object
    dev.langchain4j.guardrails.MessageModeratorInputGuardrail
All Implemented Interfaces:
Guardrail<InputGuardrailRequest, InputGuardrailResult>, InputGuardrail

public class MessageModeratorInputGuardrail extends Object implements InputGuardrail
An InputGuardrail that validates user messages using a ModerationModel to detect potentially harmful, inappropriate, or policy-violating content.

This guardrail checks incoming user messages for content that should be moderated, such as hate speech, violence, self-harm, sexual content, or other categories defined by the moderation model. If the model flags the message, validation fails with a fatal result, preventing the message from being processed further.
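For illustration, the flow described above can be sketched as a custom guardrail of the same shape. This is a simplified, hypothetical re-implementation, not the source of this class: the constructor signature, the success()/fatal() helper methods inherited from InputGuardrail, and the exact package names are assumptions that may differ across versions.

import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.guardrails.InputGuardrail;
import dev.langchain4j.guardrails.InputGuardrailResult;
import dev.langchain4j.model.moderation.Moderation;
import dev.langchain4j.model.moderation.ModerationModel;

// Hypothetical simplified sketch of a moderating input guardrail
public class ModeratingInputGuardrail implements InputGuardrail {

    private final ModerationModel moderationModel;

    public ModeratingInputGuardrail(ModerationModel moderationModel) {
        this.moderationModel = moderationModel;
    }

    @Override
    public InputGuardrailResult validate(UserMessage userMessage) {
        // Ask the moderation model to classify the text of the user message
        Moderation moderation = moderationModel.moderate(userMessage.singleText()).content();
        if (moderation.flagged()) {
            // A flagged message fails validation with a fatal result,
            // so it is never forwarded for further processing
            return fatal("User message was flagged by the moderation model: " + moderation.flaggedText());
        }
        return success();
    }
}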

This is useful for ensuring that user inputs comply with content policies before being sent to an LLM or processed by the application.
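A minimal usage sketch follows. It assumes the class exposes a constructor that accepts a ModerationModel and that the AI service builder in use supports registering input guardrails; the inputGuardrails method, the Assistant interface, and the chatModel variable are illustrative and not confirmed by this page.

import dev.langchain4j.guardrails.MessageModeratorInputGuardrail;
import dev.langchain4j.model.moderation.ModerationModel;
import dev.langchain4j.model.openai.OpenAiModerationModel;
import dev.langchain4j.service.AiServices;

interface Assistant {
    String chat(String userMessage);
}

// Moderation model used to screen user inputs (OpenAI shown as an example)
ModerationModel moderationModel = OpenAiModerationModel.builder()
        .apiKey(System.getenv("OPENAI_API_KEY"))
        .build();

Assistant assistant = AiServices.builder(Assistant.class)
        .chatModel(chatModel) // chat model configured elsewhere
        .inputGuardrails(new MessageModeratorInputGuardrail(moderationModel)) // assumed constructor
        .build();

// A flagged input fails fast with a fatal guardrail result instead of reaching the LLM
String answer = assistant.chat("Hello, can you help me plan a trip?");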