Enum Class OTelGenAiAttributes
java.lang.Object
  java.lang.Enum<OTelGenAiAttributes>
    dev.langchain4j.micrometer.metrics.conventions.OTelGenAiAttributes

All Implemented Interfaces:
  Serializable, Comparable<OTelGenAiAttributes>, Constable
Nested Class Summary

Nested classes/interfaces inherited from class Enum:
  Enum.EnumDesc<E>

Enum Constant Summary

  ERROR_TYPE - The type of error that occurred.
  OPERATION_NAME - The name of the operation being performed.
  PROVIDER_NAME - The Generative AI provider as identified by the client or server instrumentation.
  REQUEST_FREQUENCY_PENALTY - The frequency penalty setting for the model request.
  REQUEST_MAX_TOKENS - The maximum number of tokens the model generates for a request.
  REQUEST_MODEL - The name of the model a request is being made to.
  REQUEST_PRESENCE_PENALTY - The presence penalty setting for the model request.
  REQUEST_STOP_SEQUENCES - List of sequences that the model will use to stop generating further tokens.
  REQUEST_TEMPERATURE - The temperature setting for the model request.
  REQUEST_TOP_K - The top_k sampling setting for the model request.
  REQUEST_TOP_P - The top_p sampling setting for the model request.
  RESPONSE_FINISH_REASONS - Reasons the model stopped generating tokens, corresponding to each generation received.
  RESPONSE_ID - The unique identifier for the AI response.
  RESPONSE_MODEL - The name of the model that generated the response.
  SERVER_ADDRESS - The GenAI server address.
  SERVER_PORT - The GenAI server port.
  TOKEN_TYPE - The type of token that is counted: input, output, total.
Method Summary

  value()
  static OTelGenAiAttributes valueOf(String name)
      Returns the enum constant of this class with the specified name.
  static OTelGenAiAttributes[] values()
      Returns an array containing the constants of this enum class, in the order they are declared.
Enum Constant Details

OPERATION_NAME
  The name of the operation being performed.

PROVIDER_NAME
  The Generative AI provider as identified by the client or server instrumentation.

TOKEN_TYPE
  The type of token that is counted: input, output, total.

REQUEST_MODEL
  The name of the model a request is being made to.

REQUEST_FREQUENCY_PENALTY
  The frequency penalty setting for the model request.

REQUEST_MAX_TOKENS
  The maximum number of tokens the model generates for a request.

REQUEST_PRESENCE_PENALTY
  The presence penalty setting for the model request.

REQUEST_STOP_SEQUENCES
  List of sequences that the model will use to stop generating further tokens.

REQUEST_TEMPERATURE
  The temperature setting for the model request.

REQUEST_TOP_K
  The top_k sampling setting for the model request.

REQUEST_TOP_P
  The top_p sampling setting for the model request.

RESPONSE_FINISH_REASONS
  Reasons the model stopped generating tokens, corresponding to each generation received.

RESPONSE_ID
  The unique identifier for the AI response.

RESPONSE_MODEL
  The name of the model that generated the response.

ERROR_TYPE
  The type of error that occurred.

SERVER_PORT
  The GenAI server port.

SERVER_ADDRESS
  The GenAI server address.
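The constants above pair an enum name with an underlying attribute key exposed through the value() accessor listed in the method summary. The sketch below is a minimal, self-contained stand-in for that pattern; the key strings are assumptions based on the OpenTelemetry GenAI semantic conventions and are not taken from this page, and the real class requires the langchain4j-micrometer dependency.

```java
// Sketch only: a stand-in enum mirroring three of the constants above.
// The attribute key strings are assumed from the OTel GenAI semantic
// conventions; the real class may use different values.
enum GenAiAttributeSketch {
    OPERATION_NAME("gen_ai.operation.name"), // name of the operation performed
    REQUEST_MODEL("gen_ai.request.model"),   // model the request is made to
    ERROR_TYPE("error.type");                // type of error that occurred

    private final String key;

    GenAiAttributeSketch(String key) {
        this.key = key;
    }

    // Analogous to the value() accessor in the method summary.
    public String value() {
        return key;
    }
}

public class AttributeSketchDemo {
    public static void main(String[] args) {
        // values() iterates the constants in declaration order.
        for (GenAiAttributeSketch attribute : GenAiAttributeSketch.values()) {
            System.out.println(attribute.name() + " -> " + attribute.value());
        }
    }
}
```

Keeping the attribute key behind an accessor like this lets metric instrumentation reference stable, typo-proof constants instead of raw strings.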
Method Details

values

  Returns an array containing the constants of this enum class, in the order they are declared.

  Returns:
    an array containing the constants of this enum class, in the order they are declared

valueOf

  Returns the enum constant of this class with the specified name. The string must match exactly an identifier used to declare an enum constant in this class. (Extraneous whitespace characters are not permitted.)

  Parameters:
    name - the name of the enum constant to be returned
  Returns:
    the enum constant with the specified name
  Throws:
    IllegalArgumentException - if this enum class has no constant with the specified name
    NullPointerException - if the argument is null
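The values() and valueOf() contract described above is the one every Java enum carries. A runnable sketch using a stand-in enum (the real class requires the langchain4j-micrometer dependency):

```java
// Stand-in enum for demonstrating the standard values()/valueOf() contract.
enum AttributeNameSketch { OPERATION_NAME, REQUEST_MODEL, ERROR_TYPE }

public class ValueOfDemo {
    public static void main(String[] args) {
        // values(): constants in declaration order.
        System.out.println(AttributeNameSketch.values()[0]); // OPERATION_NAME

        // valueOf(): exact-match lookup by constant name.
        AttributeNameSketch found = AttributeNameSketch.valueOf("REQUEST_MODEL");
        System.out.println(found);

        // Extraneous whitespace is not permitted and triggers
        // IllegalArgumentException.
        try {
            AttributeNameSketch.valueOf(" REQUEST_MODEL ");
        } catch (IllegalArgumentException e) {
            System.out.println("no constant for padded name");
        }
    }
}
```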
value