Enum Class WorkersAiChatModelName

java.lang.Object
java.lang.Enum<WorkersAiChatModelName>
dev.langchain4j.model.workersai.WorkersAiChatModelName
All Implemented Interfaces:
Serializable, Comparable<WorkersAiChatModelName>, Constable

public enum WorkersAiChatModelName extends Enum<WorkersAiChatModelName>
Enumerates the chat model names available through Cloudflare Workers AI.
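
The constants are typically used when configuring the Workers AI chat model integration. The snippet below is a minimal sketch: only the enum usage comes from this page, while the commented-out builder wiring (accountId, apiToken, modelName) is an assumption about the WorkersAiChatModel API and should be checked against its own documentation.

  import dev.langchain4j.model.workersai.WorkersAiChatModelName;

  public class ChooseModelExample {
      public static void main(String[] args) {
          // Pick one of the Workers AI chat models by its enum constant.
          WorkersAiChatModelName model = WorkersAiChatModelName.LLAMA_3_8B_INSTRUCT;

          // toString() is overridden by this enum (see the method details below),
          // so this prints the enum's own string form rather than necessarily
          // the constant identifier.
          System.out.println("Selected model: " + model);

          // Hypothetical wiring into the chat model client. The builder method
          // names below are assumptions, not part of this page:
          // WorkersAiChatModel chatModel = WorkersAiChatModel.builder()
          //         .accountId(System.getenv("CF_ACCOUNT_ID"))
          //         .apiToken(System.getenv("CF_API_TOKEN"))
          //         .modelName(model.toString())
          //         .build();
      }
  }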
  • Nested Class Summary

    Nested classes/interfaces inherited from class java.lang.Enum

    Enum.EnumDesc<E extends Enum<E>>
  • Enum Constant Summary

    Enum Constants
    Enum Constant / Description
    CODELLAMA_7B_AWQ
      Instruct fine-tuned version of the Mistral-7b generative text model with 7 billion parameters.
    DEEPSEEK_CODER_6_7_BASE
      DeepSeek Coder is composed of a series of code language models, each trained from scratch on 2T tokens, with a composition of 87% code and 13% natural language in both English and Chinese.
    DEEPSEEK_CODER_MATH_7B_AWQ
      DeepSeek Coder is composed of a series of code language models, each trained from scratch on 2T tokens, with a composition of 87% code and 13% natural language in both English and Chinese.
    DEEPSEEK_CODER_MATH_7B_INSTRUCT
      DeepSeekMath is initialized with DeepSeek-Coder-v1.5 7B and continues pre-training on math-related tokens sourced from Common Crawl, together with natural language and code data for 500B tokens.
    DISCOLM_GERMAN_7B_V1_AWQ
      DiscoLM German 7b is a Mistral-based large language model with a focus on German-language applications.
    FALCOM_7B_INSTRUCT
      Falcon-7B-Instruct is a 7B-parameter causal decoder-only model built by TII, based on Falcon-7B and fine-tuned on a mixture of chat/instruct datasets.
    GEMMA_2B_IT_LORA
      This is a Gemma-2B base model that Cloudflare dedicates for inference with LoRA adapters.
    GEMMA_2B_IT_LORA_DUPLICATE
      This is a Gemma-7B base model that Cloudflare dedicates for inference with LoRA adapters.
    GEMMA_7B_IT
      Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models.
    HERMES_2_PRO_MISTRAL_7B
      Hermes 2 Pro on Mistral 7B is the new flagship 7B Hermes model. Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.
    LLAMA_2_13B_CHAT_AWQ
      Llama 2 13B Chat AWQ is an efficient, accurate and blazing-fast low-bit weight quantized Llama 2 variant.
    LLAMA_2_13B_CHAT_AWQ_DUPLICATE
      Quantized (int4) generative text model with 8 billion parameters from Meta.
    LLAMA_2_7B_CHAT_HF_LORA
      This is a Llama2 base model that Cloudflare dedicated for inference with LoRA adapters.
    LLAMA_3_8B_INSTRUCT
      Generation over generation, Meta Llama 3 demonstrates state-of-the-art performance on a wide range of industry benchmarks and offers new capabilities, including improved reasoning.
    LLAMA2_7B_FULL
      Full precision (fp16) generative text model with 7 billion parameters from Meta.
    LLAMA2_7B_QUANTIZED
      Quantized (int8) generative text model with 7 billion parameters from Meta.
    LLAMAGUARD_7B_AWQ
      Llama Guard is a model for classifying the safety of LLM prompts and responses, using a taxonomy of safety risks.
    META_LLAMA_3_8B_INSTRUCT
      Quantized (int4) generative text model with 8 billion parameters from Meta.
    MISTRAL_7B_INSTRUCT
      DeepSeekMath-Instruct 7B is a mathematically instructed tuning model derived from DeepSeekMath-Base 7B.
    MISTRAL_7B_INSTRUCT_V0_1_AWQ
      Mistral 7B Instruct v0.1 AWQ is an efficient, accurate and blazing-fast low-bit weight quantized Mistral variant.
    MISTRAL_7B_INSTRUCT_V0_2
      The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.2.
    MISTRAL_7B_INSTRUCT_V0_2_LORA
      The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.2.
    NEURAL_CHAT_7B_V3_1_AWQ
      This model is a 7B-parameter LLM fine-tuned on the Intel Gaudi 2 processor from mistralai/Mistral-7B-v0.1 on the open-source dataset Open-Orca/SlimOrca.
    OPENCHAT_3_5_0106
      OpenChat is an innovative library of open-source language models, fine-tuned with C-RLFT - a strategy inspired by offline reinforcement learning.
    OPENHERMES_2_5_MISTRAL_7B_AWQ
      OpenHermes 2.5 Mistral 7B is a state-of-the-art Mistral fine-tune and a continuation of the OpenHermes 2 model, trained on additional code datasets.
    PHI_2
      Phi-2 is a Transformer-based model with a next-word prediction objective, trained on 1.4T tokens from multiple passes on a mixture of Synthetic and Web datasets for NLP and coding.
    QWEN1_5_0_5B_CHAT
      Qwen1.5 is the improved version of Qwen, the large language model series developed by Alibaba Cloud.
    QWEN1_5_1_8B_CHAT
      Qwen1.5 is the improved version of Qwen, the large language model series developed by Alibaba Cloud.
    QWEN1_5_14B_CHAT_AWQ
      Qwen1.5 is the improved version of Qwen, the large language model series developed by Alibaba Cloud.
    QWEN1_5_7B_CHAT_AWQ
      Qwen1.5 is the improved version of Qwen, the large language model series developed by Alibaba Cloud.
    SQLCODER_7B_2
      This model is intended to be used by non-technical users to understand data inside their SQL databases.
    STARLING_LM_7B_BETA
      Starling-LM-7B-beta is an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF).
    TINYLLAMA_1_1B_CHAT_V1_0
      The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens.
    UNA_CYBERTRON_7B_V2_BF16
      Cybertron 7B v2 is a 7B MistralAI-based model, the best in its series.
    ZEPHYR_7B_BETA_AWQ
      Zephyr 7B Beta AWQ is an efficient, accurate and blazing-fast low-bit weight quantized Zephyr model variant.
  • Method Summary

    Modifier and Type / Method / Description
    String toString()
    static WorkersAiChatModelName valueOf(String name)
      Returns the enum constant of this class with the specified name.
    static WorkersAiChatModelName[] values()
      Returns an array containing the constants of this enum class, in the order they are declared.

    Methods inherited from class java.lang.Object

    getClass, notify, notifyAll, wait, wait, wait
  • Enum Constant Details

    • LLAMA2_7B_FULL

      public static final WorkersAiChatModelName LLAMA2_7B_FULL
      Full precision (fp16) generative text model with 7 billion parameters from Meta.
    • LLAMA2_7B_QUANTIZED

      public static final WorkersAiChatModelName LLAMA2_7B_QUANTIZED
      Quantized (int8) generative text model with 7 billion parameters from Meta.
    • CODELLAMA_7B_AWQ

      public static final WorkersAiChatModelName CODELLAMA_7B_AWQ
      Instruct fine-tuned version of the Mistral-7b generative text model with 7 billion parameters.
    • DEEPSEEK_CODER_6_7_BASE

      public static final WorkersAiChatModelName DEEPSEEK_CODER_6_7_BASE
      DeepSeek Coder is composed of a series of code language models, each trained from scratch on 2T tokens, with a composition of 87% code and 13% natural language in both English and Chinese.
    • DEEPSEEK_CODER_MATH_7B_AWQ

      public static final WorkersAiChatModelName DEEPSEEK_CODER_MATH_7B_AWQ
      DeepSeek Coder is composed of a series of code language models, each trained from scratch on 2T tokens, with a composition of 87% code and 13% natural language in both English and Chinese.
    • DEEPSEEK_CODER_MATH_7B_INSTRUCT

      public static final WorkersAiChatModelName DEEPSEEK_CODER_MATH_7B_INSTRUCT
      DeepSeekMath is initialized with DeepSeek-Coder-v1.5 7B and continues pre-training on math-related tokens sourced from Common Crawl, together with natural language and code data for 500B tokens.
    • MISTRAL_7B_INSTRUCT

      public static final WorkersAiChatModelName MISTRAL_7B_INSTRUCT
      DeepSeekMath-Instruct 7B is a mathematically instructed tuning model derived from DeepSeekMath-Base 7B. DeepSeekMath is initialized with DeepSeek-Coder-v1.5 7B and continues pre-training on math-related tokens sourced from Common Crawl, together with natural language and code data for 500B tokens.
    • DISCOLM_GERMAN_7B_V1_AWQ

      public static final WorkersAiChatModelName DISCOLM_GERMAN_7B_V1_AWQ
      DiscoLM German 7b is a Mistral-based large language model with a focus on German-language applications. AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization.
    • FALCOM_7B_INSTRUCT

      public static final WorkersAiChatModelName FALCOM_7B_INSTRUCT
      Falcon-7B-Instruct is a 7B-parameter causal decoder-only model built by TII, based on Falcon-7B and fine-tuned on a mixture of chat/instruct datasets.
    • GEMMA_2B_IT_LORA

      public static final WorkersAiChatModelName GEMMA_2B_IT_LORA
      This is a Gemma-2B base model that Cloudflare dedicates for inference with LoRA adapters. Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models.
    • GEMMA_7B_IT

      public static final WorkersAiChatModelName GEMMA_7B_IT
      Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants.
    • GEMMA_2B_IT_LORA_DUPLICATE

      public static final WorkersAiChatModelName GEMMA_2B_IT_LORA_DUPLICATE
      This is a Gemma-7B base model that Cloudflare dedicates for inference with LoRA adapters. Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models.
    • HERMES_2_PRO_MISTRAL_7B

      public static final WorkersAiChatModelName HERMES_2_PRO_MISTRAL_7B
      Hermes 2 Pro on Mistral 7B is the new flagship 7B Hermes model. Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.
    • LLAMA_2_13B_CHAT_AWQ

      public static final WorkersAiChatModelName LLAMA_2_13B_CHAT_AWQ
      Llama 2 13B Chat AWQ is an efficient, accurate and blazing-fast low-bit weight quantized Llama 2 variant.
    • LLAMA_2_7B_CHAT_HF_LORA

      public static final WorkersAiChatModelName LLAMA_2_7B_CHAT_HF_LORA
      This is a Llama2 base model that Cloudflare dedicated for inference with LoRA adapters. Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format.
    • LLAMA_3_8B_INSTRUCT

      public static final WorkersAiChatModelName LLAMA_3_8B_INSTRUCT
      Generation over generation, Meta Llama 3 demonstrates state-of-the-art performance on a wide range of industry benchmarks and offers new capabilities, including improved reasoning.
    • LLAMA_2_13B_CHAT_AWQ_DUPLICATE

      public static final WorkersAiChatModelName LLAMA_2_13B_CHAT_AWQ_DUPLICATE
      Quantized (int4) generative text model with 8 billion parameters from Meta.
    • LLAMAGUARD_7B_AWQ

      public static final WorkersAiChatModelName LLAMAGUARD_7B_AWQ
      Llama Guard is a model for classifying the safety of LLM prompts and responses, using a taxonomy of safety risks.
    • META_LLAMA_3_8B_INSTRUCT

      public static final WorkersAiChatModelName META_LLAMA_3_8B_INSTRUCT
      Quantized (int4) generative text model with 8 billion parameters from Meta.
    • MISTRAL_7B_INSTRUCT_V0_1_AWQ

      public static final WorkersAiChatModelName MISTRAL_7B_INSTRUCT_V0_1_AWQ
      Mistral 7B Instruct v0.1 AWQ is an efficient, accurate and blazing-fast low-bit weight quantized Mistral variant.
    • MISTRAL_7B_INSTRUCT_V0_2

      public static final WorkersAiChatModelName MISTRAL_7B_INSTRUCT_V0_2
      The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.2. Mistral-7B-v0.2 has the following changes compared to Mistral-7B-v0.1: 32k context window (vs 8k context in v0.1), rope-theta = 1e6, and no Sliding-Window Attention.
    • MISTRAL_7B_INSTRUCT_V0_2_LORA

      public static final WorkersAiChatModelName MISTRAL_7B_INSTRUCT_V0_2_LORA
      The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.2.
    • NEURAL_CHAT_7B_V3_1_AWQ

      public static final WorkersAiChatModelName NEURAL_CHAT_7B_V3_1_AWQ
      This model is a 7B-parameter LLM fine-tuned on the Intel Gaudi 2 processor from mistralai/Mistral-7B-v0.1 on the open-source dataset Open-Orca/SlimOrca.
    • OPENCHAT_3_5_0106

      public static final WorkersAiChatModelName OPENCHAT_3_5_0106
      OpenChat is an innovative library of open-source language models, fine-tuned with C-RLFT - a strategy inspired by offline reinforcement learning.
    • OPENHERMES_2_5_MISTRAL_7B_AWQ

      public static final WorkersAiChatModelName OPENHERMES_2_5_MISTRAL_7B_AWQ
      OpenHermes 2.5 Mistral 7B is a state-of-the-art Mistral fine-tune and a continuation of the OpenHermes 2 model, trained on additional code datasets.
    • PHI_2

      public static final WorkersAiChatModelName PHI_2
      Phi-2 is a Transformer-based model with a next-word prediction objective, trained on 1.4T tokens from multiple passes on a mixture of Synthetic and Web datasets for NLP and coding.
    • QWEN1_5_0_5B_CHAT

      public static final WorkersAiChatModelName QWEN1_5_0_5B_CHAT
      Qwen1.5 is the improved version of Qwen, the large language model series developed by Alibaba Cloud.
    • QWEN1_5_1_8B_CHAT

      public static final WorkersAiChatModelName QWEN1_5_1_8B_CHAT
      Qwen1.5 is the improved version of Qwen, the large language model series developed by Alibaba Cloud.
    • QWEN1_5_14B_CHAT_AWQ

      public static final WorkersAiChatModelName QWEN1_5_14B_CHAT_AWQ
      Qwen1.5 is the improved version of Qwen, the large language model series developed by Alibaba Cloud. AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization.
    • QWEN1_5_7B_CHAT_AWQ

      public static final WorkersAiChatModelName QWEN1_5_7B_CHAT_AWQ
      Qwen1.5 is the improved version of Qwen, the large language model series developed by Alibaba Cloud. AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization.
    • SQLCODER_7B_2

      public static final WorkersAiChatModelName SQLCODER_7B_2
      This model is intended to be used by non-technical users to understand data inside their SQL databases.
    • STARLING_LM_7B_BETA

      public static final WorkersAiChatModelName STARLING_LM_7B_BETA
      Starling-LM-7B-beta is an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF). It is trained from Openchat-3.5-0106 with the Nexusflow/Starling-RM-34B reward model and the PPO policy optimization method (Fine-Tuning Language Models from Human Preferences).
    • TINYLLAMA_1_1B_CHAT_V1_0

      public static final WorkersAiChatModelName TINYLLAMA_1_1B_CHAT_V1_0
      The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. This is the chat model finetuned on top of TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T.
    • UNA_CYBERTRON_7B_V2_BF16

      public static final WorkersAiChatModelName UNA_CYBERTRON_7B_V2_BF16
      Cybertron 7B v2 is a 7B MistralAI-based model, the best in its series. It was trained with SFT, DPO and UNA (Unified Neural Alignment) on multiple datasets.
    • ZEPHYR_7B_BETA_AWQ

      public static final WorkersAiChatModelName ZEPHYR_7B_BETA_AWQ
      Zephyr 7B Beta AWQ is an efficient, accurate and blazing-fast low-bit weight quantized Zephyr model variant.
  • Method Details

    • values

      public static WorkersAiChatModelName[] values()
      Returns an array containing the constants of this enum class, in the order they are declared.
      Returns:
      an array containing the constants of this enum class, in the order they are declared
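
      For example, values() can be used to enumerate every supported chat model at runtime, e.g. to populate a configuration UI or validate input. A minimal sketch using only methods documented on this page:

        import dev.langchain4j.model.workersai.WorkersAiChatModelName;

        public class ListModelsExample {
            public static void main(String[] args) {
                // Iterate over all declared constants, in declaration order.
                for (WorkersAiChatModelName model : WorkersAiChatModelName.values()) {
                    // name() is the Java identifier; toString() is the enum's
                    // overridden string form.
                    System.out.printf("%-35s %s%n", model.name(), model);
                }
                System.out.println("Total models: " + WorkersAiChatModelName.values().length);
            }
        }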
    • valueOf

      public static WorkersAiChatModelName valueOf(String name)
      Returns the enum constant of this class with the specified name. The string must match exactly an identifier used to declare an enum constant in this class. (Extraneous whitespace characters are not permitted.)
      Parameters:
      name - the name of the enum constant to be returned.
      Returns:
      the enum constant with the specified name
      Throws:
      IllegalArgumentException - if this enum class has no constant with the specified name
      NullPointerException - if the argument is null
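
      Because valueOf(String) throws IllegalArgumentException for unknown names, callers that read the model name from configuration usually guard the call. A sketch (the fromConfig helper and the fallback choice are illustrative, not part of this class):

        import dev.langchain4j.model.workersai.WorkersAiChatModelName;

        public class ParseModelNameExample {

            // Resolves a constant from a raw configuration value, falling back
            // to a default when the name does not match any declared constant.
            static WorkersAiChatModelName fromConfig(String raw, WorkersAiChatModelName fallback) {
                if (raw == null) {
                    return fallback;
                }
                try {
                    // valueOf() requires an exact identifier match, so trim and
                    // upper-case the raw value before the lookup.
                    return WorkersAiChatModelName.valueOf(raw.trim().toUpperCase());
                } catch (IllegalArgumentException e) {
                    return fallback;
                }
            }

            public static void main(String[] args) {
                System.out.println(fromConfig("llama_3_8b_instruct", WorkersAiChatModelName.MISTRAL_7B_INSTRUCT));
                System.out.println(fromConfig("not-a-model", WorkersAiChatModelName.MISTRAL_7B_INSTRUCT));
            }
        }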
    • toString

      public String toString()
      Overrides:
      toString in class Enum<WorkersAiChatModelName>
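
      The javadoc gives no description for this override, so its exact return value is not specified here; presumably it yields the enum's Workers AI model string rather than the Java constant identifier. When the distinction matters, name() is the value guaranteed to round-trip through valueOf(String). A brief illustration under that assumption:

        import dev.langchain4j.model.workersai.WorkersAiChatModelName;

        public class NameVsToStringExample {
            public static void main(String[] args) {
                WorkersAiChatModelName model = WorkersAiChatModelName.PHI_2;

                // name() is always the declared Java identifier, i.e. "PHI_2".
                String javaName = model.name();

                // toString() is overridden by this enum; its exact value is not
                // documented on this page, so treat it only as a display/model string.
                String display = model.toString();

                System.out.println(javaName + " -> " + display);

                // Only name() is guaranteed to round-trip through valueOf().
                assert WorkersAiChatModelName.valueOf(javaName) == model;
            }
        }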