BGE-Multilingual-Gemma2

Embeddings

BGE-Multilingual-Gemma2 is an LLM-based multilingual embedding model trained on a diverse range of languages and tasks. It primarily demonstrates the following advancements:

Diverse training data: The model's training data spans a broad range of languages, including English, Chinese, Japanese, Korean, French, and more. Additionally, the data covers a variety of task types, such as retrieval, classification, and clustering.

Outstanding performance: The model achieves state-of-the-art (SOTA) results on multilingual benchmarks such as MIRACL, MTEB-pl, and MTEB-fr. It also performs strongly on other major evaluations, including MTEB, C-MTEB, and AIR-Bench.
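As an embedding model, BGE-Multilingual-Gemma2 maps text to dense vectors that can be compared with cosine similarity for tasks like retrieval. Below is a minimal sketch of the scoring step only: the commented-out loading lines reflect the common `sentence-transformers` usage pattern for Hugging Face models (not verified here), and the vectors are small dummy placeholders rather than real model output.

```python
import numpy as np

# In practice the vectors would come from the model, e.g. (hedged, common
# sentence-transformers usage pattern; not executed here):
#   from sentence_transformers import SentenceTransformer
#   model = SentenceTransformer("BAAI/bge-multilingual-gemma2")
#   vecs = model.encode(texts, normalize_embeddings=True)
# Here we use tiny dummy vectors to illustrate retrieval-style scoring.

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([0.1, 0.9, 0.2])          # dummy query embedding
docs = {
    "doc_a": np.array([0.1, 0.8, 0.3]),    # nearly parallel to the query
    "doc_b": np.array([0.9, 0.1, 0.0]),    # nearly orthogonal to the query
}

# Rank documents by similarity to the query
scores = {name: cosine_similarity(query, vec) for name, vec in docs.items()}
best = max(scores, key=scores.get)
```

With real embeddings the same ranking loop applies unchanged; if the model's outputs are L2-normalized, the cosine similarity reduces to a plain dot product.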

About the BGE-Multilingual-Gemma2 model

Published on Hugging Face

June 29, 2024


Tokens sent

0.01 / Mtoken (input)


Context size
Unknown
Parameters
0.567B

Try the model.