
Add new models nvidia, gte, linq #1436

Open · wants to merge 1 commit into main

Conversation

@AlexeyVatolin (Contributor)

Added two gte models (gte-Qwen1.5-7B-instruct, gte-Qwen2-1.5B-instruct), two nvidia models (NV-Embed-v1, NV-Embed-v2), and Linq-Embed-Mistral to the model registry.

Checklist

  • Run tests locally to make sure nothing is broken using make test.
  • Run the formatter to format the code using make lint.

Adding a model checklist

  • I have filled out the ModelMeta object to the extent possible
  • I have ensured that my model can be loaded using both of the following (see the loading sketch after this checklist):
    • mteb.get_model(model_name, revision) and
    • mteb.get_model_meta(model_name, revision)
  • I have tested that the implementation works on a representative set of tasks.
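
A minimal sketch of those two loading checks (the model ID is one of the models added in this PR; the revision value is a placeholder, not the actual pinned revision):

```python
import mteb

model_name = "Linq-AI-Research/Linq-Embed-Mistral"  # one of the models added here
revision = None  # placeholder; use the pinned revision from the ModelMeta entry

meta = mteb.get_model_meta(model_name, revision)  # registry metadata (ModelMeta)
model = mteb.get_model(model_name, revision)      # instantiated model, ready to encode

print(meta.name, meta.revision)
```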

For testing, I took the examples from the Hugging Face model card and compared their results with the results from the mteb model registry implementation. For all new models, the text distance scores match to at least three significant figures.
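
A minimal sketch of that comparison, assuming the wrapper exposes a sentence-transformers-style `encode()`; the texts below are illustrative stand-ins, not the actual model-card examples:

```python
import numpy as np
import mteb

model = mteb.get_model("Linq-AI-Research/Linq-Embed-Mistral")

# Illustrative stand-ins for the query/passage pair from the model card.
texts = [
    "how much protein should a female eat",
    "As a general guideline, adult women need about 46 grams of protein per day.",
]

emb = np.asarray(model.encode(texts))                  # shape: (2, hidden_dim)
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
score = float(emb[0] @ emb[1])                         # cosine similarity

# Compare against the score printed on the model card; the claim above is
# that they agree to at least three figures.
print(round(score, 4))
```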

@Samoed (Collaborator) left a comment

Great changes! Can you submit results?

Resolved review threads on mteb/models/nvidia_models.py
@AlexeyVatolin (Contributor, Author)

> Great changes! Can you submit results?

What do you mean: results on one arbitrary task from the English MTEB?

@Samoed (Collaborator) commented Nov 11, 2024

On some tasks from the leaderboard, to make sure that the implementation matches.

Resolved review thread on mteb/models/gte_models.py
@AlexeyVatolin (Contributor, Author)

@Samoed, I computed scores on the same tasks as in your previous model-addition pull request (#1319).
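
For reference, a minimal sketch of how these runs can be reproduced, assuming the standard mteb evaluation API and the Hugging Face IDs of the added models (the task list is abbreviated to a few of the tasks in the tables below):

```python
import mteb

model_names = [
    "Linq-AI-Research/Linq-Embed-Mistral",
    "nvidia/NV-Embed-v1",
    "nvidia/NV-Embed-v2",
    "Alibaba-NLP/gte-Qwen1.5-7B-instruct",
    "Alibaba-NLP/gte-Qwen2-1.5B-instruct",
]
tasks = mteb.get_tasks(tasks=["SciFact", "SCIDOCS", "STSBenchmark", "SprintDuplicateQuestions"])

for name in model_names:
    model = mteb.get_model(name)
    evaluation = mteb.MTEB(tasks=tasks)
    evaluation.run(model, output_folder=f"results/{name}")
```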

**Classification**

| Model | Source | AmazonCounterfactualClassification | EmotionClassification | ToxicConversationsClassification |
|---|---|---|---|---|
| Linq-Embed-Mistral | Leaderboard | 84.43 | 51.82 | 71.29 |
| Linq-Embed-Mistral | Pull request | 84.94 | 56.45 | 71.82 |
| NV-Embed-v1 | Leaderboard | 95.12 | 91.7 | 92.6 |
| NV-Embed-v1 | Pull request | 71.03 | 79.26 | 78.96 |
| NV-Embed-v2 | Leaderboard | 94.28 | 93.38 | 92.74 |
| NV-Embed-v2 | Pull request | 79.28 | 64.79 | 76.3 |
| gte-Qwen1.5-7B-instruct | Leaderboard | 83.16 | 54.53 | 78.75 |
| gte-Qwen1.5-7B-instruct | Pull request | 81.78 | 54.91 | 77.25 |
| gte-Qwen2-1.5B-instruct | Leaderboard | 83.99 | 61.37 | 82.66 |
| gte-Qwen2-1.5B-instruct | Pull request | 82.42 | 65.66 | 84.54 |

**Clustering**

| Model | Source | ArxivClusteringS2S | RedditClustering |
|---|---|---|---|
| Linq-Embed-Mistral | Leaderboard | 47.3 | 61.52 |
| Linq-Embed-Mistral | Pull request | 47.61 | 60.94 |
| NV-Embed-v1 | Leaderboard | 49.59 | 63.2 |
| NV-Embed-v1 | Pull request | 48.31 | 52.29 |
| NV-Embed-v2 | Leaderboard | 51.26 | 71.1 |
| NV-Embed-v2 | Pull request | 46.98 | 55.58 |
| gte-Qwen1.5-7B-instruct | Leaderboard | 51.45 | 73.37 |
| gte-Qwen1.5-7B-instruct | Pull request | 53.57 | 80.12 |
| gte-Qwen2-1.5B-instruct | Leaderboard | 45.01 | 55.82 |
| gte-Qwen2-1.5B-instruct | Pull request | 44.61 | 51.36 |

**PairClassification**

| Model | Source | SprintDuplicateQuestions | TwitterSemEval2015 |
|---|---|---|---|
| Linq-Embed-Mistral | Leaderboard | 96.11 | 81.52 |
| Linq-Embed-Mistral | Pull request | 94.66 | 77.09 |
| NV-Embed-v1 | Leaderboard | 95.94 | 79 |
| NV-Embed-v1 | Pull request | 95.93 | 71.6 |
| NV-Embed-v2 | Leaderboard | 97.02 | 81.11 |
| NV-Embed-v2 | Pull request | 96.99 | 73.33 |
| gte-Qwen1.5-7B-instruct | Leaderboard | 96.07 | 79.36 |
| gte-Qwen1.5-7B-instruct | Pull request | 20.53 | 37.15 |
| gte-Qwen2-1.5B-instruct | Leaderboard | 95.32 | 79.64 |
| gte-Qwen2-1.5B-instruct | Pull request | 29.5 | 42.26 |

**Reranking**

| Model | Source | SciDocsRR | AskUbuntuDupQuestions |
|---|---|---|---|
| Linq-Embed-Mistral | Leaderboard | 86.4 | 66.82 |
| Linq-Embed-Mistral | Pull request | 84.52 | 62.36 |
| NV-Embed-v1 | Leaderboard | 87.26 | 67.5 |
| NV-Embed-v1 | Pull request | 86.29 | 65.27 |
| NV-Embed-v2 | Leaderboard | 87.59 | 67.46 |
| NV-Embed-v2 | Pull request | 85.45 | 64.94 |
| gte-Qwen1.5-7B-instruct | Leaderboard | 87.89 | 66 |
| gte-Qwen1.5-7B-instruct | Pull request | 57.67 | 45.19 |
| gte-Qwen2-1.5B-instruct | Leaderboard | 86.52 | 64.55 |
| gte-Qwen2-1.5B-instruct | Pull request | 67.05 | 48.91 |

**Retrieval**

| Model | Source | SCIDOCS | SciFact |
|---|---|---|---|
| Linq-Embed-Mistral | Leaderboard | 21.93 | 78.32 |
| Linq-Embed-Mistral | Pull request | 22.08 | 78.32 |
| NV-Embed-v1 | Leaderboard | 20.19 | 78.43 |
| NV-Embed-v1 | Pull request | 20.07 | 78.13 |
| NV-Embed-v2 | Leaderboard | 21.9 | 80.13 |
| NV-Embed-v2 | Pull request | 21.67 | 80.11 |
| gte-Qwen1.5-7B-instruct | Leaderboard | 27.69 | 75.31 |
| gte-Qwen1.5-7B-instruct | Pull request | 26.34 | 75.8 |
| gte-Qwen2-1.5B-instruct | Leaderboard | 24.98 | 78.44 |
| gte-Qwen2-1.5B-instruct | Pull request | 23.41 | 77.47 |

**STS**

| Model | Source | STS16 | STSBenchmark |
|---|---|---|---|
| Linq-Embed-Mistral | Leaderboard | 87.37 | 88.81 |
| Linq-Embed-Mistral | Pull request | 87.25 | 88.66 |
| NV-Embed-v1 | Leaderboard | 84.77 | 86.14 |
| NV-Embed-v1 | Pull request | 78.2 | 80.25 |
| NV-Embed-v2 | Leaderboard | 86.77 | 88.41 |
| NV-Embed-v2 | Pull request | 82.79 | 83.56 |
| gte-Qwen1.5-7B-instruct | Leaderboard | 86.39 | 87.35 |
| gte-Qwen1.5-7B-instruct | Pull request | 85.98 | 86.86 |
| gte-Qwen2-1.5B-instruct | Leaderboard | 85.45 | 86.38 |
| gte-Qwen2-1.5B-instruct | Pull request | 84.71 | 84.71 |

**Summarization**

| Model | Source | SummEval |
|---|---|---|
| Linq-Embed-Mistral | Leaderboard | 30.98 |
| Linq-Embed-Mistral | Pull request | 30.39 |
| NV-Embed-v1 | Leaderboard | 31.2 |
| NV-Embed-v1 | Pull request | 29.37 |
| NV-Embed-v2 | Leaderboard | 30.7 |
| NV-Embed-v2 | Pull request | 30.42 |
| gte-Qwen1.5-7B-instruct | Leaderboard | 31.46 |
| gte-Qwen1.5-7B-instruct | Pull request | 31.22 |
| gte-Qwen2-1.5B-instruct | Leaderboard | 31.17 |
| gte-Qwen2-1.5B-instruct | Pull request | 30.5 |

I see a big difference in the results for the gte-Qwen models. Perhaps this is related to the prompts: gte uses a prompt only for the query, while in MTEB the prompt is applied to both the query and the document. Of the models I'm adding, Linq-Embed-Mistral shows the smallest difference in scores; I think it can be merged without any changes. For the other models, it is worth checking the results when the prompt is applied only to the query. @Samoed, what do you think?

@Samoed (Collaborator) commented Nov 16, 2024

Great! The results on classification for NV-Embed show a significant gap as well. I think a wrapper can be created for gte-Qwen to add instructions only to the query. However, it's a bit strange that prompts seem to make performance worse on PairClassification.
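
A hedged sketch of what that wrapper could look like; the `encode_queries`/`encode_corpus` split follows the retrieval-style interface, and the instruction template mirrors the gte-Qwen convention, but the exact hooks mteb expects may differ:

```python
class QueryOnlyInstructWrapper:
    """Prepends the instruction to queries only; documents are encoded as-is
    (the gte-Qwen convention discussed above)."""

    def __init__(self, model, instruction: str):
        self.model = model
        self.instruction = instruction

    def encode_queries(self, queries, **kwargs):
        prompted = [f"Instruct: {self.instruction}\nQuery: {q}" for q in queries]
        return self.model.encode(prompted, **kwargs)

    def encode_corpus(self, corpus, **kwargs):
        # No instruction on the document side.
        return self.model.encode(corpus, **kwargs)

    def encode(self, sentences, **kwargs):
        # Non-retrieval tasks fall through unchanged.
        return self.model.encode(sentences, **kwargs)
```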
