[MIEB] feat: add jina-clip-v2 to MIEB #1435

Draft · wants to merge 1 commit into base branch mieb
Conversation

bwanglzu (Contributor) commented Nov 11, 2024

Note: this is a draft PR, since the model has not been released yet.

Add our latest model jina-clip-v2 to MIEB. Like jina-clip-v1, it is trained on text retrieval and image-caption retrieval tasks, while featuring several new characteristics:

  1. It supports 89 languages, making it a multilingual multi-modal embedding model.
  2. It supports MRL (Matryoshka Representation Learning), so embedding dimensions can be truncated.
  3. It has a basic capability for understanding documents (visually rich document retrieval), reaching performance similar to So400m/Bipali on ViDoRE and our internal benchmark; the image tower could be used as a visual backbone for VLLM training.
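For context, MRL-style truncation is typically just a prefix slice of the embedding followed by L2 re-normalization, since Matryoshka training concentrates the most important information in the leading dimensions. A minimal sketch (illustrative only, not code from this PR):

```python
import math

def truncate_embedding(vec, dim):
    """Truncate a Matryoshka-style embedding to `dim` dimensions
    and re-normalize it to unit L2 norm."""
    head = vec[:dim]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

# Toy 8-dimensional embedding truncated to 4 dimensions.
full = [0.5, -0.25, 0.25, 0.1, -0.05, 0.3, 0.2, -0.1]
small = truncate_embedding(full, 4)
```

The truncated vector can then be used as a drop-in, lower-dimensional embedding, trading a small amount of quality for storage and compute savings.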

Compared to jina-clip-v1, it now uses a prompt when encoding queries to improve text retrieval performance, so I added an additional `task` parameter (open for discussion), similar to jina-embeddings-v3.
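The proposed `task` parameter can be sketched roughly as follows: the wrapper prepends an instruction prompt when encoding retrieval queries and leaves other inputs untouched. The task name and prompt text below are illustrative placeholders, not the actual jina-clip-v2 prompts:

```python
# Hypothetical prompt table; the real model defines its own task
# names and prompt strings.
PROMPTS = {
    "retrieval.query": "Represent the query for retrieving evidence documents: ",
}

def prepare_texts(texts, task=None):
    """Prepend the task-specific prompt (if any) before encoding."""
    prefix = PROMPTS.get(task, "")
    return [prefix + t for t in texts]

queries = prepare_texts(["what is MRL?"], task="retrieval.query")
passages = prepare_texts(["MRL stands for ..."])  # no task: unchanged
```

This mirrors the jina-embeddings-v3 interface mentioned above, where the caller selects a task and the model handles prompting internally.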

Checklist

  • Run tests locally to make sure nothing is broken using make test.
  • Run the formatter to format the code using make lint.

Adding datasets checklist

Reason for dataset addition: ...

  • I have run the following models on the task (adding the results to the pr). These can be run using the mteb -m {model_name} -t {task_name} command.
    • sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
    • intfloat/multilingual-e5-small
  • I have checked that the performance is neither trivial (both models gain close to perfect scores) nor random (both models gain close to random scores).
  • If the dataset is too big (e.g. >2048 examples), consider using self.stratified_subsampling() under dataset_transform()
  • I have filled out the metadata object in the dataset file (find documentation on it here).
  • Run tests locally to make sure nothing is broken using make test.
  • Run the formatter to format the code using make lint.

Adding a model checklist

  • I have filled out the ModelMeta object to the extent possible
  • I have ensured that my model can be loaded using
    • mteb.get_model(model_name, revision) and
    • mteb.get_model_meta(model_name, revision)
  • I have tested the implementation works on a representative set of tasks.
