The version of FewNERD #42
Hi @dongguanting,
Thanks a lot for your reply. I still have a question about testing the cross-dataset scenario. How should the script be set up to reproduce the settings in your paper (2 datasets for training, 1 for validation, 1 for test)? Does this mean it needs to perform 2 rounds of training with the spans and types of 2 different ner_train.json files?
Hi @dongguanting, not really. In the cross-domain setting you only need to train once on the training set (Span + Type) and then evaluate it directly; during the training phase the model sees all task data from both domains. For example:
N=1 # 1 or 2 or 3 or 4
K=1 # 1 or 5
...
--dataset Domain \
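For context, here is a minimal sketch of what such a single-run invocation might look like. Only `--dataset Domain` and the `N`/`K` values come from the reply above; the entry-point name `main.py` and the `--N`, `--K`, and `--mode` flags are assumptions for illustration, not the repo's actual interface:

```bash
#!/bin/bash
# Hypothetical single training run for the cross-domain setting.
# Only --dataset Domain and the N/K value ranges are from the maintainer's reply;
# the script name and the remaining flags are assumptions.
N=1   # 1 or 2 or 3 or 4
K=1   # 1 or 5

python main.py \
  --dataset Domain \
  --N "${N}" \
  --K "${K}" \
  --mode train   # assumed flag: train Span + Type once, then evaluate directly
```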
Maybe you accidentally reversed the results of the ACL version and the arXiv version in this repo? (The F1 of the Few-NERD arXiv version is higher, but in your repo the ACL version result is higher.)
Hi @liyongqi2002, thanks for the reminder. We have some problems with how the Few-NERD dataset version is presented. I will fix it as soon as possible.
Thanks for your reply. So the results that can be compared now are those in the second table (using the 500MB episode data, which is also what is reported at https://paperswithcode.com/sota/few-shot-ner-on-few-nerd-inter)? Is my understanding correct?
Yeah, you can compare against the results in the second table, which use the 500MB episode data.
@dongguanting I'm also trying the code but it asks me
Hi @GenVr, you can download the arXiv v6 version of the Few-NERD dataset by following the script in their repo: https://github.com/thunlp/Few-NERD/blob/main/data/download.sh#L20-L22.
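For reference, a minimal sketch of fetching the episode data through the Few-NERD repo's own download script; the `episode-data` argument follows their README, while the exact unzip path below is an assumption:

```bash
# Fetch the Few-NERD episode data via the official download script
# (the relevant commands are at data/download.sh lines 20-22 in thunlp/Few-NERD).
git clone https://github.com/thunlp/Few-NERD.git
cd Few-NERD
bash data/download.sh episode-data          # downloads the episode data archive
unzip -o data/episode-data.zip -d data/     # extraction path is an assumption
```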
Hi @iofu728, it seems the open-source "episode-data" is the arXiv version of Few-NERD? I found that the reproduced results are very different from those in the paper; maybe you used the ACL version of Few-NERD in the paper?