feat: adds ggml_pad_reflect_1d
#850
base: master
Conversation
It looks great, @balisujohn. Thanks for starting the implementation! I'm wondering whether we should implement a …
I added a test in … I think this is ready for review.
A test I wasn't sure how to add was "check if it correctly fails when called with a pad length not shorter than ne0 of the input tensor."
Or, and w.r.t. the …
Some similar comments as in the … Regarding the …
Force-pushed (…ad is shorter than existing padded dimension) from dce272a to 120ba25
OK ready for review again : ^) |
Hello @slaren @ggerganov ! Do you plan to merge this operation? This would be super helpful for Encodec.cpp to support the Metal backend. |
Hey @PABannier, this PR needs to be updated to the latest …
Hello @balisujohn ! Do you plan to finish this PR soon? Otherwise, I'm happy to finish yours if you give me the permission to push on your fork. |
This adds an op that mirrors the behavior of PyTorch's ReflectionPad1d operation.
Implementations for CUDA and CPU are provided.
Tests still need to be added, so this is marked as a draft PR for now.
The CPU version of the op was derived, with permission, from @PABannier's implementation discussed in #819