
Very inefficient inference #7

Open
n00mkrad opened this issue Aug 25, 2021 · 1 comment

Comments

@n00mkrad

Hello, the inference code seems to have rather severe bottlenecks: CUDA usage is only around 25%.

[screenshot: GPU monitor showing roughly 25% CUDA utilization]

RIFE and other frame interpolation networks usually run at 80-95% CUDA usage.

Are any optimizations planned to reduce this overhead?
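For reference, this is roughly how the gap can be quantified in a generic PyTorch inference loop; `model`, `frames`, and `save_frame` are placeholders, not this repo's actual code:

```python
import time
import torch

# Hypothetical sketch: time the GPU forward pass alone vs. the full loop,
# to see how much of the wall clock is spent outside CUDA kernels.
def profile_loop(model, frames, save_frame):
    compute_time = 0.0
    start_total = time.perf_counter()
    with torch.no_grad():
        for i in range(len(frames) - 1):
            torch.cuda.synchronize()               # flush pending GPU work
            t0 = time.perf_counter()
            out = model(frames[i], frames[i + 1])  # interpolate one frame
            torch.cuda.synchronize()               # wait for kernels to finish
            compute_time += time.perf_counter() - t0
            save_frame(out, i)                     # disk I/O happens here
    total = time.perf_counter() - start_total
    print(f"GPU compute: {compute_time:.2f}s of {total:.2f}s total "
          f"({100 * compute_time / total:.0f}% busy)")
```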

@hjSim
Collaborator

hjSim commented Aug 27, 2021

Hi @n00mkrad,
How did the inference time in your experiment compare to the inference time reported in our paper? Was it at a reasonable level?

Since every interpolated frame is saved to disk during the test phase, CUDA usage can be lower at test time than during training because of the data input/output process.
In addition, XVFI-Net has lightweight parameters, so the data I/O cost is not negligible relative to the forward pass, which can also reduce CUDA usage (we have observed this even when the data pipeline and the network feedforward run in parallel).
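One way to hide part of that I/O cost would be to write frames from a background thread so disk writes overlap with the next forward pass. A rough sketch under those assumptions (`model` and `frame_pairs` are stand-ins, not the actual XVFI interfaces):

```python
from concurrent.futures import ThreadPoolExecutor
import torch
import torchvision.utils as vutils

# Hypothetical sketch: overlap frame saving with inference so the GPU
# does not sit idle while PNGs are written to disk.
def run_inference(model, frame_pairs, out_dir):
    with ThreadPoolExecutor(max_workers=2) as pool, torch.no_grad():
        for i, (f0, f1) in enumerate(frame_pairs):
            out = model(f0, f1)
            # Copy to CPU first so the tensor is safe to use off-thread,
            # then let the pool write it while the GPU starts the next pair.
            cpu_out = out.detach().cpu()
            pool.submit(vutils.save_image, cpu_out, f"{out_dir}/{i:05d}.png")
```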

We have not planned work to reduce the overhead ourselves, but we will consider your suggestion if you can propose a possible way to do so.
