Hello, the inference code seems to have rather severe bottlenecks: CUDA usage is only around 25% during inference.
RIFE and other interpolation networks usually sit at 80-95%.
Are any optimizations planned to reduce this overhead?
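(For reference, a minimal way to sample utilization while the test script runs, assuming a single GPU and the stock `nvidia-smi` CLI; run it from a second terminal:)

```python
import subprocess
import time

def sample_gpu_utilization(duration_s: float = 10.0, interval_s: float = 0.5) -> float:
    """Poll nvidia-smi and return the average GPU utilization (%) over a window."""
    samples = []
    end = time.time() + duration_s
    while time.time() < end:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=utilization.gpu",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout
        samples.append(float(out.strip().splitlines()[0]))  # first GPU only
        time.sleep(interval_s)
    return sum(samples) / len(samples)

if __name__ == "__main__":
    print(f"avg GPU utilization: {sample_gpu_utilization():.1f}%")
```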
Hi @n00mkrad,
How does the inference time in your experiment compare to the inference time reported in our paper? Was it at a comparable level?
Since every interpolated frame is saved to disk during the test phase, CUDA usage can be lower at test time than during training because of the data I/O. In addition, XVFI-Net is lightweight in parameters, so the data I/O is not negligible relative to the compute, which can further reduce CUDA usage; we have observed this even with the data pipeline and the network feedforward running in parallel.
We had not planned to reduce this overhead, but we will consider your suggestions if you can propose a feasible way to do so.
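As a rough illustration of the kind of overlap we mean, a background writer thread can absorb the disk writes while the GPU proceeds to the next frame pair. The sketch below is hypothetical and not our actual test code: `DummyInterp` and the random tensors stand in for XVFI-Net and real decoded inputs, and frames are saved as `.npy` just to keep it self-contained (real code would use an image writer such as `cv2.imwrite`).

```python
import queue
import threading

import numpy as np
import torch

class DummyInterp(torch.nn.Module):
    """Placeholder for the interpolation network (e.g. XVFI-Net)."""
    def forward(self, f0, f1):
        return (f0 + f1) / 2

def writer(q: queue.Queue) -> None:
    while True:
        item = q.get()
        if item is None:  # sentinel: all frames have been queued
            break
        path, frame = item
        np.save(path, frame)  # disk write happens off the inference thread
        q.task_done()

def main() -> None:
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = DummyInterp().to(device).eval()
    q: queue.Queue = queue.Queue(maxsize=16)  # bounded so memory stays in check
    t = threading.Thread(target=writer, args=(q,), daemon=True)
    t.start()

    with torch.no_grad():
        for i in range(8):  # stand-in for the real frame-pair loop
            f0 = torch.rand(1, 3, 256, 256, device=device)
            f1 = torch.rand(1, 3, 256, 256, device=device)
            out = model(f0, f1)
            frame = out.squeeze(0).permute(1, 2, 0).cpu().numpy()  # CHW -> HWC
            q.put((f"out_{i:06d}.npy", frame))  # GPU can start the next pair
                                                # while this frame is written

    q.put(None)  # signal the writer to finish
    t.join()

if __name__ == "__main__":
    main()
```

Even with this overlap, the writes can still dominate when the network feedforward itself is cheap, which matches what we observed.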