RandomGridStack/DRISEStack out of memory for > 200 masks #140
Is there no GPU support available with this implementation? Please let me know if there's any other way to generate more masks without the OOM error. Thanks!
Hi @aagrawal357! Sorry for your troubles. I will have to give this a go again on my development machine to regain an understanding of the resource needs for that example. If I recall correctly, a major influence on RAM consumed is the pixel size of the input image. Are you using the image showcased in the notebook? If not, maybe try a smaller image, or even a crop of an AOI (area of interest) from your original image. As an aside, I would suggest avoiding anything that pushes the process into swap space: even with an SSD, processing can slow so drastically once memory spills into swap that it becomes infeasible to finish. As for GPU utilization, that would only come into the picture with respect to your inferencing model, not specifically the
Hi @Purg, thank you for your response! You were right about the pixel size of the input image. I was using a different image than the notebook, with a pixel size of 2688×1520. I was able to run both DRISE and RandomGrid with >200 masks when I tried an image of pixel size 640×480. Can I ask you a couple of follow-up questions?
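The image-size effect is easy to see with a back-of-the-envelope estimate. A minimal sketch (assuming the mask stack is held in memory as a dense float64 array; the actual storage in the library may differ):

```python
# Rough RAM estimate for a dense stack of masks, assuming one float64
# value per pixel per mask. This is an illustrative approximation, not
# the library's actual memory layout.
def mask_memory_gb(width, height, num_masks, bytes_per_value=8):
    """Approximate memory (GiB) for num_masks dense masks at width x height."""
    return width * height * num_masks * bytes_per_value / 1024**3

# 2688x1520 image, 1200 masks: ~36.5 GiB -- well past 16 GB RAM + 16 GB swap.
print(round(mask_memory_gb(2688, 1520, 1200), 1))  # 36.5
# 640x480 image, 1200 masks: ~2.7 GiB -- fits comfortably.
print(round(mask_memory_gb(640, 480, 1200), 1))    # 2.7
```

Under this assumption, memory grows linearly with both pixel count and mask count, which matches the observed behavior: shrinking the image from 2688×1520 to 640×480 cuts the footprint by more than an order of magnitude.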
Thank you for your help! I appreciate it.
@Purg Good morning! I would really appreciate your response on my questions above. Thank you!
Hi @aagrawal357, sorry for the delayed response! Glad you were able to get DRISE and RandomGrid to run when reducing the image size. In regards to your two questions:
I do not believe there is any fundamental relation between object size and number of masks. However, the grid cell size might need to be adjusted to better account for smaller/larger objects (e.g. you might use
FYI, we ran into a similar issue when trying to apply black-box detection saliency to images/datasets with small objects. As a result, we created the RandomGridStack saliency method, which, instead of partitioning the image into equally sized cells (as done by DRISE), fixes the absolute size of each grid cell. This might be useful if you know something about the average size of objects in your dataset (in pixels) and want to ensure your masks are smaller than that, so you can generate object-level saliency. You can see our example notebook here, which demos saliency map computation on aerial drone imagery from the VisDrone dataset (see the "Generate Saliency
Hope this helps!
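The fixed-cell-size idea can be sketched in a few lines. This is an illustrative toy, not the actual xaitk-saliency RandomGridStack implementation; the function name and parameters here are assumptions:

```python
import numpy as np

def fixed_cell_masks(img_h, img_w, cell_px, num_masks, p_occlude=0.5, seed=0):
    """Toy fixed-cell-size masking: cells are cell_px pixels on a side
    regardless of image size, unlike an n-by-n partition whose cells grow
    with the image. Hypothetical sketch, not the library's API."""
    rng = np.random.default_rng(seed)
    rows = -(-img_h // cell_px)  # ceil division: cells needed to cover height
    cols = -(-img_w // cell_px)  # cells needed to cover width
    # Random per-cell occlusion pattern for each mask.
    grids = rng.random((num_masks, rows, cols)) < p_occlude
    # Upsample each cell to cell_px x cell_px pixels, then trim to image size.
    masks = np.kron(grids, np.ones((cell_px, cell_px), dtype=bool))
    return masks[:, :img_h, :img_w]

masks = fixed_cell_masks(480, 640, cell_px=32, num_masks=4)
print(masks.shape)  # (4, 480, 640)
```

With a fixed `cell_px` chosen below the average object size in pixels, masks can occlude individual small objects rather than whole regions, which is the motivation described above. Storing the stack as booleans (one byte per pixel) also keeps the memory footprint down relative to floats.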
Hi,
I followed the example notebooks ModelComparisonWithSaliency.ipynb and DRISE.ipynb and was able to get RandomGrid and DRISE working with 200 masks. DRISE seems to work much better than RandomGrid at 200 masks. As per the instructions in the RandomGrid notebook, I am trying more masks (1200) for RandomGrid, but the system runs out of memory very quickly and the process fails. Would you know why? I have 16 GB RAM and 16 GB swap space. Is there any way to run it without failing and without increasing the RAM? Let me know if you need any more information about this issue. I would really appreciate your help!
Thank you!