Hi there,
I am successfully running both the mini and mega models, but when I try to load Mega_full with
python3 app.py --port 8080 --model_version Mega_full
it gets stuck and never makes it past:
"--> Starting DALL-E Server. This might take up to two minutes."
I left it running for more than 30 minutes and nothing happened; the other models usually load in under 2 minutes.
I am running this on an AWS g4dn.2xlarge instance with a Tesla T4.
Is anybody else dealing with the same issue?
Are you sure it's not loading? The verbosity is quite low, and for much of the process it doesn't say what it's doing. There's a short delay before it actually starts loading the model, and loading involves a fair amount of data downloading and unpacking, all of which is essentially silent: you'll see RAM usage slowly creep up for a while before there's a big dump to VRAM. After that, and before the server reports success, it runs one warm-up generation. All of this is covered by "This might take up to two minutes."
On my local systems I've had a couple of machines where the first load took upwards of an hour, staying silent the whole time and not appearing to do much. I'm not sure where the bottleneck is; while I don't have a fast GPU, I do have decently fast internet, CPU, RAM, and storage, which should all speed up the initial load significantly, yet Mega_full can still take forever without serious tweaks to the code.
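Since the load phase is silent, one quick way to tell a slow load from a real hang is to watch whether the server process's memory footprint keeps growing. A minimal, Linux-only sketch (stdlib only; the function names and 10-second polling interval are my own, not part of this repo):

```python
import os
import time


def rss_kib(pid: int) -> int:
    """Resident set size of `pid` in KiB, read from /proc (Linux only)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                # Line looks like: "VmRSS:   123456 kB"
                return int(line.split()[1])
    return 0


def watch(pid: int, interval: float = 10.0) -> None:
    """Print the process's RSS every `interval` seconds until it exits."""
    while os.path.exists(f"/proc/{pid}"):
        print(f"RSS: {rss_kib(pid) / 1024:.0f} MiB")
        time.sleep(interval)
```

Run `watch(<pid of app.py>)` from a second shell (find the PID with `pgrep -f app.py`). If the RSS keeps creeping up, the download/unpack is still making progress and it's worth waiting; if it flatlines for a long stretch, the process may genuinely be stuck.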