Hello,

I was wondering how you save the data on dump. I am aware that Redis uses both AOF and snapshots to persist a database to a file. I can see in your code that you only use AOF, but maybe I am missing something. How do you handle large amounts of data that have to be dumped? Do you perhaps also save the database as a whole, snapshot-style, the way Redis does?
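For readers unfamiliar with the distinction being asked about, here is a minimal sketch of the two persistence styles. The types and function names are hypothetical illustrations, not code from this repository:

```go
package persistence

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// appendToAOF logs a single write command at the moment it happens.
// The file only ever grows; nothing already written is touched.
func appendToAOF(aof *os.File, cmd string, args ...string) error {
	line := cmd
	for _, a := range args {
		line += " " + a
	}
	_, err := fmt.Fprintln(aof, line)
	return err
}

// saveSnapshot serializes the whole in-memory dataset at one point in
// time, the way a Redis RDB dump does. The previous snapshot can then
// be discarded, so the file size tracks the dataset, not its history.
func saveSnapshot(path string, data map[string]string) error {
	tmp := path + ".tmp"
	f, err := os.Create(tmp)
	if err != nil {
		return err
	}
	w := bufio.NewWriter(f)
	if err := json.NewEncoder(w).Encode(data); err != nil {
		f.Close()
		return err
	}
	if err := w.Flush(); err != nil {
		f.Close()
		return err
	}
	if err := f.Close(); err != nil {
		return err
	}
	// Atomically replace the old snapshot so a crash mid-write
	// never leaves a half-written dump behind.
	return os.Rename(tmp, path)
}
```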
So if I understand it correctly, the AOF file starts from zero and then grows on every change to the data? After 10 years of appends, won't the file be huge? And what about restarting the database: does it have to read that huge file from beginning to end? After 10 years, that could take quite some time to start up.
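The usual way log-based stores address both concerns looks roughly like the sketch below: on startup the log is replayed once into memory, and a periodic rewrite (Redis calls this AOF rewrite) replaces the historical log with one command per live key, so the file size tracks the dataset rather than its full history. This is a generic illustration under those assumptions, not this project's actual implementation:

```go
package persistence

import (
	"bufio"
	"os"
	"strings"
)

// replayAOF rebuilds the in-memory state by reading the log once,
// start to finish; startup time is proportional to the log length.
func replayAOF(path string) (map[string]string, error) {
	data := make(map[string]string)
	f, err := os.Open(path)
	if err != nil {
		if os.IsNotExist(err) {
			return data, nil // no log yet, start empty
		}
		return nil, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 0 {
			continue
		}
		switch fields[0] {
		case "SET":
			if len(fields) >= 3 {
				data[fields[1]] = fields[2]
			}
		case "DEL":
			if len(fields) >= 2 {
				delete(data, fields[1])
			}
		}
	}
	return data, sc.Err()
}

// rewriteAOF compacts the log: instead of every command ever issued,
// it writes one SET per key that still exists, then swaps the files.
// Run periodically, this keeps the log bounded by the dataset size,
// so ten years of appends do not mean a ten-year replay on restart.
func rewriteAOF(path string, data map[string]string) error {
	tmp := path + ".rewrite"
	f, err := os.Create(tmp)
	if err != nil {
		return err
	}
	w := bufio.NewWriter(f)
	for k, v := range data {
		if _, err := w.WriteString("SET " + k + " " + v + "\n"); err != nil {
			f.Close()
			return err
		}
	}
	if err := w.Flush(); err != nil {
		f.Close()
		return err
	}
	if err := f.Close(); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}
```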