Hi community,
We currently have a problem migrating the schema of a RavenDB database from a .NET WebAPI service running in Kubernetes, using the RavenMigrations package. While the migrations are running, the service doesn't respond to the Kubernetes liveness probe; the kubelet then kills the container, which stops the migrations mid-way and leaves the database in a broken state. After some research, a couple of ideas came up:
We could relax the liveness probe's timing (a longer initial delay, a higher failure threshold). That's not ideal, though: we don't know ahead of time how large the database is, and therefore how long the migrations will take; on top of that, the size of the worker node the service lands on can vary, which also affects processing time.
We could run the migrations in an init container. Nothing fundamentally speaks against that, but getting it up and running would be a considerable time investment (refactoring the migration code out of the service, etc.).
Does anyone have experience running RavenDB migrations from a client inside Kubernetes and can advise us on the most robust solution? Specifically, we'd want the service to start up and immediately respond to the liveness endpoint, then kick off the migrations in the background. Only once the migrations have finished successfully should the service start responding to the readiness endpoint. That would make us robust to migrations of any size.
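In other words, something like the following minimal sketch built on ASP.NET Core's stock health checks. The type names (MigrationReadinessCheck, MigrationHostedService) and the RunMigrations body are placeholders; the actual RavenMigrations invocation would go where indicated:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Diagnostics.HealthChecks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Diagnostics.HealthChecks;
using Microsoft.Extensions.Hosting;
using System.Threading;
using System.Threading.Tasks;

var builder = WebApplication.CreateBuilder(args);

// One shared instance: the readiness check reads the flag,
// the background migration job sets it.
builder.Services.AddSingleton<MigrationReadinessCheck>();
builder.Services.AddHealthChecks()
    .AddCheck<MigrationReadinessCheck>("migrations", tags: new[] { "ready" });
builder.Services.AddHostedService<MigrationHostedService>();

var app = builder.Build();

// Liveness: runs no checks at all, so it answers 200 as soon as
// the server is listening, regardless of migration progress.
app.MapHealthChecks("/health/live", new HealthCheckOptions
{
    Predicate = _ => false
});

// Readiness: only the "ready"-tagged checks, i.e. healthy once
// the migrations have completed.
app.MapHealthChecks("/health/ready", new HealthCheckOptions
{
    Predicate = r => r.Tags.Contains("ready")
});

app.Run();

// Reports Unhealthy until the migrations have finished.
public class MigrationReadinessCheck : IHealthCheck
{
    private volatile bool _completed;

    public bool MigrationsCompleted
    {
        get => _completed;
        set => _completed = value;
    }

    public Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context, CancellationToken cancellationToken = default)
        => Task.FromResult(_completed
            ? HealthCheckResult.Healthy("Migrations completed.")
            : HealthCheckResult.Unhealthy("Migrations still running."));
}

// Runs the migrations off the startup path and flips the flag when done.
public class MigrationHostedService : BackgroundService
{
    private readonly MigrationReadinessCheck _readiness;

    public MigrationHostedService(MigrationReadinessCheck readiness)
        => _readiness = readiness;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // StartAsync only returns once ExecuteAsync first yields, so the
        // synchronous migration work is pushed onto the thread pool to
        // avoid blocking host startup (and thus the liveness endpoint).
        await Task.Run(RunMigrations, stoppingToken);
        _readiness.MigrationsCompleted = true;
    }

    private void RunMigrations()
    {
        // Placeholder: resolve and run the RavenMigrations runner here.
    }
}
```

The Kubernetes liveness probe would then point at /health/live and the readiness probe at /health/ready, so the kubelet never restarts the pod over a long migration, and traffic only arrives once the schema is current. We'd still have to decide what should happen when a migration fails (stay alive but never become ready, or crash and let Kubernetes restart the pod).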
Thanks for any advice