The documentation doesn't really cover how expiration should be handled in Kafka. For example, if all topics are set to expire in 24 hours, new consumers don't have most of the information they need to bootstrap. A simple solution is to restart the collector every 24 hours. Maybe openbmpd could force a rolling disconnect instead? Or am I missing another solution?
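For reference, the 24-hour expiry described above corresponds to a per-topic `retention.ms` of 86400000. A hypothetical example of setting that on one of the parsed-message topics (the topic name and broker address are illustrative, not prescribed by the OpenBMP docs):

```shell
# Set a 24-hour retention on an example OpenBMP topic.
# Messages older than retention.ms become eligible for deletion,
# which is exactly why late-joining consumers cannot bootstrap
# the full RIB state from the topic alone.
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name openbmp.parsed.unicast_prefix \
  --alter --add-config retention.ms=86400000
```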
This is a well-known issue with keeping stateful data in time-series storage (Kafka). Some folks try to use InfluxDB or another TSDB to store BGP data, but they of course hit the same problem when data expires before it is refreshed.
The recommendation is definitely not to restart the collector, router, or peers. That would directly affect the network.
Instead, the recommendation is to have a fault-tolerant, correctly in-sync consumer that is always running. This consumer can then sync other consumers with the RIB table for whichever peers are of interest to them. After syncing, a new consumer can get the offset from the root consumer if it wants to consume live updates.
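A minimal sketch of that root-consumer pattern, with Kafka stubbed out by an in-memory log so the protocol itself is clear. The class and method names (`RootConsumer`, `SyncedConsumer`, `snapshot`) are illustrative, not part of any OpenBMP API; a real implementation would apply the same logic to messages from a Kafka consumer.

```python
class RootConsumer:
    """Always-running consumer that maintains the full RIB state.

    Because it never stops, it sees every update before retention
    expires it, so its table stays complete even though old Kafka
    segments are gone.
    """
    def __init__(self):
        self.rib = {}     # prefix -> attributes
        self.offset = 0   # offset of the last update applied

    def apply(self, offset, update):
        # update is (action, prefix, attrs); a withdraw drops the prefix.
        action, prefix, attrs = update
        if action == "announce":
            self.rib[prefix] = attrs
        elif action == "withdraw":
            self.rib.pop(prefix, None)
        self.offset = offset

    def snapshot(self):
        # New consumers bootstrap from this instead of replaying
        # expired segments: current table plus the offset to resume from.
        return dict(self.rib), self.offset


class SyncedConsumer:
    """Late-joining consumer: syncs from the root, then goes live."""
    def __init__(self, root):
        self.rib, self.offset = root.snapshot()

    def consume_live(self, log):
        # Apply only updates newer than the snapshot offset, so
        # nothing is double-applied and nothing is missed.
        for offset, update in log:
            if offset <= self.offset:
                continue
            action, prefix, attrs = update
            if action == "announce":
                self.rib[prefix] = attrs
            else:
                self.rib.pop(prefix, None)
            self.offset = offset
```

The key design point is that the snapshot and the offset are handed over together, so the new consumer can seek the real Kafka consumer to that offset and pick up live updates without a gap or overlap.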