JWT cache is not purged? #3788

Open
steve-chavez opened this issue Nov 16, 2024 · 8 comments · May be fixed by #3801

steve-chavez commented Nov 16, 2024

Problem

The Data.Cache documentation says:

Note that items are not purged automatically in the background when they expire. You have to manually call lookup to purge a single item, or call purgeExpired to purge all expired items.

But neither purgeExpired nor purge is called anywhere in the Auth module.
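
A minimal standalone sketch (not PostgREST code) illustrating the documented behavior of the cache package's Data.Cache API: an expired entry stays in the cache until a lookup touches it or purgeExpired runs.

import           Control.Concurrent (threadDelay)
import qualified Data.Cache         as C
import           System.Clock       (TimeSpec (..))

main :: IO ()
main = do
  -- entries expire 1 second after insertion
  cache <- C.newCache (Just (TimeSpec 1 0)) :: IO (C.Cache String String)
  C.insert cache "token-a" "claims-a"
  C.insert cache "token-b" "claims-b"
  threadDelay 2000000        -- wait 2 seconds; both entries are now stale
  C.size cache >>= print     -- still prints 2: nothing purges in the background
  C.purgeExpired cache       -- the explicit purge the docs mention
  C.size cache >>= print     -- now prints 0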

steve-chavez commented Nov 16, 2024

I've gotten a report that memory usage keeps increasing and is never freed, but I wasn't able to reproduce this. I tried:

PGRST_JWT_CACHE_MAX_LIFETIME=30000 postgrest-with-postgresql-16 -f test/spec/fixtures/load.sql postgrest-run

And monitoring the memory usage with:

while ps -p $(pgrep postgrest | tail -n 1) --no-headers --format "etime %mem rss"; do sleep 5; done

While repeatedly doing:

curl localhost:3000/authors_only -H "Authorization: Bearer $(postgrest-gen-jwt --exp 100 postgrest_test_author)"

In theory this should show the PostgREST process consuming more memory, but somehow it doesn't.

steve-chavez added the needs-repro label and removed the bug label Nov 16, 2024

steve-chavez commented Nov 16, 2024

Another option would be to use the postgrest-profiled-run nix command we have. Then add a test in test/memory/memory-tests.sh to ensure this doesn't happen again.

taimoorzaeem self-assigned this Nov 17, 2024
@taimoorzaeem

Another way could be to log the cache size using the size function from Data.Cache.
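
A minimal sketch of that idea, assuming a getJwtCache accessor on AppState (the same accessor used in the snippets below) and using putStrLn as a stand-in for whatever logging PostgREST actually does:

logJwtCacheSize :: AppState -> IO ()
logJwtCacheSize appState = do
  n <- C.size (getJwtCache appState)            -- Data.Cache.size, imported qualified as C
  putStrLn ("jwt cache entries: " <> show n)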

@taimoorzaeem

I tried tracing the cache size by adding this trace in App.hs:

body <- lift $ Wai.strictRequestBody req

trace (show (unsafePerformIO $ C.size (getJwtCache appState))) (return ()) -- here
 
let jwtTime = if configServerTimingEnabled then Auth.getJwtDur req else Nothing

This printed the number of entries in the cache every time I ran this:

curl localhost:3000/authors_only -H "Authorization: Bearer $(postgrest-gen-jwt --exp 10 postgrest_test_author)"

The trace showed an increasing number of entries even after the tokens had expired.

1
2
.
.
30
31 # increases forever

I am not sure if using unsafePerformIO is a reliable way to reproduce this error. However, calling purgeExpired in Auth.hs:

checkCache <- C.lookup (getJwtCache appState) token
authResult <- maybe parseJwt (pure . Right) checkCache
C.purgeExpired (getJwtCache appState) -- here

removes the expired items, and this can be confirmed by sending the above curl request again, which shows the size shrinking once the token has actually expired (10 seconds in this case). @steve-chavez Does this confirm that this is a bug?
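
Side note: the same count can be read without unsafePerformIO. A sketch, assuming the surrounding handler runs over IO as the lift $ Wai.strictRequestBody req line above suggests:

cacheSize <- lift $ C.size (getJwtCache appState)   -- plain IO, no unsafePerformIO
traceM (show cacheSize)                              -- Debug.Trace.traceM; debug-only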

@steve-chavez

@taimoorzaeem Yes, that's a good repro.

Thinking more about how to test this: we could add a pgrst_jwt_cache_size metric, do some authed requests, validate that the size increases, wait for the exp time, and then verify the size has decreased or gone to 0.
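
A rough sketch of what registering such a metric could look like, assuming the prometheus-client package on the Haskell side; the helper names are made up for illustration and are not PostgREST's actual Metrics.hs:

{-# LANGUAGE OverloadedStrings #-}
import qualified Prometheus as Prom

-- register a gauge for the new metric
registerJwtCacheSizeGauge :: IO Prom.Gauge
registerJwtCacheSizeGauge =
  Prom.register (Prom.gauge (Prom.Info "pgrst_jwt_cache_size" "Number of entries in the JWT cache"))

-- update it with the value returned by Data.Cache.size whenever the cache changes
setJwtCacheSizeGauge :: Prom.Gauge -> Int -> IO ()
setJwtCacheSizeGauge g n = Prom.setGauge g (fromIntegral n)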

@taimoorzaeem

Thinking more about how to test this. We could add a pgrst_jwt_cache_size metric,

@steve-chavez I guess having a cache metric would not be possible because metrics are initialized before AppState. Since the cache is part of AppState, we can't directly access it in the Metrics.hs module. In other words, Metrics.hs can't depend on AppState.hs because AppState.hs already depends on Metrics.hs.

Maybe we can test this in the io-tests? Also, how about fixing the bug right away and then verifying the behavior?

@steve-chavez

I don't think there's a circular dependency problem there; we already capture pool metrics, for example, and the pool is only contained in AppState. This is one of the reasons the Observation type was added (for indirection); you'd just need to add a new constructor for this case.
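
A very rough sketch of that indirection, with hypothetical constructor and function names (not PostgREST's actual definitions): AppState emits an Observation value and the metrics side pattern-matches on it, so Metrics.hs never has to import AppState.hs.

data Observation
  = PoolRequest                -- stand-in for an existing constructor
  | JwtCacheSize Int           -- hypothetical new constructor carrying the cache size

type ObservationHandler = Observation -> IO ()

-- AppState side: report the size after touching the cache
reportJwtCacheSize :: ObservationHandler -> Int -> IO ()
reportJwtCacheSize observer n = observer (JwtCacheSize n)

-- Metrics side: map observations to metric updates
observationMetrics :: Observation -> IO ()
observationMetrics (JwtCacheSize n) = putStrLn ("pgrst_jwt_cache_size = " <> show n)
observationMetrics _                = pure ()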

Also, how about fixing the bug right away and then verifying the behavior?

Sure, go ahead.

steve-chavez added the bug label and removed the needs-repro label Nov 19, 2024
@taimoorzaeem

I don't think there's a circular dependency problem there, we already capture Pool metrics for example, and the pool is only contained on AppState. This is one of the reasons the Observation type was added (for indirection)

Oh, I understand now. The pgrst_jwt_cache_size metric will be maintained separately from the cache in AppState (although the size value would be the same), which is why we have Observation. Thanks for clearing that up.

taimoorzaeem linked a pull request Nov 23, 2024 that will close this issue