
Allow supplying context to cache methods #490

Open
merlinfuchs wants to merge 10 commits into disgoorg:master from merlinfuchs:cache-context

Conversation

@merlinfuchs
Contributor

When implementing a custom cache that is backed by some IO-bound resource (e.g. a database), it would be nice to have the option to supply a context.Context to each request. The default cache implementation can ignore the context entirely, as it isn't IO-bound.

This isn't a breaking change, as the options are optional. Only users who already have a custom cache implementation would have to update it to match the new interface.

Comment thread on cache/request_config.go (Outdated)

import "context"

type requestConfig struct {
	ctx context.Context // carries cancellation/deadline for IO-bound cache implementations
}
Contributor Author


I wasn't sure about the naming. I guess these are cache "requests" after all so it makes sense?

Member


I don't think I am a fan of request, maybe access works better?
ideally it would just be config but that's already taken lol

Member

@topi314 left a comment


I think the idea here is very much valid.
But I think even the default cache could make use of them via https://github.com/sasha-s/go-csync
That package is already a dependency, so swapping out all the mutexes for that one should be easy enough

@merlinfuchs
Contributor Author

merlinfuchs commented Nov 12, 2025

But I think even the default cache could also make use of them via https://github.com/sasha-s/go-csync

I thought about this but wasn't sure if the performance implications are worth it. According to the benchmarks it's only ~10% slower, but I'm still not sure it's worth it. The locks in the default cache implementations should really never stay locked for long ...

Happy to add it if you think it makes sense tho!

@topi314
Member

topi314 commented Nov 12, 2025

But I think even the default cache could also make use of them via sasha-s/go-csync

I thought about this but wasn't sure if the performance implications are worth it. According to the benchmarks it's only ~10% slower, but I'm still not sure it's worth it. The locks in the default cache implementations should really never stay locked for long ...

Happy to add it if you think it makes sense tho!

the iterator methods hold the mutex for the duration of the iteration, which can lead to gateway disconnects, so maybe working around that with a default timeout on all internal cache access could help. The only issue with that is that we don't return any error right now which could convey that the context timed out hmm

@merlinfuchs
Contributor Author

Returning an error from all the cache methods is also something that would be nice for IO-bound implementations 🤔 I guess that might require quite some refactoring in other places tho

@merlinfuchs
Contributor Author

the iterator methods hold the mutex for the duration of the iteration, which can lead to gateway disconnects, so maybe working around that with a default timeout on all internal cache access could help

That's an interesting point. I guess there is no way around that when using iterators, but it seems dangerous, especially if the user isn't aware of it 🤔
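One common mitigation, sketched below with assumed types (not disgo code): copy the values while holding the lock, then iterate over the snapshot. The callback gets a point-in-time view and can block as long as it likes, at the cost of one allocation per iteration.

```go
package main

import (
	"fmt"
	"sync"
)

// snapshotCache demonstrates snapshot iteration: ForEach holds the
// mutex only long enough to copy the values, so a slow callback can't
// block writers (or, in the gateway case, incoming events).
type snapshotCache struct {
	mu    sync.Mutex
	items map[string]string
}

func (c *snapshotCache) ForEach(fn func(v string)) {
	c.mu.Lock()
	snapshot := make([]string, 0, len(c.items))
	for _, v := range c.items {
		snapshot = append(snapshot, v)
	}
	c.mu.Unlock()

	// fn runs outside the lock and may block arbitrarily long.
	for _, v := range snapshot {
		fn(v)
	}
}

func main() {
	c := &snapshotCache{items: map[string]string{"1": "a", "2": "b"}}
	n := 0
	c.ForEach(func(string) { n++ })
	fmt.Println(n)
}
```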

@topi314
Member

topi314 commented Nov 12, 2025

the iterator methods hold the mutex for the duration of the iteration, which can lead to gateway disconnects, so maybe working around that with a default timeout on all internal cache access could help

That's an interesting point. I guess there is no way around that when using iterators, but it seems dangerous, especially if the user isn't aware of it 🤔

documentation should definitely get better around those gotchas lol.

depending on your use case it might just be better to skip the cache interface tbh. Maybe a remote cache is one of those cases.

I don't think it makes much sense to add a context without an error

Comment on lines +115 to +124
 	if err := client.Caches.AddThreadMember(addedMember.ThreadMember); err != nil {
 		client.Logger.Error("failed to add thread member to cache", slog.Any("err", err), slog.String("thread_id", event.ID.String()), slog.String("user_id", addedMember.UserID.String()))
 	}
 	if err := client.Caches.AddMember(addedMember.Member); err != nil {
 		client.Logger.Error("failed to add member to cache", slog.Any("err", err), slog.String("guild_id", event.GuildID.String()), slog.String("user_id", addedMember.UserID.String()))
 	}

 	if addedMember.Presence != nil {
-		client.Caches.AddPresence(*addedMember.Presence)
+		if err := client.Caches.AddPresence(*addedMember.Presence); err != nil {
+			client.Logger.Error("failed to add presence to cache", slog.Any("err", err), slog.String("guild_id", event.GuildID.String()), slog.String("user_id", addedMember.UserID.String()))
Contributor Author

@merlinfuchs merlinfuchs Dec 15, 2025


This isn't very pretty ... There is no way to recover from cache errors here, because the whole library is built on the assumption that cache operations never fail ...
Seeing how much noise this already adds to the code, I have to wonder whether it's actually worth it or if there is a better approach.

Maybe it would be an option to register a central error handler that is called for any cache error so the other parts of the code don't have to handle it? 🤔
