…mentation cleanup.
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct CacheEntry<V> {
    value: V,
    cached_at: Option<SystemTime>,
what if we merge cached_at and ttl into cached_until?
As I said above, cached_at does get set by the cache. But I guess it might be confusing if you can set it here... The reason for making it configurable here is in case consumers want manual control, or want to be able to test it. In other words, cached_at is never useful without ttl, but ttl is useful even without cached_at. Also, I don't really think exposing cached_at is even useful to consumers - it's really more for internal testing of the expiration functionality. Still, I think what I'll do is have these methods:
pub fn new(value: V) -> Self
pub fn expires_after(value: V, ttl: Duration) -> Self
pub fn expires_at(value: V, ttl: Duration, cached_at: SystemTime) -> Self
pub(crate) fn ensure_cached_at(cached_at: SystemTime) // checks for pre-existing cached_at value
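
A rough sketch of how those constructors could fit together, assuming the struct also carries a ttl field, that expires_at derives the deadline from cached_at plus ttl, and that ensure_cached_at takes &mut self; the field names and bodies are illustrative, not the actual implementation:

use std::time::{Duration, SystemTime};

#[derive(Clone, Debug, PartialEq, Eq)]
pub struct CacheEntry<V> {
    value: V,
    cached_at: Option<SystemTime>,
    ttl: Option<Duration>,
}

impl<V> CacheEntry<V> {
    /// An entry that never expires.
    pub fn new(value: V) -> Self {
        Self { value, cached_at: None, ttl: None }
    }

    /// An entry that expires `ttl` after the cache stamps it with a cached_at time.
    pub fn expires_after(value: V, ttl: Duration) -> Self {
        Self { value, cached_at: None, ttl: Some(ttl) }
    }

    /// An entry with an explicit cached_at, mainly for manual control and tests.
    pub fn expires_at(value: V, ttl: Duration, cached_at: SystemTime) -> Self {
        Self { value, cached_at: Some(cached_at), ttl: Some(ttl) }
    }

    /// Stamps cached_at only if the entry does not already carry one.
    pub(crate) fn ensure_cached_at(&mut self, cached_at: SystemTime) {
        self.cached_at.get_or_insert(cached_at);
    }
}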
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct CacheEntry<V> {
    value: V,
    cached_at: Option<SystemTime>,
SystemTime is not very reliable, how about Instant?
Instant is limited to in-process measurements, while this abstraction needs to work with timestamps that are stored externally - for example, when a Redis entry was stored.
In that case we should provide a choice to the user, because use cases might be different. There are plenty of cases where a cache is used only for process-internal data.
It creates an API split, though. I would argue that for local data one uses a TTL anyway. Btw, would it make more sense to have an expires_at field? What capabilities does Redis expose for this?
In Redis you set the TTL as a duration in seconds; it doesn't expose a cached-at timestamp of any kind. I don't really have any opinion on Instant vs SystemTime, other than that I don't like that SystemTime::duration_since() is fallible.
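
For context, the fallibility in question: SystemTime::duration_since returns a Result because the system clock can move backwards. A minimal sketch of how an expiry check might absorb that, with is_expired as a hypothetical helper rather than part of the crate:

use std::time::{Duration, SystemTime};

// Treats a clock that moved backwards (now earlier than cached_at) as not expired.
fn is_expired(cached_at: SystemTime, ttl: Duration, now: SystemTime) -> bool {
    match now.duration_since(cached_at) {
        Ok(age) => age >= ttl,
        Err(_) => false,
    }
}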
crates/cachelon/src/telemetry/ext.rs (outdated)
/// Extension trait for timing async operations.
pub trait ClockExt {
    /// Times an async operation and returns both the result and elapsed duration.
    fn timed_async<F, R>(&self, f: F) -> impl Future<Output = TimedResult<R>>
imho, this should stay internal to the crate. If these APIs are useful enough, we can just add them to tick.
Yeah, it should have been internal, my mistake. I could see this being useful for the new telemetry crate (@Vaiz), but I'll leave them here until someone else has a use case.
BTW, this was doubling the size of the futures, so I added a small struct that implements Future to do the timing. It's much more efficient now, and might be more useful this way.
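
For illustration, a wrapper future along these lines can record the start time on first poll and hand back the elapsed duration with the output; the Timed name and its fields are assumptions, not the actual Cachelon type:

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};
use std::time::{Duration, Instant};

// Hypothetical wrapper that measures how long the inner future takes to complete.
pub struct Timed<F> {
    inner: F,
    started: Option<Instant>,
}

impl<F> Timed<F> {
    pub fn new(inner: F) -> Self {
        Self { inner, started: None }
    }
}

impl<F: Future> Future for Timed<F> {
    type Output = (F::Output, Duration);

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // SAFETY: `inner` is never moved out of the pinned struct.
        let this = unsafe { self.get_unchecked_mut() };
        let started = *this.started.get_or_insert_with(Instant::now);
        let inner = unsafe { Pin::new_unchecked(&mut this.inner) };
        match inner.poll(cx) {
            Poll::Ready(out) => Poll::Ready((out, started.elapsed())),
            Poll::Pending => Poll::Pending,
        }
    }
}

Compared to wrapping the call in another async block, a wrapper like this only adds an Option<Instant> on top of the inner future's state.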
    S: CacheTier<K, V> + Send + Sync,
{
    async fn get(&self, key: &K) -> Result<Option<CacheEntry<V>>, Error> {
        let timed = self.clock.timed_async(self.inner.get(key)).await;
imho, what's wrong with just having let stopwatch = clock.stopwatch(); and tracking the duration like that? Is the extra layer of timed_async necessary here?
What's wrong with it is that I didn't think of it first. I do like timed_async because otherwise I'd have to write the stopwatch boilerplate in many places; this feels cleaner from the call site. I do think timed_async might be better off using stopwatch in its implementation, though.
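
A sketch of timed_async expressed in terms of a stopwatch, with stand-in Clock and Stopwatch types since the real tick API may look different:

use std::future::Future;
use std::time::{Duration, Instant};

// Stand-ins for the clock/stopwatch types being discussed.
struct Stopwatch(Instant);

impl Stopwatch {
    fn elapsed(&self) -> Duration {
        self.0.elapsed()
    }
}

struct Clock;

impl Clock {
    fn stopwatch(&self) -> Stopwatch {
        Stopwatch(Instant::now())
    }
}

// timed_async built on the stopwatch, so call sites avoid the boilerplate.
async fn timed_async<F: Future>(clock: &Clock, f: F) -> (F::Output, Duration) {
    let stopwatch = clock.stopwatch();
    let out = f.await;
    (out, stopwatch.elapsed())
}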
…lace When variant with WhenBoxed variant. Use stopwatch for timed_async.
Cachelon = Cache + Echelon
TODO more description