Hi. I wrote a little app that has a search field and a list of items. The items are filtered based on the search field contents, as the user types. If I load it with say 30,000 items, yew's DOM handling seems to slow down enough to be noticeable. I experimented with yew-virtual-scroller to only render a small fraction of the items, but now it turns out constructing an almost-30,000-item Vec on every keypress is also too slow.
`<VirtualScroller>` only uses `.len()` and a slice of the visible range (yew-virtual-scroller/src/lib.rs, line 191 at c26bd46):

```rust
(&self.props.items[cw.visible_range.clone()]).into(),
```
Could you please consider an API where the items are passed as something that creates new iterators (always starting from the first item, not clones of one and the same iterator), and the scroller then calls `size_hint` and `skip` as appropriate?
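To illustrate, here is a minimal sketch of what I have in mind. None of these names are from yew-virtual-scroller; `visible_items` and its `make_iter` parameter are hypothetical. The point is that the scroller accepts a closure producing a fresh iterator, then uses `skip`/`take` to reach the visible window, so the caller never has to materialize the full filtered list:

```rust
use std::ops::Range;

// Hypothetical helper: render only the visible window by asking for a
// fresh iterator each time and skipping to the start of the range.
// `size_hint` on the fresh iterator could inform the scrollbar height.
fn visible_items<I, F>(make_iter: F, visible_range: Range<usize>) -> Vec<I::Item>
where
    I: Iterator,
    F: Fn() -> I,
{
    let iter = make_iter(); // always starts at the first item
    iter.skip(visible_range.start)
        .take(visible_range.len())
        .collect()
}

fn main() {
    // Filtering on the fly: no 30,000-element Vec is built per keypress,
    // only the handful of items in the visible window.
    let data: Vec<u32> = (0..30_000).collect();
    let window = visible_items(|| data.iter().filter(|x| *x % 3 == 0), 10..15);
    assert_eq!(window, vec![&30, &33, &36, &39, &42]);
}
```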
I believe it would be reasonably cheap for me to iterate my original items and filter on the fly, even if I couldn't implement `nth` to speed up the skip. And implementing `nth` would just be a matter of caching, if needed; my data source is well suited to discarding (parts of) such a cache on changes.
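For concreteness, the caching idea could look something like the sketch below. This is not actual yew-virtual-scroller code and the type names are made up; it just shows that once the indices of matching items are cached, skipping to the nth filtered item is an O(1) lookup, and any change to the data or the query simply drops the cache:

```rust
// Hypothetical data source that caches which indices pass the filter.
struct FilteredSource {
    items: Vec<String>,
    query: String,
    cache: Option<Vec<usize>>, // indices of items matching `query`
}

impl FilteredSource {
    fn new(items: Vec<String>) -> Self {
        Self { items, query: String::new(), cache: None }
    }

    // Any change to the query (or the data) discards the cache.
    fn set_query(&mut self, query: &str) {
        self.query = query.to_string();
        self.cache = None;
    }

    // Rebuild the index cache lazily, only when it is missing.
    fn ensure_cache(&mut self) {
        if self.cache.is_none() {
            let q = self.query.as_str();
            self.cache = Some(
                self.items
                    .iter()
                    .enumerate()
                    .filter(|(_, s)| s.contains(q))
                    .map(|(i, _)| i)
                    .collect(),
            );
        }
    }

    // Filtered length: what a `size_hint` could report exactly.
    fn len(&mut self) -> usize {
        self.ensure_cache();
        self.cache.as_ref().unwrap().len()
    }

    // nth filtered item without re-scanning from the start:
    // jump straight to the cached source index.
    fn nth(&mut self, n: usize) -> Option<&String> {
        self.ensure_cache();
        let idx = *self.cache.as_ref().unwrap().get(n)?;
        Some(&self.items[idx])
    }
}

fn main() {
    let mut src = FilteredSource::new(vec![
        "apple".into(), "banana".into(), "apricot".into(),
    ]);
    src.set_query("ap");
    assert_eq!(src.len(), 2);
    assert_eq!(src.nth(1), Some(&"apricot".to_string()));
}
```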