Hello all, I’m running into a performance issue on Aurelia 1.3.1.
The issue is that I have a repeat.for iterating over a list of 9,000+ items, with 5+ bindings per item. The SPA hangs for ~10 seconds while the HTML in the repeater renders.
The bindings themselves are unimportant to me; I just want to display the data in the arrays. There is no need to observe changes and update the bindings.
I’ve tried using one-time binding, to no avail (little to no change in performance).
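For reference, the one-time forms I tried look roughly like this (a sketch; `items`, `name`, and `code` are placeholder names). Note that one-time binding only skips observer setup; the views and DOM nodes are still created up front, which is presumably why it made little difference here:

```html
<!-- One-time forms in Aurelia 1: the & oneTime binding behavior for
     string interpolation, and the .one-time command for property bindings. -->
<template>
  <div repeat.for="item of items">
    <span>${item.name & oneTime}</span>
    <input value.one-time="item.code">
  </div>
</template>
```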
Pagination is not an option (client wants all data to be on one page)
Infinite scrolling is not an option (user should be able to scroll directly to the bottom of the page)
Current Solution (still not happy with the performance):
A custom repeat strategy that renders the HTML in the repeater in batches.
While this keeps the page responsive, it still takes ~10 seconds for the whole table to load.
Potential Solutions:
Delay the bindings until the element containing the bindings is brought within range of the browser’s viewport
Generate the HTML using vanilla JavaScript (but I would like to use Aurelia’s HTML templates and repeat.for)
Generate the HTML using Aurelia’s templating engine, but somehow insert the data without using bindings
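The first idea above (delaying bindings until an element nears the viewport) could be sketched with an IntersectionObserver. This is a plain-JS sketch, not Aurelia API; `createDeferredRenderer` and `renderRow` are hypothetical names:

```javascript
// Sketch: defer expensive rendering of a row until it nears the viewport.
// `renderRow` is a hypothetical callback that fills in the row's real content;
// until it fires, each row can be a cheap fixed-height placeholder.
function createDeferredRenderer(renderRow, margin = '200px') {
  // In non-browser environments, fall back to rendering eagerly.
  if (typeof IntersectionObserver === 'undefined') {
    return { observe: renderRow, disconnect() {} };
  }
  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        observer.unobserve(entry.target); // render each row at most once
        renderRow(entry.target);
      }
    }
  }, { rootMargin: margin }); // start rendering slightly before the row is visible
  return {
    observe: (el) => observer.observe(el),
    disconnect: () => observer.disconnect(),
  };
}
```

Each placeholder element would be passed to `observe()` after attach; the real bindings/content are only created when the callback fires.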
Any opinions and suggestions to getting better performance would be greatly appreciated!
First, check the page performance with 9,000+ of the same elements as pure HTML, without Aurelia at all. In any event, presenting this huge list in one bulk is poor UX and design.
I should also point out that in the krausest benchmark we have 10,000 items (with multiple bindings each) rendering in about 1.5–1.7 seconds. So it ought to be possible to shave quite a bit of time off those 10 seconds you’re seeing.
Could you share the repeated template you’re using? Perhaps we can spot some low-hanging fruit.
EDIT: having just re-read the title, if those 10 seconds are with 50,000 items then there’s not much you can do besides using a virtual repeater (in which case it would be near-instantaneous) or, in the case of v2, time slicing (in which case you would instantly see some results, but rendering the rest would still take some time).
Other than that, 10 seconds is pretty close to the nominal time it takes to render that much HTML (I believe it could be done in 5 seconds with pure vanilla JS, but that’s still not a great UX).
For performance, however, best avoid plugins and stay close to the metal.
ag-grid would add a lot of overhead — not noticeable in normal use cases, but it could easily add a few extra seconds here. ag-grid has infinite scroll, but the OP said that’s not what they wanted.
Virtual repeater is definitely the way to go with these numbers.
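For reference, with the aurelia-ui-virtualization plugin the template change is just a different repeat attribute (a sketch; `items` and `name` are placeholder names, and the plugin assumes a fixed row height):

```html
<!-- virtual-repeat.for only keeps the rows near the viewport in the DOM. -->
<template>
  <div virtual-repeat.for="item of items" style="height: 40px">
    ${item.name}
  </div>
</template>
```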
Thank you all for your replies. The solution I am working with now uses a custom repeater.
I created a repeater that batches the rendering of a repeat.for so it does not hold up the UI while it renders.
I also created a repo to test the performance of rendering many tables in a repeat.for, which you can find here
Some notes:
I could not use the virtual-repeater because the heights of the elements I am repeating are not constant
Rendering 500 tables, each with 10 rows and 10 columns, with repeat.for takes ~2 s (ranging from 1.5 s to 2.5 s)
I am using custom elements that also use replaceable parts (each table element is a custom component)
I have nested repeat.fors that look something like this: repeat.for(100 elements) -> repeat.for(1-5 elements) -> repeat.for(1-5 elements) -> repeat.for(5-1000 elements). In the worst case that nesting multiplies out to 100 × 5 × 5 × 1,000 = 2,500,000 leaf iterations.
My performance issues could be caused by the heavy usage of replaceable parts for each table element and also the nested repeat.fors
From the numbers, it’s probably the combination of
the number of elements needed to be created upfront
the bindings needed to be created for those elements
We will need a lot of optimization around this area. One strategy is a custom mutation strategy or initialization strategy: populate the array being repeated with a limited number of model objects (say 20), then add more after short intervals until rendering is complete. Chunking the data this way should help.
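The chunking idea above could be sketched like this in plain JS (names are illustrative, not Aurelia API; `scheduler` defaults to setTimeout so the browser can paint between batches):

```javascript
// Sketch: instead of assigning all model objects to the repeated array at
// once, push them in small batches so the UI stays responsive in between.
function populateInChunks(target, source, chunkSize = 20,
                          scheduler = (fn) => setTimeout(fn, 0)) {
  return new Promise((resolve) => {
    let index = 0;
    function pushNext() {
      // Push one chunk; the repeater picks up the array mutation.
      target.push(...source.slice(index, index + chunkSize));
      index += chunkSize;
      if (index < source.length) {
        scheduler(pushNext); // yield to the browser before the next chunk
      } else {
        resolve(target);
      }
    }
    pushNext();
  });
}
```

In a view-model you would bind repeat.for to `target` (e.g. `this.items`) and call `populateInChunks(this.items, allRows)` once the data arrives.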
In the non-chunking mode, there’s a long period of UI work from the browser and little to no scripting work. I’m not sure why, but I suspect some heavy layout work triggered by CSS pseudo-elements is going on; maybe that’s why it’s slow?