Pivot performance problem

Hi there!
Please check this example: https://snippet.webix.com/a9yy0831
As you can see, it takes a lot of time to load, which surprises me, because the dataset has only 10k records. For comparison, this demo from your docs uses a dataset with 148,702 records and still works pretty fast.
I also tried to use your web worker, but with no success: I end up with a “Data cannot be cloned, out of memory” error message (you can uncomment the webworker-related line and see for yourself).
Is there any way to avoid this problem or to improve performance somehow? Maybe there is a way to display only N data records initially and add the rest of the records on some user interaction?

Hi there again, Webix team.
Please respond when you have time.
Thank you

Hi there, Webix team.
Any ideas concerning my request? Thank you

Hello @vitaliy_kretsul,
Sorry for the delay.

As far as I can see, “data processing” takes up the most time:

data loading: 0.265869140625 ms
data parsing: 21.98388671875 ms
data processing: 111673.06616210938 ms
data rendering: 7718.9228515625 ms

Please check the next example:
https://snippet.webix.com/hdmzbnbm

Maybe there is a way to display only N data records initially, and add all the rest data records on some user interaction?

There is no way to use dynamic loading (it makes no sense here, as Pivot needs all the data for its calculations). If you only need to show the data, without real pivoting, just use a Webix datatable, which does support dynamic loading and paging, for example as sketched below.
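A minimal sketch of such a datatable, assuming a placeholder /some/data endpoint that supports paged requests (the URL, chunk size and page size are purely illustrative):

```js
webix.ui({
  rows: [
    {
      view: "datatable",
      autoConfig: true,        // build columns from the incoming data
      pager: "recordsPager",   // link to the pager defined below
      datafetch: 100,          // dynamic loading: fetch records in chunks of 100
      url: "/some/data"        // placeholder endpoint with server-side paging
    },
    { view: "pager", id: "recordsPager", size: 50, group: 5 }
  ]
});
```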
As an alternative solution, all calculations can be done by a script on the server, and the already grouped data together with the structure settings can simply be loaded into the Pivot. It will then look and work the same way, only without calculations on the client side. This feature is called external data processing. However, this is still not dynamic loading (such data is also loaded all at once), but it frees the client from the computational load.
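For illustration, a minimal sketch of what such server-side grouping could look like; the record fields (Product, Sales) and the loading helper are hypothetical, and the exact Pivot options for external data processing should be taken from the docs:

```js
// Hypothetical server-side step: pre-aggregate the raw records so the
// client-side Pivot only has to display the grouped result.
const records = loadRecordsSomehow(); // hypothetical helper, e.g. a DB query

// Group by a hypothetical "Product" field and sum a hypothetical "Sales" field
const totals = {};
for (const rec of records) {
  totals[rec.Product] = (totals[rec.Product] || 0) + rec.Sales;
}

// Only the aggregated rows (plus the matching structure settings)
// are then sent to the client and loaded into the Pivot.
const grouped = Object.entries(totals).map(
  ([Product, Sales]) => ({ Product, Sales })
);
```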

Hello @annazankevich,
Many thanks for your response.
Can you help me understand why exactly this dataset takes so much time to process? Especially considering that this other dataset, with almost 15 times more data lines, loads pretty fast:
https://snippet.webix.com/j7s3nmxv

Also, why exactly does the webworker throw an error?

Thanks in advance!

@vitaliy_kretsul,

Can you help me understand why exactly this dataset takes so much time to process?
Most likely, it’s the structure, namely the grouping by rows: ["LastAccessTime"]. A timestamp like this is almost unique per record, so the Pivot has to build and aggregate a huge number of row groups; if we change the structure, everything is OK.
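One possible client-side workaround, as a minimal sketch: truncate the timestamp to a coarser, hypothetical LastAccessDate field before the data reaches the Pivot, so grouping produces far fewer row groups (the measure field SomeValue is also hypothetical):

```js
const rawData = [ /* the original 10k records with a LastAccessTime field */ ];

// Add a day-level bucket so the Pivot groups by date rather than by an
// almost unique timestamp (LastAccessDate is an extra, hypothetical field).
const bucketed = rawData.map(rec => ({
  ...rec,
  LastAccessDate: new Date(rec.LastAccessTime).toISOString().slice(0, 10)
}));

webix.ui({
  view: "pivot",
  data: bucketed,
  structure: {
    rows: ["LastAccessDate"],                          // coarse grouping field
    values: [{ name: "SomeValue", operation: "sum" }]  // hypothetical measure
  }
});
```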

Also, why exactly does the webworker throw an error?

I think the error occurs because of the amount of data. For example, if we reduce it to 7k records, there is no such error: Code Snippet

Unfortunately, as mentioned above, the only solution for such a case is to do all of the necessary data aggregation/calculations on the server side and then process the grouped result on the client.