I am trying to initialize a pivot with a large data set, around 1000 * 500 cells. For this I am sending a large JSON object from the server, but in this situation the browser freezes completely and goes into "not responding" mode.
How do I handle this situation? This is a problem with all the other frameworks on the market, but I see Webix claiming to render 10,000 columns. So when large data is coming from the server side, how do I make the rendering seamless?
There are three points where a performance problem can occur:
- data loading
500k records will result in a few megabytes of JSON data, so depending on connection speed it will take some time
- data parsing
before rendering, the data needs to be parsed from a string into an object model. While the datatable (and the pivot) can use multiple data formats, it is better to avoid XML and HTML data sources for a big dataset. JSON is the best choice.
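As a rough illustration of the parsing stage (the field names below are invented, not taken from any real dataset), a flat JSON payload with short field names is the cheapest format to turn into an object model:

```javascript
// Illustrative only: a compact, flat JSON payload.
// Field names ("region", "product", "sales") are hypothetical.
const payload = '{"data":[' +
  '{"id":1,"region":"East","product":"A1","sales":120},' +
  '{"id":2,"region":"West","product":"B2","sales":85}]}';

// string -> object model: this is the "data parsing" step measured above
const parsed = JSON.parse(payload);
```

The same records expressed as XML would carry far more markup per value, which is why JSON is recommended for large datasets.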
- data processing
the math which is applied to form the data. Locally it takes about 5 seconds for a 500k dataset, but if you are using complex math it may take longer
After that the UI will render the data, but that is actually the simplest part. The rendering speed isn't related to the size of the data, so it can be slow only in extreme cases (more than 10,000 columns).
Place the following code after pivot initialization to get details about the time spent on each stage:
console.time("data loading");
$$("pivot").data.attachEvent("onParse", function(){ console.timeEnd("data loading"); console.time("data parsing"); });
$$("pivot").data.attachEvent("onStoreLoad", function(){ console.timeEnd("data parsing"); console.time("data processing"); });
$$("pivot").$$("data").attachEvent("onBeforeRender", function(){ if (this.count()) { console.timeEnd("data processing"); console.time("data rendering"); } });
$$("pivot").$$("data").attachEvent("onAfterRender", function(){ if (this.count()) webix.delay(function(){ console.timeEnd("data rendering"); }); });
So are you suggesting the JSON approach is the right one? Isn't there any other approach where we don't have to send the whole pivot data initially?
This approach could be that only the data visible at one time is fetched from the server; every time the user scrolls, a new server request is made to fetch new data.
I guess this approach would solve both problems: a. the complete data size and b. the large data parsing, which I guess is the main root cause of the browser going into "not responding" mode.
The datatable itself can load data dynamically, and it works as you have described - while scrolling, the component sends a request to the server for extra data.
But the pivot needs ALL data on the client side. To compute the sum of values for some parameters, it needs to iterate through the whole dataset.
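For reference, dynamic loading in a datatable is configured roughly like this (the URL is a placeholder for your own endpoint; `datafetch` and `loadahead` are the datatable settings that control how many rows are requested per scroll):

```javascript
// Sketch of a dynamically loading datatable config.
// "/data/grid" is a placeholder endpoint that must return JSON portions.
const gridConfig = {
  view: "datatable",
  url: "/data/grid",  // server returns only the requested slice of rows
  datafetch: 100,     // rows loaded per request
  loadahead: 50,      // preload extra rows beyond the visible area
  autoConfig: true
};
// In the browser you would then create it with: webix.ui(gridConfig);
```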
There are two approaches:
- handle all calculations on the server side (in that case you don't need a pivot, just a datatable)
- handle calculations on the client side (in that case you need to load all data)
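For the server-side approach, the aggregation itself is straightforward. A minimal sketch (field names are hypothetical; a real server would do this over its own data store before responding):

```javascript
// Minimal server-side aggregation sketch: group records by one field
// and sum another. Field names ("region", "sales") are hypothetical.
function aggregate(records, groupField, valueField) {
  const sums = {};
  for (const r of records) {
    const key = r[groupField];
    sums[key] = (sums[key] || 0) + r[valueField];
  }
  // rows in this shape can be fed to a plain datatable
  return Object.keys(sums).map(function (name) {
    return { name: name, total: sums[name] };
  });
}

const rows = aggregate(
  [{ region: "East", sales: 120 }, { region: "East", sales: 30 }, { region: "West", sales: 85 }],
  "region", "sales"
); // → [{ name: "East", total: 150 }, { name: "West", total: 85 }]
```

The client then only ever receives the aggregated rows, never the full 500k-record dataset.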
I don’t want to perform any operation on the data. I tried to use the datatable earlier for this approach, but the complex part I saw in it was the headers. I would have 3 or 4 levels of headers with the multi-span concept, and this header would also be repeated, as generated in the case of the pivot. How do I handle this large repeated header with the colspan concept? I see the datatable supports the colspan concept, but I guess with that JSON structure it would be difficult to create such a huge header with 1500 columns… What do you suggest?
The datatable does support colspans, and the count of columns doesn’t matter (the pivot uses the same colspan feature of the datatable for header generation).
The only thing which may be a bit troublesome: for 1500 columns you need a really huge column configuration, which is painful to write manually. But if the headers are periodic, you can use JS code to form the structure (a few nested loops which will fill the columns and headers arrays).
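A sketch of that loop-based approach (group and metric names are invented; the two-row header uses `{ text, colspan }` cells in the first row, with empty placeholders for the spanned columns):

```javascript
// Build 1500 column configs as 375 groups of 4 columns each,
// with a two-row header: group name (colspan) over metric names.
function buildColumns(groupCount, perGroup) {
  const columns = [];
  for (let g = 0; g < groupCount; g++) {
    for (let i = 0; i < perGroup; i++) {
      columns.push({
        id: "col_" + (g * perGroup + i),
        header: [
          // first column of each group carries the colspan cell;
          // the remaining columns get an empty placeholder
          i === 0 ? { text: "Group " + (g + 1), colspan: perGroup } : "",
          "Metric " + (i + 1)
        ],
        width: 90
      });
    }
  }
  return columns;
}

const columns = buildColumns(375, 4); // 1500 column configs
```

The resulting `columns` array can be passed straight into the datatable configuration, so the header never has to be written by hand.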