Hi asd,
Thanks for the information!
The fix is targeted for the next patch release (4.2.2).
All the best,
Alex
Thank you for the update.
I tried profiling how 4k events on a single resource perform with the scheduler 3.1.9 release. On my machine it took about 6s from start until the view reacted to the mouse wheel (i.e. was responsive). With the 4.2.1 release the same data is processed in about 10 seconds. The performance tickets I opened above should help bring that time back to the original level.
I also noticed that when I load 4k events into 3.1.9 I get the same problem with page responsiveness, because all 4k events are rendered in both versions. Version 4.2.1 is meant to be faster with the DOM, because it uses some techniques to limit the number of rendered events.
That doesn't help when all events are at the same time and assigned to a single resource. But if the events are spread out in time and divided among multiple resources, version 4.2.1 should render far fewer events.
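To illustrate the general idea (this is a plain JavaScript sketch, not Bryntum's actual renderer): only events that intersect the visible time window and the visible slice of resources need DOM elements, so spreading events out lets most of them be skipped.

```javascript
// Illustrative sketch only - NOT Bryntum internals. The renderer only needs
// DOM elements for events intersecting the visible time window and the
// visible resources; everything else can be culled.
function visibleEvents(events, viewStart, viewEnd, visibleResourceIds) {
    return events.filter(e =>
        visibleResourceIds.has(e.resourceId) &&
        e.startDate < viewEnd &&
        e.endDate > viewStart
    );
}

// 4000 one-hour events spread over 40 days and 100 resources...
const events = [];
for (let i = 0; i < 4000; i++) {
    events.push({
        id         : i,
        resourceId : i % 100,
        startDate  : new Date(2021, 6, 1 + (i % 40), 9),
        endDate    : new Date(2021, 6, 1 + (i % 40), 10)
    });
}

// ...but a viewport showing 5 days and 20 resources touches only a fraction.
const visible = visibleEvents(
    events,
    new Date(2021, 6, 1),
    new Date(2021, 6, 6),
    new Set([...Array(20).keys()])
);
console.log(visible.length); // 100 of 4000 events need rendering
```

If all 4000 events share one start time and one resource, the same filter returns all of them, which is exactly the worst case described above.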
This ticket is meant to account for the case with too many events per resource: https://github.com/bryntum/support/issues/3146
The 9000 events are not assigned to one resource; they are divided among approximately 300 resources.
I also tried dividing those 4k events among multiple resources (40 events per resource). I get about 2s with 3.1.9 until the page is responsive, compared to 5s with 4.2.1. Half of those 5s is spent in the scheduling engine. I've opened another performance ticket to investigate that too: https://github.com/bryntum/support/issues/3154
Did I cover all the problems you have experienced here?
I should point out that in 4.2.0 the scheduler got a new feature, enabled by default, which could affect scrolling performance for huge datasets: https://bryntum.com/docs/scheduler/#Scheduler/feature/StickyEvents You can try disabling it to improve scroll performance:
features : {
    stickyEvents : false
}
Maxim Gorkovsky wrote: ↑Thu Jul 08, 2021 9:29 am
The 9000 events are not assigned to one resource; they are divided among approximately 300 resources.
I also tried dividing those 4k events among multiple resources (40 events per resource). I get about 2s with 3.1.9 until the page is responsive, compared to 5s with 4.2.1. Half of those 5s is spent in the scheduling engine. I've opened another performance ticket to investigate that too: https://github.com/bryntum/support/issues/3154
Thanks for the analysis. As you mentioned, the time taken is 5s for 4k events, so I assume that for 9k events the time taken increases exponentially as the event count grows. Maybe this is what is causing the problem in our use case and why the further requests are failing.
As we are blocked on performance and customers are facing this issue in a production environment, we look forward to having the performance issue fixed and patched in the next version at the earliest.
We have already set stickyEvents to false.
Thanks,
Abhay
I assume that for 9k events the time taken increases exponentially as the event count grows
The time increase is linear for data processing and a little worse for rendering, more like n log n. Current measurements (100 events per resource) show 6s for 4k events and 18s for 9k events.
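As a back-of-envelope sanity check of those growth models (the constants below are fitted to the 4k → 6s measurement, not taken from any real profiler output):

```javascript
// Rough sanity check of the growth models, fitted to the 4k -> 6s measurement.
// Illustrative constants only, not real profiler data.
const n1 = 4000, t1 = 6;   // measured: 4k events take 6s
const n2 = 9000;           // predict the time for 9k events

// Linear model: t = k * n
const linear = t1 * (n2 / n1);                                  // 13.5s

// n log n model: t = k * n * log2(n)
const nLogN = t1 * (n2 * Math.log2(n2)) / (n1 * Math.log2(n1)); // ~14.8s

console.log(linear.toFixed(1), nLogN.toFixed(1)); // 13.5 14.8
```

Both models predict less than the measured 18s for 9k events, which is why "linear for data processing and a bit worse for rendering" fits the numbers better than exponential growth.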
As we are blocked on performance and customers are facing this issue in a production environment, we look forward to having the performance issue fixed and patched in the next version at the earliest.
I would like to make sure that we are talking about the same performance issue and set priorities correctly. The problems are a bit different for the two cases:
Earlier you mentioned that your data looks more like case 2:
The 9000 events are not assigned to one resource; they are divided among approximately 300 resources.
That means we should prioritize engine optimization over rendering optimization. Did I understand you correctly?
We took the latest 4.2.2 and tested with our data set. Following are the observations:
I tried loading 3 sets of 9k events, both assigned to a single resource and spread across 100 different resources. Time growth is linear, and the worst-case scenario spends most of its time laying out 27k events in a single resource.
There is a problem when you replace the data, though, i.e. when you call add and pass a dataset with ids that already exist in the store. When the store finds such a record, it rebuilds its indices, which is indeed a slow process. But if you have no duplicate ids, it should work just fine.
We've opened another ticket about replacing the data: https://github.com/bryntum/support/issues/3219
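To illustrate why duplicate ids are expensive (this is a plain JavaScript sketch, not Bryntum's actual Store implementation): the fast path just appends to an id index, while a duplicate forces the record list and index to be rebuilt.

```javascript
// Plain JS illustration - NOT Bryntum internals. A tiny store with an id
// index: unique ids take the cheap append path, duplicates force a rebuild.
class TinyStore {
    constructor() {
        this.records  = [];
        this.byId     = new Map();
        this.rebuilds = 0;
    }

    add(newRecords) {
        for (const rec of newRecords) {
            if (this.byId.has(rec.id)) {
                // Duplicate id: replace the old record and rebuild the
                // whole index - the slow path.
                this.records = this.records.filter(r => r.id !== rec.id);
                this.records.push(rec);
                this.byId = new Map(this.records.map(r => [r.id, r]));
                this.rebuilds++;
            }
            else {
                // Unique id: cheap append, no rebuild.
                this.records.push(rec);
                this.byId.set(rec.id, rec);
            }
        }
    }
}

const store = new TinyStore();
store.add([{ id : 1 }, { id : 2 }, { id : 3 }]); // all unique: 0 rebuilds
store.add([{ id : 2 }, { id : 4 }]);             // one duplicate: 1 rebuild
console.log(store.rebuilds, store.records.length); // 1 4
```

With thousands of duplicate ids, that rebuild happens over and over, which matches the slowdown described when replacing data via add.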
Could you please provide more info about the failing use case? It would be good to have a glance at your data set, or a generator function that imitates your data set, and the approach you take to add it to the store. For instance, this is what I use to test adding events to the store:
// scheduler/examples/basic/app.js
const scheduler = new Scheduler({
    appendTo    : 'container',
    crudManager : {
        autoLoad  : true,
        transport : {
            load : {
                url : 'data.json' // this dataset contains only a single resource record
            }
        }
    },
    features  : { stickyEvents : false },
    startDate : new Date(2017, 1, 7, 8),
    endDate   : new Date(2017, 1, 7, 22),
    tbar      : [
        {
            type     : 'button',
            icon     : 'b-icon b-icon-trash',
            color    : 'b-red',
            text     : 'Add events',
            onAction : () => {
                const events = [];
                for (let i = 1; i <= 9000; i++) {
                    events.push({
                        // id : i,
                        resourceId : 'a',
                        name       : 'Team scrum ' + i,
                        startDate  : '2017-02-07 11:00',
                        endDate    : '2017-02-07 13:00',
                        location   : 'Some office',
                        eventType  : 'Meeting',
                        eventColor : '#ff0000',
                        iconCls    : 'b-fa b-fa-users'
                    });
                }
                scheduler.eventStore.add(events);
            }
        },
        {
            type     : 'button',
            icon     : 'b-icon b-icon-trash',
            color    : 'b-red',
            text     : 'Add events (100 per resource)',
            onAction : () => {
                const events = [];
                for (let i = 0; i < 9000; i++) {
                    events.push({
                        // id : i,
                        resourceId : Math.floor(i / 100) + 1,
                        name       : 'Team scrum ' + i,
                        startDate  : '2017-02-07 11:00',
                        endDate    : '2017-02-07 13:00',
                        location   : 'Some office',
                        eventType  : 'Meeting',
                        eventColor : '#ff0000',
                        iconCls    : 'b-fa b-fa-users'
                    });
                }
                scheduler.eventStore.add(events);
            }
        },
        {
            type     : 'button',
            icon     : 'b-icon b-icon-trash',
            color    : 'b-red',
            text     : 'Add 100 res',
            onAction : () => {
                const resources = [];
                for (let i = 1; i <= 100; i++) {
                    resources.push({ id : i, name : `Resource ${i}` });
                }
                scheduler.resourceStore.add(resources);
            }
        }
    ]
});
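For reference, here is a standalone version of the generator logic from that snippet, with unique ids assigned so the duplicate-id slow path is avoided. The field names mirror the snippet above; treat it as a sketch to adapt to your own data shape.

```javascript
// Standalone sketch of the generator used in the example above, producing
// unique ids so that eventStore.add() never hits the duplicate-id slow path.
function makeEvents(count, perResource) {
    const events = [];
    for (let i = 0; i < count; i++) {
        events.push({
            id         : i + 1,                           // unique id
            resourceId : Math.floor(i / perResource) + 1, // 'perResource' events per resource
            name       : 'Team scrum ' + (i + 1),
            startDate  : '2017-02-07 11:00',
            endDate    : '2017-02-07 13:00'
        });
    }
    return events;
}

const events = makeEvents(9000, 100);
console.log(events.length);                        // 9000
console.log(events[events.length - 1].resourceId); // 90 (9000 events / 100 per resource)
console.log(new Set(events.map(e => e.id)).size);  // 9000 unique ids
```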
Maxim Gorkovsky wrote: ↑Mon Jul 26, 2021 3:12 pm
There is a problem when you replace the data, though, i.e. when you call add and pass a dataset with ids that already exist in the store. When the store finds such a record, it rebuilds its indices, which is indeed a slow process. But if you have no duplicate ids, it should work just fine.
We've opened another ticket about replacing the data: https://github.com/bryntum/support/issues/3219
Yes, as mentioned, we are passing a dataset with ids already present in the store. The same dataset was passed with version 3.1.9. We will take the latest and re-test this scenario once the respective ticket is resolved in 4.2.3.
Thanks,
Abhay