Hello Pavel,
thank you for the update and for taking the time to look into this. I'll keep following this space.
Cheers,
Renato
Hello Renato,
the problem here is that the eventStore data is updated by an assignment statement. The store has been designed primarily to load, update, create and delete data from a server. Therefore, it is currently necessary to call

eventStore.removeAll(false)

before each assignment statement that replaces the complete data of the store.

Just a side note: generally, it is not good practice to replace the whole dataset, as the user may have edited the old one in the view by dragging, resizing, etc., and replacing it would then lead to data loss. However, there may be some rare use cases when it is needed, such as yours. For those, removing the old records as above does the trick.
Hello Saki,
thank you, this fixed it!
I am aware of the data loss, as I really do want to replace the whole collection. The user interface contains a select element that determines which dataset should be visible in the scheduler, so replacing the whole store is needed.
Is there an alternative approach to this with Bryntum Scheduler? Can I use another store type that is meant only for local updates, not used as middleware for network CRUD?
Can you tell me how efficient the removeAll call on the event store is?
As usual, there is no single best approach, so I'll just give you some ideas:
- If you have relatively small datasets and the user does not change them too often, then your approach should be just fine.
- If you have large datasets or the user changes them often, then you could consider a card layout with different scheduler instances, each loaded with its own data. You wouldn't switch data but the active (visible) card.
- The above approach would also work well if you load data from the server.
There is nothing wrong with approach #1 so if it works for you then just keep it. You only need that removeAll() call for the view to update.
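The card-layout idea above could be sketched like this (hypothetical helper, not a Bryntum API): one scheduler container per dataset is created up front, and the select element only toggles which container is visible.

```typescript
// Hypothetical sketch of the card-layout approach: each card owns one
// pre-built scheduler with its data; switching only changes visibility.
type DatasetKey = string;

function visibleCards(
  allCards: DatasetKey[],
  selected: DatasetKey
): Map<DatasetKey, boolean> {
  // every card keeps its own loaded data; switching never touches the
  // stores, it only changes which card is shown
  return new Map(
    allCards.map((key): [DatasetKey, boolean] => [key, key === selected])
  );
}
```

On a select change you would then set each container's display style from this map, leaving every scheduler instance and its data intact.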
The dataset is actually pretty big, but we'll load the respective areas based on the visible region, so the store shouldn't be bloated with data.
Infinite scroll would be awesome and a great help for dynamic loading. I saw on the forum that it's planned for Q1 2020; can I somehow vote for it?
Approach #1 currently works best for our scenario, so we'll stick with it. Good to hear that it's still an efficient method.
Thank you for your recommendations and help.
Cheers,
Renato
Hello.
I checked this test case with the latest scheduler 2.2.2. I noticed it throws exceptions because there are duplicate ids, so I updated the code to avoid them:

const subresources = arr.map((el, index) => {
    return {
        // an exception gets thrown on deletion from the store when a resource
        // and a sub-resource share the same id
        //id: "resource-" + id.toString(), // commented out: it causes an id collision and events are not drawn
        id: "subresource-" + (id + index).toString(), // added + index here
        name: "subresource-" + i * 10 + index
    };
});

let model: FooResource = new FooResource({
    id: "resource-" + id.toString(),
    name: "resource-" + i,
    foobars: subresources,
});

After that, the demo runs and the reported issue is no longer reproducible: the row height doesn't grow with each iteration and is calculated properly.

Is this issue still reproducible on your end with the latest release, if you don't remove all events manually?
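Since the exceptions came from colliding ids, a small guard like the following (a hypothetical helper, not part of the scheduler) can catch duplicates before the data is handed to the store, so collisions fail fast instead of surfacing as rendering or deletion errors:

```typescript
// Hypothetical guard: collect every id from the resources and their
// sub-resources and report any duplicates before loading the data.
interface ResourceData {
  id: string;
  foobars?: { id: string }[];
}

function findDuplicateIds(resources: ResourceData[]): string[] {
  const seen = new Set<string>();
  const duplicates: string[] = [];
  for (const resource of resources) {
    // a resource id and a sub-resource id must not collide either
    for (const id of [resource.id, ...(resource.foobars ?? []).map(s => s.id)]) {
      if (seen.has(id)) duplicates.push(id);
      else seen.add(id);
    }
  }
  return duplicates;
}
```

Running this over the generated test data would have flagged the original "resource-" + id collision immediately.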
Hello Maxim, I apologize for the late reply.
Correct, using different ids fixed the issue; the bug is no longer present.
I reported that in a different thread here on the forum, and I see you've already added a fix for it. Nice!
Cheers,
Renato