Saturday 16 January 2010 12:47:14 pm
Removing those objects through the API issues a delete and a commit to the backend index for every single object. This can indeed be quite slow when there are many objects. Can you file an issue for this?

The current trunk contains optimisations for sites with high write traffic. I'll add a config option to eZ Find that disables commits on delete, plus a small script you can run after the delete operations are finished to issue a single commit. (Warning: until that commit, deleted objects will still show up in search results and trigger fatal errors, since the corresponding eZ Publish API calls will obviously fail.)

@Robin: disabling eZ Find altogether during this delete will indeed speed up the whole operation as well.

An alternative would be to patch Solr so that it also accepts a "commit within" parameter for deletes, like it does for updates/additions. I'll have a look at that too; at first sight, the Java code to accomplish this is quite simple.

hth
Paul
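For illustration, the workflow described above (many deletes with commits disabled, then one explicit commit at the end) can be sketched against Solr's XML update API. This is not eZ Find's actual code; the endpoint URL and helper names are assumptions:

```python
# Minimal sketch of "delete without per-object commits, then one commit"
# against Solr's XML update handler. Endpoint URL is an assumption;
# adjust it to your own Solr core.
from urllib import request

SOLR_UPDATE_URL = "http://localhost:8983/solr/update"  # assumed endpoint


def build_delete(doc_id: str) -> bytes:
    """XML body deleting one document; no implicit commit is triggered."""
    return f"<delete><id>{doc_id}</id></delete>".encode()


def build_commit() -> bytes:
    """XML body for the single explicit commit issued after all deletes."""
    return b"<commit/>"


def post_update(body: bytes) -> None:
    """POST an update message to Solr (network call, shown for completeness)."""
    req = request.Request(
        SOLR_UPDATE_URL, data=body, headers={"Content-Type": "text/xml"}
    )
    request.urlopen(req)
```

With this, a cleanup script would loop `post_update(build_delete(...))` over all object IDs and finish with one `post_update(build_commit())`, instead of committing after every delete.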
eZ Publish, eZ Find, Solr expert consulting and training
http://twitter.com/paulborgermans