But I really need to fetch all the nodes in one go.
This is how the script works in its complete form:
Load the elements in the given directory (the part where the problem is)
Load a CSV file containing essentially the same data as the stored elements
Add new elements to eZ
Modify existing elements in eZ
Delete stale elements (an element is removed if it exists in eZ but not in the CSV file)
So, the first part has to load all the nodes in order to compare them with the CSV file. If not all elements are loaded, I cannot tell whether an element from the CSV has to be added or modified.
The key used for the comparison is composed of two attributes (not the name, because that would be too easy...), which is why I have to load the dataMap of each node...
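The add/modify/delete decision described above boils down to a three-way diff keyed on the two comparison attributes. A minimal sketch of that logic, assuming the elements can be represented as dicts (the key field names `ref` and `code` are placeholders, not the real attribute identifiers, and this is plain Python, not the eZ API):

```python
# Hypothetical composite key: the two attributes used for comparison
# ("ref" and "code" are placeholder names, not real attribute identifiers).
KEY_FIELDS = ("ref", "code")

def key_of(row):
    """Build the comparison key from the two attributes."""
    return tuple(row[f] for f in KEY_FIELDS)

def diff(ez_rows, csv_rows):
    """Classify CSV rows against the elements already stored in eZ.

    Returns (to_add, to_update, to_delete) where:
      - to_add:    rows present in the CSV but not in eZ
      - to_update: (existing, incoming) pairs present in both
      - to_delete: elements present in eZ but missing from the CSV
    """
    ez = {key_of(r): r for r in ez_rows}
    new = {key_of(r): r for r in csv_rows}
    to_add = [new[k] for k in new.keys() - ez.keys()]
    to_update = [(ez[k], new[k]) for k in new.keys() & ez.keys()]
    to_delete = [ez[k] for k in ez.keys() - new.keys()]
    return to_add, to_update, to_delete
```

This also makes the original constraint explicit: the `to_delete` set can only be computed correctly if *every* stored element was loaded, which is why a partial fetch is not enough.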
Your solution is very interesting, because I can reuse it in my other import scripts. And it's wonderful not to have to edit the CSV file to re-run the cronjob from where it crashed!
I didn't realize eZ was storing so much data in its caches when you use PHP directly.
> I didn't realize eZ was storing so much data in its caches when you use PHP directly.

It does. For years we have wanted to add a cache handler that manages the in-memory cache and provides a general cache API with handler support, so we could move parts of the cache to, for instance, memcached. That would fix these memory issues, simplify the cache code, and possibly allow some optimizations while we're at it.
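That pluggable cache handler had not shipped at the time of this post, so purely as an illustration of the idea, here is a minimal sketch (in Python, all names invented) of a general cache API with swappable handlers, where the default in-process handler could later be replaced by a memcached-backed one:

```python
from abc import ABC, abstractmethod

class CacheHandler(ABC):
    """Backend-agnostic cache interface; concrete handlers plug in here."""
    @abstractmethod
    def get(self, key): ...
    @abstractmethod
    def set(self, key, value): ...
    @abstractmethod
    def clear(self): ...

class InMemoryHandler(CacheHandler):
    """Default handler: a plain dict kept in process memory."""
    def __init__(self):
        self._store = {}
    def get(self, key):
        return self._store.get(key)
    def set(self, key, value):
        self._store[key] = value
    def clear(self):
        # Being able to free this store is what would fix the memory
        # growth seen in long-running PHP import scripts.
        self._store.clear()

# A MemcachedHandler implementing the same three methods could be swapped
# in without touching any code that talks to the cache through CacheHandler.
```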
About imports: I plan to release a new import extension very soon, SQLIImport. You'll be able to handle any data source (XML, CSV...) by creating only one PHP class, with a really simplified API for creating and retrieving content objects.
Stay tuned! :)
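The SQLIImport API was not public at the time of this post, so the following is only a guess at what "one class per data source" might look like: a small Python sketch in which each source implements two methods and a generic runner drives any of them (all names here are invented, not the extension's real API):

```python
from abc import ABC, abstractmethod

class ImportHandler(ABC):
    """One class per data source: subclass and implement two methods."""
    @abstractmethod
    def rows(self):
        """Yield one dict per element found in the data source."""
    @abstractmethod
    def process(self, row):
        """Create or update one content object from a row."""

def run_import(handler):
    """Generic runner: drives any handler, whatever the source format."""
    count = 0
    for row in handler.rows():
        handler.process(row)
        count += 1
    return count

class ListHandler(ImportHandler):
    """Example source: an in-memory list standing in for parsed CSV rows."""
    def __init__(self, rows):
        self._rows = rows
        self.created = []
    def rows(self):
        return iter(self._rows)
    def process(self, row):
        self.created.append(row["name"])
```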