Hi,
Most of the datasets we receive are point cloud datasets, and they are often very large.
When creating a Production with the adaptive tiling option set to use 16 GB of RAM, it takes approx. 10 mins to process each tile, and the current dataset has a few thousand tiles.
I have seen the same performance over a 2-3 year period, regardless of the processor being used.
I've been advised that CPU speed makes a difference, but that does not appear to be the case.
Can there be renewed focus applied to the production process when using PointCloud data in an effort to speed this up?
You don't need to limit processing to 16 GB if you don't need to retouch the tiles later. It works just fine with 64 GB or 128 GB tiles, but you lose the ability to retouch them because there are too many triangles. Also, adaptive tiling does not correctly calculate the RAM needed for point clouds or for the Ultra setting.
To add more context to this - and hopefully a sense of urgency.
The current dataset I'm working with continues to grow, requiring more areas to be processed.
See attached image.
The current area has 576 tiles. It has been processing for 30 hrs using two (2) engines and has completed 192 tiles.
Extrapolating that out, it will take about 91 hrs in total (nearly 4 days) to complete all the tiles, and then at least another 24 hrs to generate the LOD before it is finished.
Once that completes, I have 6 more areas to build (5 shown in the attached image). That will be more than 20 days of processing, using 2 engines.
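For anyone who wants to check the numbers, here is a quick back-of-the-envelope calculation using only the figures from this post, assuming the per-tile rate stays constant:

```python
# Throughput extrapolation from the figures in the post.
tiles_total = 576
tiles_done = 192
hours_elapsed = 30          # wall-clock time so far, with two engines

rate = tiles_done / hours_elapsed               # tiles per hour (~6.4)
hours_total = tiles_total / rate                # ~90 h for all 576 tiles
hours_remaining = hours_total - hours_elapsed   # ~60 h still to go
hours_with_lod = hours_total + 24               # at least 24 h more for LOD

print(f"{rate:.1f} tiles/h, {hours_total:.0f} h for tiles, "
      f"{hours_with_lod:.0f} h incl. LOD")
# → 6.4 tiles/h, 90 h for tiles, 114 h incl. LOD
```

So each remaining area of similar size would cost roughly 114 hrs (about 4.75 days), which is where the "more than 20 days" estimate for the 6 remaining areas comes from.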
This really needs some optimisation.