lastools wrote: ↑Fri Apr 05, 2019 9:22 pm
Hello from @LAStools,
Yep. LASzip decompression on a single core is typically CPU bound (unless the data comes straight across the Internet). But LASzip was designed with parallelism for decompression in mind such that the use of multi-cores when decompressing a single LAZ file is theoretically possible. It's just a matter of writing (and then maintaining) the code. Each "chunk" of 50000 points is compressed independently of all other chunks, so that 4, 8, 16, 48, or 128 chunks could be decompressed simultaneously if you have fast file read access and many cores. However, someone (you?) would need to implement a multi-threaded decompressor. All the code is open source so go right ahead ... (-:
Regards,
Martin @rapidlasso
Hi Martin,

First off, big fan of the LAZ format and the advantages it brings in moving big point clouds around. In terms of an optimal format, however, I think more is needed than a faster reader/writer. The issue as I see it with LAS, E57, and other point cloud transfer formats is that they invariably have to be translated into a faster spatially indexed format (e.g. an octree, MNO, a hierarchy of voxels, etc.) before they can be used efficiently for interactive point cloud analysis. There is a clear requirement to quickly extract subsets of points based on various spatial queries or rendering culls, and to load or unload tiles or similar larger groups from storage (a toy sketch of the kind of index I mean is below). It would be great to have an open point cloud exchange format that also worked as an efficient native format.
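To make the "native format" point concrete, here is a toy node-based spatial index of the kind those viewers build internally. The AABB and OctreeNode names are purely illustrative and not taken from any particular format: a viewer walks the tree, keeps only the nodes whose cells intersect a query box (or view frustum), and loads just those nodes' points from storage.

```cpp
#include <array>
#include <cstdint>
#include <memory>
#include <vector>

struct AABB { double min[3], max[3]; };      // axis-aligned bounding box

// Illustrative octree node: each node owns the points stored at its
// level, and up to eight children subdivide the parent's cube.
struct OctreeNode {
  AABB cell;                                  // this node's cube
  std::vector<uint32_t> point_ids;            // points stored at this level
  std::array<std::unique_ptr<OctreeNode>, 8> kids;
};

// Collect every node whose cell intersects the query box; a viewer
// would then fetch only these nodes' points from disk.
void query(const OctreeNode& n, const AABB& box,
           std::vector<const OctreeNode*>& hits) {
  for (int a = 0; a < 3; ++a)                 // reject if disjoint on any axis
    if (n.cell.max[a] < box.min[a] || n.cell.min[a] > box.max[a]) return;
  hits.push_back(&n);
  for (const auto& k : n.kids)
    if (k) query(*k, box, hits);
}
```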
That said, if I get a bit of free time, I'll have a look at multi-threading the LAZ I/O. A rough sketch of the chunk-parallel read loop I have in mind follows.
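This is only a minimal sketch in the spirit of what Martin describes: since each chunk of (up to) 50000 points is compressed independently, worker threads can pull chunk indices from a shared atomic counter and decode concurrently. Note that ChunkInfo and decode_chunk() are hypothetical stand-ins; the current public LASzip API does not expose the chunk table or a per-chunk decoder, so the real work would be plumbing those through.

```cpp
#include <atomic>
#include <cstdint>
#include <thread>
#include <vector>

struct Point { double x, y, z; };            // simplified point record

struct ChunkInfo {                           // hypothetical: parsed from the
  uint64_t byte_offset;                      // LAZ chunk table
  uint32_t point_count;                      // up to 50000 points per chunk
};

// Hypothetical per-chunk decoder. Chunks are compressed independently,
// so calls share no state and are safe to run concurrently. Stubbed
// here; the real body would run the arithmetic decoder over one chunk.
void decode_chunk(const uint8_t* laz, const ChunkInfo& info,
                  std::vector<Point>& out) {
  (void)laz;
  out.resize(info.point_count);              // placeholder only
}

// Decode all chunks with n_threads workers pulling from a shared index.
void decode_all(const uint8_t* laz, const std::vector<ChunkInfo>& chunks,
                std::vector<std::vector<Point>>& out, unsigned n_threads) {
  out.resize(chunks.size());
  std::atomic<size_t> next{0};               // shared work index
  std::vector<std::thread> pool;
  for (unsigned t = 0; t < n_threads; ++t)
    pool.emplace_back([&] {
      for (size_t i = next++; i < chunks.size(); i = next++)
        decode_chunk(laz, chunks[i], out[i]);
    });
  for (auto& th : pool) th.join();
}
```

Writing each chunk's points into its own pre-sized slot means the workers never contend on output, and the atomic counter keeps faster threads busy if chunk decode times vary.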
Shane