Threadripper

Discuss all Cyclone related issues here.
smacl
V.I.P Member
Posts: 161
Joined: Tue Jan 25, 2011 5:12 pm
Full Name: Shane MacLaughlin
Company Details: Atlas Computers Ltd
Company Position Title: Managing Director
Country: Ireland
Linkedin Profile: Yes
Location: Ireland

Re: Threadripper

Post by smacl » Wed Apr 10, 2019 9:33 am

lastools wrote:
Fri Apr 05, 2019 9:22 pm
Hello from @LAStools,

Yep. LASzip decompression on a single core is typically CPU bound (unless the data comes straight across the Internet). But LASzip was designed with parallelism for decompression in mind, such that the use of multiple cores when decompressing a single LAZ file is theoretically possible. It's just a matter of writing (and then maintaining) the code. Each "chunk" of 50000 points is compressed independently from all other chunks, so that 4, 8, 16, 48, or 128 chunks could be decompressed simultaneously if you have fast file read access and many cores. However, someone (you?) would need to implement a multi-threaded decompressor. All the code is open source so go right ahead ... (-:

Regards,

Martin @rapidlasso
Hi Martin,

First off, I'm a big fan of the LAZ format and the advantages it brings in moving big point clouds around. In terms of an optimal format, however, I think more is needed than a faster reader/writer. The issue as I see it with LAS, E57 and other point cloud transfer formats is that they invariably have to be translated to a faster, spatially indexed format (e.g. octree, MNO, hierarchy of voxels, etc.) in order to be used efficiently in interactive point cloud analysis. There is a clear requirement to quickly extract subsets of points based on various spatial queries or rendering culls, and to load or unload tiles or similar larger groups from storage. It would be great to have an open point cloud exchange format that also worked as an efficient native format.

That said, if I get a bit of free time, I'll have a look at multi-threading the LAZ I/O.
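To make that concrete, here's a minimal sketch of the chunk-parallel idea Martin describes, using zlib-compressed blobs as a stand-in for LASzip chunks (the real LASzip API is not shown, and all function names here are illustrative):

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def decompress_chunk(blob):
    # Stand-in for decoding one LASzip chunk; each chunk is
    # self-contained, so workers share no decoder state.
    return zlib.decompress(blob)

def decompress_parallel(chunks, workers=4):
    # Chunks are independent, so they can be decoded concurrently;
    # map() preserves the original chunk order in the output.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return b"".join(pool.map(decompress_chunk, chunks))
```

Since zlib releases the GIL while decompressing, even threads give a real speedup here; a production LASzip decompressor in C++ would likewise keep one decoder state per worker.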

Shane

Scott
V.I.P Member
Posts: 741
Joined: Tue Mar 29, 2011 7:39 pm
Full Name: Scott Page
Company Details: Scott Page Design- Architectural service
Company Position Title: Owner
Country: USA
Linkedin Profile: No

Re: Threadripper

Post by Scott » Wed Apr 10, 2019 3:29 pm

smacl wrote:
Wed Apr 10, 2019 9:33 am
lastools wrote:
Fri Apr 05, 2019 9:22 pm
Hello from @LAStools,
Yep. LASzip decompression on a single core is typically CPU bound (unless the data comes straight across the Internet). But LASzip was designed with parallelism for decompression in mind, such that the use of multiple cores when decompressing a single LAZ file is theoretically possible. It's just a matter of writing (and then maintaining) the code. Each "chunk" of 50000 points is compressed independently from all other chunks, so that 4, 8, 16, 48, or 128 chunks could be decompressed simultaneously if you have fast file read access and many cores. However, someone (you?) would need to implement a multi-threaded decompressor. All the code is open source so go right ahead ... (-:
Regards,

Martin @rapidlasso
Hi Martin,
First off, I'm a big fan of the LAZ format and the advantages it brings in moving big point clouds around. In terms of an optimal format, however, I think more is needed than a faster reader/writer. The issue as I see it with LAS, E57 and other point cloud transfer formats is that they invariably have to be translated to a faster, spatially indexed format (e.g. octree, MNO, hierarchy of voxels, etc.) in order to be used efficiently in interactive point cloud analysis. There is a clear requirement to quickly extract subsets of points based on various spatial queries or rendering culls, and to load or unload tiles or similar larger groups from storage. It would be great to have an open point cloud exchange format that also worked as an efficient native format.
That said, if I get a bit of free time, I'll have a look at multi-threading the LAZ I/O,
Shane
An interesting topic (and analysis). Perhaps it is time to start a new thread on 'point cloud transfer formats'?

lastools
V.I.P Member
Posts: 143
Joined: Tue Mar 16, 2010 3:06 am
Full Name: Martin Isenburg
Company Details: rapidlasso - fast tools to catch reality
Company Position Title: creators of LAStools and LASzip
Country: Germany
Skype Name: isenburg
Linkedin Profile: Yes

Re: Threadripper

Post by lastools » Wed Apr 10, 2019 11:13 pm

Hello from @LAStools,
First off, I'm a big fan of the LAZ format and the advantages it brings in moving big point clouds around. In terms of an optimal format, however, I think more is needed than a faster reader/writer. The issue as I see it with LAS, E57 and other point cloud transfer formats is that they invariably have to be translated to a faster, spatially indexed format (e.g. octree, MNO, hierarchy of voxels, etc.) in order to be used efficiently in interactive point cloud analysis. There is a clear requirement to quickly extract subsets of points based on various spatial queries or rendering culls, and to load or unload tiles or similar larger groups from storage. It would be great to have an open point cloud exchange format that also worked as an efficient native format.
Shane, there is an (open) extension to the LAZ format that does exactly that. It is called LASindex or LAX. It was optimized for 2D queries (aka aerial and mobile LiDAR collects) and is based on a 2D quadtree decomposition of the plane, but there is nothing preventing us from extending this to a full 3D version using an octree. The nice thing is that it works on top of the existing LAZ or LAS file, and even for other seekable formats that LASlib has a driver for, such as NASA's QFIT format or Terrasolid's old BIN format. The effectiveness of this spatial index depends on the order of the points in the file. The free lasoptimize.exe - which arranges the points in a space-filling curve order - can improve this order to near optimum. Below is a video of me explaining details about LASindex and the LAX format at the ELMF conference in Salzburg. And the really, really nice thing: it's already implemented in the tools and in the LASzip DLL. Just add '-lax' to your laszip.exe command and you have spatially indexed files. Or use lasoptimize.exe instead of laszip.exe to compress the points into a "better order" for spatial queries. And you can readily enable spatially-accelerated "area-of-interest" queries for reading LAS or LAZ files when you use LASlib or the LASzip DLL in your own software.
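For readers unfamiliar with this kind of index, a toy 2D quadtree along these lines (illustrative only - not the actual LAX file layout or API) shows why area-of-interest queries become cheap: whole cells that miss the query window are rejected without touching any of their points.

```python
class Quadtree:
    # Toy 2D quadtree over (x, y, payload) points. query() returns the
    # payloads inside an axis-aligned window; subtrees whose cell does
    # not overlap the window are skipped entirely, which is the kind of
    # spatially-accelerated lookup LAX enables on LAS/LAZ files.
    def __init__(self, x0, y0, x1, y1, capacity=8):
        self.bounds = (x0, y0, x1, y1)
        self.capacity = capacity
        self.points = []
        self.children = None

    def insert(self, x, y, payload):
        x0, y0, x1, y1 = self.bounds
        if not (x0 <= x < x1 and y0 <= y < y1):
            return False  # point lies outside this cell
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append((x, y, payload))
                return True
            self._split()
        return any(c.insert(x, y, payload) for c in self.children)

    def _split(self):
        # Subdivide the cell into four quadrants and push points down.
        x0, y0, x1, y1 = self.bounds
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        self.children = [Quadtree(x0, y0, mx, my, self.capacity),
                         Quadtree(mx, y0, x1, my, self.capacity),
                         Quadtree(x0, my, mx, y1, self.capacity),
                         Quadtree(mx, my, x1, y1, self.capacity)]
        for p in self.points:
            any(c.insert(*p) for c in self.children)
        self.points = []

    def query(self, qx0, qy0, qx1, qy1):
        x0, y0, x1, y1 = self.bounds
        if qx1 <= x0 or qx0 >= x1 or qy1 <= y0 or qy0 >= y1:
            return []  # cell misses the query window: prune subtree
        hits = [p[2] for p in self.points
                if qx0 <= p[0] < qx1 and qy0 <= p[1] < qy1]
        if self.children:
            for c in self.children:
                hits.extend(c.query(qx0, qy0, qx1, qy1))
        return hits
```

In the same spirit, LAX maps each quadtree cell to the parts of the LAS/LAZ file where its points live, so a query only needs to read those parts - which is also why the point order in the file matters so much.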


youtu.be/FMcBywhPgdg

Regards,

Martin @rapidlasso

smacl
V.I.P Member
Posts: 161
Joined: Tue Jan 25, 2011 5:12 pm
Full Name: Shane MacLaughlin
Company Details: Atlas Computers Ltd
Company Position Title: Managing Director
Country: Ireland
Linkedin Profile: Yes
Location: Ireland

Re: Threadripper

Post by smacl » Mon Apr 15, 2019 6:31 pm

Hi Martin, thanks for the feedback and links; this is something I need to spend more time on and look into in depth. It would certainly need an octree for very many applications, e.g. structures of any kind. Out of interest, can you store additional arbitrary information in a LAS file? Looking at where we're going with deep learning, I'm of the opinion that the combination of images and point clouds is going to become increasingly important, as will the instrument setup and orientation needed to make use of this in terrestrial scanning.

