Nvidia RTX 2080 Ti

CAD Workstations, Network Attached Storage [NAS], Servers, Monitors, SSDs, Imaging Systems, Field Kit: Targets & Tripods etc.
jedfrechette
V.I.P Member
Posts: 1236
Joined: Mon Jan 04, 2010 7:51 pm
Full Name: Jed Frechette
Company Details: Lidar Guys
Company Position Title: CEO and Lidar Supervisor
Country: USA
Linkedin Profile: Yes
Location: Albuquerque, NM
Has thanked: 62 times
Been thanked: 219 times

Re: Nvidia RTX 2080 Ti

Post by jedfrechette »

smacl wrote: Sun Oct 28, 2018 3:30 pm Not sure where other software houses are with this, but the latest version of SCC has its first bit of GPU code for real-time normal generation,
At the risk of going off-topic, is this normal data accessible to the user, beyond just viewport display? I've been looking for a good solution to estimate high-quality surface normals for mobile lidar data, so if you've got a solution that might work for that, I'd certainly be interested in testing it.
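For reference, the sort of baseline I've been comparing against is plain PCA over the k nearest neighbours of each point, with the normal flipped to face the scanner. A minimal numpy/scipy sketch of that baseline (k=16 is an arbitrary choice, and for mobile data you'd want a per-point sensor position from the trajectory rather than a single origin):

[code]
# Baseline PCA normal estimation over k nearest neighbours.
# k and the single sensor_origin are illustrative choices only.
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, sensor_origin, k=16):
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)          # neighbour indices, shape (N, k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        nbhd = points[nbrs] - points[nbrs].mean(axis=0)
        # Normal = eigenvector of the smallest eigenvalue of the covariance.
        _, vecs = np.linalg.eigh(nbhd.T @ nbhd)
        n = vecs[:, 0]
        # Orient towards the sensor so normals are consistent.
        if np.dot(n, sensor_origin - points[i]) < 0:
            n = -n
        normals[i] = n
    return normals
[/code]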
Jed
fillway
I have made 30-40 posts
Posts: 31
Joined: Thu Sep 06, 2018 6:51 pm
Full Name: Philip Weyhe
Company Details: 3DGS
Company Position Title: CTO
Country: USA
Linkedin Profile: Yes
Has thanked: 1 time

Re: Nvidia RTX 2080 Ti

Post by fillway »

smacl wrote: Tue Oct 30, 2018 8:12 am At a guess, hardware RT cores wouldn't be of much benefit to laser scanning, as points produced by laser scanning are already effectively the results of tracing a single ray from the laser to the first point of incidence (and back again). From what I gather, the rays traced need to bounce off triangles for these cores. There's a good article on it here. That said, once the hardware is there, no doubt somebody will figure out other uses for it. With respect to points from mirrors, it should be possible to remove them using cloud-to-cloud comparison from different setups, or other comparisons between sets of distances. This could certainly benefit from being written for the GPU, though not RT cores specifically. The algorithms for static scanning and SLAM would be different, and having the instrument position would help. If you've got a sample data set with these types of issues that you can share in E57, I'll have a look at it. It is worth remembering that the GPU is already underused for general purpose processing, so us poor code monkeys have to get that sorted first for all sorts of processing :)
I don't have a particular data set, just general questions I was hoping to pose to those most likely to have the best understanding of the fundamentals of LS technology.

Still speaking from a standpoint of relative ignorance, I guess the question becomes whether the technology is useful internally in the scanner itself, to get cleaner data while it is being captured, as well as helping later in processing. As these parts have just begun shipping, and, as you said, the code is still somewhat lacking in even taking advantage of the GPU in the first place, I guess time will tell!

Many thanks to you poor code monkeys figuring this kind of stuff out so we don't have to! Utilizing multiple cores, CUDA, and now (maybe) RT just makes the capture and processing of data that much faster, and helps out everyone in the industry!
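For anyone following along, the simplest version of the cloud-to-cloud comparison Shane mentions is just a support check between overlapping setups: a point in one scan with no neighbour in the other scan within some tolerance is a candidate ghost, since mirror returns usually appear from only one setup. A rough sketch; the 5 cm tolerance is a made-up value, and a real implementation would also use the instrument positions:

[code]
# Flag points in scan_a that have no support in an overlapping scan_b.
# tol is an illustrative value, not a recommendation.
import numpy as np
from scipy.spatial import cKDTree

def flag_unsupported(scan_a, scan_b, tol=0.05):
    dist, _ = cKDTree(scan_b).query(scan_a, k=1)   # nearest-neighbour distances
    return dist > tol                              # True = suspect point

# Usage: clean_a = scan_a[~flag_unsupported(scan_a, scan_b)]
[/code]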
Philip Weyhe
CTO @ 3DGS.io
linkedin.com/in/fillway
www.3DGS.io
smacl
Global Moderator
Posts: 1409
Joined: Tue Jan 25, 2011 5:12 pm
Full Name: Shane MacLaughlin
Company Details: Atlas Computers Ltd
Company Position Title: Managing Director
Country: Ireland
Linkedin Profile: Yes
Location: Ireland
Has thanked: 627 times
Been thanked: 657 times

Re: Nvidia RTX 2080 Ti

Post by smacl »

jedfrechette wrote: Tue Oct 30, 2018 5:08 pm
smacl wrote: Sun Oct 28, 2018 3:30 pm Not sure where other software houses are with this, but the latest version of SCC has its first bit of GPU code for real-time normal generation,
At the risk of going off-topic, is this normal data accessible to the user, beyond just viewport display? I've been looking for a good solution to estimate high-quality surface normals for mobile lidar data, so if you've got a solution that might work for that, I'd certainly be interested in testing it.
The normals aren't currently generated beyond display resolution, though this is something I'm looking to add as a mechanism for fully automated facade extraction and improved meshing. The plan is not to store a normal for every point in the scan, as that would be prohibitively expensive in terms of storage, but just store the significant ones (e.g. edges and points of curvature). I'll stick up another post once I've made some more progress.
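By way of illustration, one common way of flagging those significant points is the surface variation measure lam0 / (lam0 + lam1 + lam2) built from the PCA eigenvalues of each point's neighbourhood: flat areas score near zero, while edges and strongly curved regions score high. A sketch of that heuristic, not necessarily how SCC will do it (k and the threshold are arbitrary):

[code]
# Keep only points whose neighbourhood shows significant curvature,
# using surface variation from PCA eigenvalues. k and threshold are
# arbitrary illustrative values.
import numpy as np
from scipy.spatial import cKDTree

def significant_mask(points, k=16, threshold=0.05):
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    variation = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        nbhd = points[nbrs] - points[nbrs].mean(axis=0)
        lam = np.linalg.eigvalsh(nbhd.T @ nbhd)     # ascending eigenvalues
        variation[i] = lam[0] / max(lam.sum(), 1e-12)
    return variation > threshold                    # True = edge / curvature
[/code]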
Shane MacLaughlin
Atlas Computers Ltd
www.atlascomputers.ie

SCC Point Cloud module
jedfrechette
V.I.P Member
Posts: 1236
Joined: Mon Jan 04, 2010 7:51 pm
Full Name: Jed Frechette
Company Details: Lidar Guys
Company Position Title: CEO and Lidar Supervisor
Country: USA
Linkedin Profile: Yes
Location: Albuquerque, NM
Has thanked: 62 times
Been thanked: 219 times

Re: Nvidia RTX 2080 Ti

Post by jedfrechette »

smacl wrote: Wed Oct 31, 2018 8:03 am The plan is not to store a normal for every point in the scan, as that would be prohibitively expensive in terms of storage, but just store the significant ones (e.g. edges and points of curvature).
Thinking about this naively, it seems like the worst case scenario would be if you encoded normals for all points as an array of floats. That would essentially double the storage requirements, i.e. now you need to store 6 floats for each point rather than just 3. That's a price I'd happily pay. If you used shorts instead of floats for the normal components, that would cut the size in half, before even thinking about compression. To me it doesn't seem like the extra cost of storing normals for all points is that bad compared to the relatively difficult task of figuring out which points are significant.
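To put those back-of-the-envelope numbers in code (assuming float32 positions, which, as it turns out below, isn't how SCC stores them):

[code]
# Storage for 1 billion points: positions vs. added normals.
n = 1_000_000_000
xyz_float32     = n * 3 * 4   # 12 GB of positions
normals_float32 = n * 3 * 4   # +12 GB: storage doubles
normals_int16   = n * 3 * 2   #  +6 GB with shorts instead of floats
print(xyz_float32 / 1e9, normals_float32 / 1e9, normals_int16 / 1e9)
# -> 12.0 12.0 6.0 (GB)
[/code]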
Jed
smacl
Global Moderator
Posts: 1409
Joined: Tue Jan 25, 2011 5:12 pm
Full Name: Shane MacLaughlin
Company Details: Atlas Computers Ltd
Company Position Title: Managing Director
Country: Ireland
Linkedin Profile: Yes
Location: Ireland
Has thanked: 627 times
Been thanked: 657 times

Re: Nvidia RTX 2080 Ti

Post by smacl »

jedfrechette wrote: Sun Nov 04, 2018 7:58 pm
smacl wrote: Wed Oct 31, 2018 8:03 am The plan is not to store a normal for every point in the scan, as that would be prohibitively expensive in terms of storage, but just store the significant ones (e.g. edges and points of curvature).
Thinking about this naively, it seems like the worst case scenario would be if you encoded normals for all points as an array of floats. That would essentially double the storage requirements, i.e. now you need to store 6 floats for each point rather than just 3. That's a price I'd happily pay. If you used shorts instead of floats for the normal components, that would cut the size in half, before even thinking about compression. To me it doesn't seem like the extra cost of storing normals for all points is that bad compared to the relatively difficult task of figuring out which points are significant.
I don't think many point cloud packages capable of dealing with large datasets (e.g. interactive at ~1 billion points per tile) store position as floats. Checking a full RGBI point cloud here in SCC, which is just over 1 billion points, it comes out at 9.1 GB uncompressed, or 9 bytes per point to include position, intensity and classification. Without colour it comes out at ~6 bytes per point. Adding 3 floats / 12 bytes per point would add 12 GB, though with normal components in the range -1 to 1 the floating point doesn't actually float, so you'd get much the same accuracy with three bytes, and I'm guessing that for most applications relating to rendering and tracing, 3 bytes per u,v,w would be fine. Unlikely to be an issue, as you'd probably hit speed issues before memory exhaustion on a 64 GB PC, though it would whack up disk usage. We do something similar for storing design separation, object height and density as needed. What's your application that needs normals?
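For what it's worth, that three-byte scheme is only a few lines. A sketch assuming unit normals stored at one byte per component (worst-case error per component is about 1/255, i.e. roughly 0.004):

[code]
# Quantise unit normals to one byte per component and back.
# Components lie in [-1, 1], so map linearly to 0..255.
import numpy as np

def pack_normals(normals):                 # float (N,3) -> uint8 (N,3)
    return np.round((normals + 1.0) * 127.5).astype(np.uint8)

def unpack_normals(packed):                # uint8 (N,3) -> float (N,3)
    n = packed.astype(np.float32) / 127.5 - 1.0
    return n / np.linalg.norm(n, axis=1, keepdims=True)  # renormalise
[/code]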
Shane MacLaughlin
Atlas Computers Ltd
www.atlascomputers.ie

SCC Point Cloud module
geomontgomery
V.I.P Member
Posts: 186
Joined: Thu Sep 13, 2018 3:23 pm
Full Name: George Montgomery
Company Details: ECE Design
Company Position Title: Support Manager
Country: USA
Linkedin Profile: Yes
Has thanked: 20 times
Been thanked: 48 times

Re: Nvidia RTX 2080 Ti

Post by geomontgomery »

Wdigman - Europe wrote: Fri Oct 26, 2018 10:41 am Personally I found that point processing software is CPU intensive, the more cores and the faster the processor is what matters. CPU, solid state drives, and more memory is also where you want to spend your money.
From my personal experience this is also the case: high single-core clock speed is preferred overall, then multi-core speed if the application is multi-threaded, SSD over HDD, and lots (32-64GB) of low-CAS-latency memory. It seems that, with these being the primary contributors to performance, it doesn't matter whether you go with GeForce or Quadro, in which case GeForce is the lower cost relative to performance.

It would be interesting to see some real benchmarks posted somewhere of point cloud performance and different hardware / software combinations.
jedfrechette
V.I.P Member
Posts: 1236
Joined: Mon Jan 04, 2010 7:51 pm
Full Name: Jed Frechette
Company Details: Lidar Guys
Company Position Title: CEO and Lidar Supervisor
Country: USA
Linkedin Profile: Yes
Location: Albuquerque, NM
Has thanked: 62 times
Been thanked: 219 times

Re: Nvidia RTX 2080 Ti

Post by jedfrechette »

smacl wrote: Mon Nov 05, 2018 8:51 am I'm guessing for most applications relating to rendering and tracing 3 bytes per u,v,w would be fine.
You probably don't even need that. When working with meshes, it is very common to store normal maps as 16-bit-per-channel images, and that works just fine for surfaces that are much smoother and more continuous than anything you're going to see in a measured scanner point cloud.
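To put rough numbers on the precision question: quantising each component to b bits over [-1, 1] tilts a normal by, at worst, about the arctan of half a quantisation step, which is a fraction of a degree even at 8 bits:

[code]
# Worst-case angular error from quantising normal components to b bits.
import numpy as np

for b in (8, 16):
    step = 2.0 / (2**b - 1)                     # quantisation step over [-1, 1]
    err = np.degrees(np.arctan(step / 2.0))     # worst-case tilt, degrees
    print(f"{b}-bit: ~{err:.4f} deg")           # ~0.2247 deg / ~0.0009 deg
[/code]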
smacl wrote: Mon Nov 05, 2018 8:51 am What's your application that needs normals?
Surface reconstruction.
Jed
dhirota
V.I.P Member
Posts: 958
Joined: Sun Nov 01, 2009 11:18 pm
Full Name: Dennis Hirota
Company Details: Sam O Hirota Inc
Company Position Title: President
Country: USA
Linkedin Profile: Yes
Location: Hawaii, USA
Has thanked: 87 times
Been thanked: 379 times

Re: Nvidia RTX 2080 Ti

Post by dhirota »

smacl wrote: Mon Nov 05, 2018 8:51 am
jedfrechette wrote: Sun Nov 04, 2018 7:58 pm
smacl wrote: Wed Oct 31, 2018 8:03 am The plan is not to store a normal for every point in the scan, as that would be prohibitively expensive in terms of storage, but just store the significant ones (e.g. edges and points of curvature).
Thinking about this naively, it seems like the worst case scenario would be if you encoded normals for all points as an array of floats. That would essentially double the storage requirements, i.e. now you need to store 6 floats for each point rather than just 3. That's a price I'd happily pay. If you used shorts instead of floats for the normal components, that would cut the size in half, before even thinking about compression. To me it doesn't seem like the extra cost of storing normals for all points is that bad compared to the relatively difficult task of figuring out which points are significant.
I don't think many point cloud packages capable of dealing with large datasets (e.g. interactive at ~1 billion points per tile) store position as floats. Checking a full RGBI point cloud here in SCC, which is just over 1 billion points, it comes out at 9.1 GB uncompressed, or 9 bytes per point to include position, intensity and classification. Without colour it comes out at ~6 bytes per point. Adding 3 floats / 12 bytes per point would add 12 GB, though with normal components in the range -1 to 1 the floating point doesn't actually float, so you'd get much the same accuracy with three bytes, and I'm guessing that for most applications relating to rendering and tracing, 3 bytes per u,v,w would be fine. Unlikely to be an issue, as you'd probably hit speed issues before memory exhaustion on a 64 GB PC, though it would whack up disk usage. We do something similar for storing design separation, object height and density as needed. What's your application that needs normals?
I have been tracking this thread from the beginning. The dialogue on ray tracing, normals from mobile scanning, and surface reconstruction is important moving into the future (depending on where you are headed).

Shane: I have decided to spend a few US$ to help the economy and expand our research efforts by installing an EVGA Nvidia RTX 2080 Ti in our i9-9980XE, 18-core/36-thread, 128GB RAM workstation to test anything that you might produce (hopefully soon), as well as some other software. I found out that the RTX FTW3 2080 Ti requires 3 PCIe slots of space versus the 2 slots the RTX 2080 requires, so I decided to install an RTX 2080 to replace the GTX 1080 in our dual-Xeon, 8-core/16-thread, 256GB RAM workstation with a 75-inch 4K display monitor, since there are no spare PCIe slots (we are looking at replacing the monitor with an 82-inch Samsung 8K display that should be in HNL by the end of this month).

I am sure there is software available that combines ray tracing and laser scanning, but the only one that I have seen in person is our NavVIS SiteMaker. It does a great job stitching the individual M6 scans and images into panoramas with ray tracing and producing the colored point clouds.

See below: the image viewing locations and the 2D floor plan information for everything scanned are generated automatically by the NavVIS software.

[Attachment: 04-14-2019 12-00-07 PM.jpg]

Another image shows the removal of moving objects (it took out the people in front of the HNL Police Station; the heavy HNL traffic still leaves some noise on the right side of the street) and the merging of panoramas and scans (the cars shown are parked on the opposite side of the street). Also notice there are no black tripod circles on the sidewalk, where 147 panoramas were taken approximately 2 meters apart, because the black circles are removed in post-processing with the ray-traced panoramas.

[Attachment: Screenshot from 2019-03-14 13-33-28.jpg]
Dennis Hirota, PhD, PE, LPLS
www.samhirota.com
[email protected]
fredok1970
I have made 20-30 posts
Posts: 21
Joined: Tue Feb 25, 2014 7:04 am
Full Name: Alfredo
Company Details: NAVALSCAN
Company Position Title: CEO
Country: Spain
Skype Name: alfredo.desantiago
Linkedin Profile: Yes
Has thanked: 11 times
Been thanked: 4 times

Re: Nvidia RTX 2080 Ti

Post by fredok1970 »

As the owner of a small software house, I spend most of my hours writing point cloud software

Hi, I have been perusing your website, and your software looks very interesting. Could you send me more info and prices for it?
[email protected]

Thanks
Alfredo de Santiago
CEO / Administrador • NAVALSCAN
e: [email protected]  w: www.navalscan.com

a:TECHPARK Fuerteventura · www.ptfue.com
CANARY ISLANDS · SPAIN · SAUDI ARABIA
smacl
Global Moderator
Posts: 1409
Joined: Tue Jan 25, 2011 5:12 pm
Full Name: Shane MacLaughlin
Company Details: Atlas Computers Ltd
Company Position Title: Managing Director
Country: Ireland
Linkedin Profile: Yes
Location: Ireland
Has thanked: 627 times
Been thanked: 657 times

Re: Nvidia RTX 2080 Ti

Post by smacl »

fredok1970 wrote: Mon Apr 15, 2019 7:36 am As the owner of a small software house, I spend most of my hours writing point cloud software

Hi, I have been perusing your website, and your software looks very interesting. Could you send me more info and prices for it?
[email protected]

Thanks
Hi Alfredo, thanks for the inquiry. I've put a link here showing how to download a demo license, and I will send on prices via email.

Best regards,

Shane
Shane MacLaughlin
Atlas Computers Ltd
www.atlascomputers.ie

SCC Point Cloud module