
Camera positioning ideas for point cloud color and pose export #356

@TokyoWarfare

Description


Hello,

Inspired by:
https://www.youtube.com/watch?v=EUbAaJp-XmY&lc=UgyJukqQBdabqPMk6xV4AaABAg.ATFRM6RYNrbATJozZe1K5m

My initial plan for the camera coloring was like this:

Do the SLAM, obtain the final trajectory with coordinates & timestamps, then place the camera along the trajectory (+ extrinsics) based on the frame time.
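The idea above can be sketched roughly like this: given a time-sorted SLAM trajectory, look up each frame's position by interpolating at its timestamp and apply a fixed extrinsic offset. The trajectory format and the world-frame lever arm are simplifying assumptions for illustration (a real rig would rotate the offset by the sensor orientation):

```python
# Sketch: place a camera on a SLAM trajectory by frame timestamp.
from bisect import bisect_left

def interpolate_position(trajectory, t):
    """Linearly interpolate (x, y, z) on a time-sorted trajectory.

    trajectory: list of (time, x, y, z) tuples, sorted by time.
    """
    times = [p[0] for p in trajectory]
    i = bisect_left(times, t)
    if i == 0:
        return trajectory[0][1:]
    if i == len(trajectory):
        return trajectory[-1][1:]
    t0, *p0 = trajectory[i - 1]
    t1, *p1 = trajectory[i]
    a = (t - t0) / (t1 - t0)
    return tuple(p0[k] + a * (p1[k] - p0[k]) for k in range(3))

def camera_position(trajectory, frame_time, extrinsic_offset=(0.0, 0.0, 0.0)):
    """Interpolated sensor position plus a fixed lever-arm offset
    (applied in the world frame here, for simplicity)."""
    x, y, z = interpolate_position(trajectory, frame_time)
    dx, dy, dz = extrinsic_offset
    return (x + dx, y + dy, z + dz)
```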

I'm quite confident that sensors like the Portalcam must be doing something like this.

In addition, the video shows a post-processing workflow, which looks awesome, since packing everything into bags can make them huge. For example, the Osmo 360 camera captures videos of tens of GB once you record in the 20-minute range.

For the Osmo, what I've done so far is follow this repo:
https://github.com/dji-sdk/Osmo-GPS-Controller-Demo

I managed to get geotagged panoramic shots; since I'm pushing RTK corrections, they should be quite accurate. I plan to capture RINEX as well, to try to refine further with PPK, but that isn't done yet. My point is:

For stills, the pictures will be very close to their true locations, as the lidar is connected to the same GPS. So at this point there could be three solutions:

  1. Try to color the cloud with the camera positions as imported.

  2. Refine the camera positions using the method in the video; this is too tedious for a vehicle-mounted sensor capture with a large number of pictures. Edit: (the practical alternative would be to do this in RealityCapture and, after matching control points, run SfM.)

  3. Position the camera along the SLAM path based on GNSS time (+ extrinsic position shift). There could also be an option to shift time, moving the whole dataset backwards or forwards to match positions, since consumer cameras are not genlocked. Maybe the workflow would be to do this with a group of 200 frames to obtain the time shift, and then dump the whole image dataset.
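The time-shift search in option 3 could be as simple as a grid search: try candidate offsets on a small batch of GNSS-timestamped frames and keep the one that best aligns them with the SLAM trajectory. This is a minimal sketch under that assumption; the function names and nearest-neighbour lookup are illustrative:

```python
# Sketch: estimate the clock offset between a non-genlocked camera and the
# SLAM trajectory by scanning candidate shifts over a small frame batch.

def best_time_shift(frames, trajectory, shifts):
    """frames: list of (time, x, y, z); trajectory: time-sorted (t, x, y, z).
    Returns the shift minimizing mean squared position error."""
    def pos_at(t):
        # nearest-neighbour lookup for brevity; interpolation would be better
        return min(trajectory, key=lambda p: abs(p[0] - t))[1:]

    def cost(shift):
        total = 0.0
        for t, x, y, z in frames:
            px, py, pz = pos_at(t + shift)
            total += (x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2
        return total / len(frames)

    return min(shifts, key=cost)
```

Once the offset is found on the calibration batch, the same shift can be applied to every frame timestamp in the dataset.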

Then recolor and, ideally, be able to export the camera poses in COLMAP format so 3DGS can be computed in third-party software like LichtFeld Studio or Postshot. Edit: or eventually export the cameras and recolor in third-party software, as SfM software tends to do a good job with color blending, applying masks, etc. Hence, use the SLAM pipeline for basic coloring, or as a prep of the data for final coloring in third-party software.
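For the COLMAP export mentioned above, the text `images.txt` format stores one pose per image as a world-to-camera quaternion (QW QX QY QZ) and translation t = -R @ C, where C is the camera center, followed by a (possibly empty) 2D-points line. A minimal writer sketch, with an identity-rotation simplification as a stated assumption:

```python
# Sketch: write camera centers into a COLMAP-style images.txt so poses can be
# handed to third-party 3DGS trainers. Identity rotation is an assumption;
# a real export would convert each camera's orientation to a quaternion.

def write_colmap_images(path, cameras):
    """cameras: list of (image_id, name, (cx, cy, cz)) with identity rotation,
    so the world-to-camera translation is simply the negated center."""
    lines = ["# IMAGE_ID, QW, QX, QY, QZ, TX, TY, TZ, CAMERA_ID, NAME"]
    for image_id, name, (cx, cy, cz) in cameras:
        qw, qx, qy, qz = 1.0, 0.0, 0.0, 0.0  # identity rotation (assumption)
        tx, ty, tz = -cx, -cy, -cz           # t = -R @ C with R = I
        lines.append(f"{image_id} {qw} {qx} {qy} {qz} {tx} {ty} {tz} 1 {name}")
        lines.append("")  # empty POINTS2D line per COLMAP convention
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
```

A matching `cameras.txt` (intrinsics) and `points3D.txt` would also be needed for most downstream tools.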

I cannot yet provide sample data, but I'm 99.9% finished with the rig in terms of device integration. I just need to find a way to export geotagged video frames, as the panoramic 120 MP stills take ~4 s and the 30 MP ones ~2 s to process, which seems a bit too slow for vehicle-mounted capture even at 50 km/h. I have a forward-facing 4K 30 fps camera that is captured in the bag too; maybe the 30 MP at 2 s could be enough, but as said, I've not yet tested this version of the rig. Parts for the final assembly are arriving these days. I will be able to provide data in 1-2 weeks, yet I wanted to start this discussion to polish ideas.

Edit: at the moment I don't know how to extract geotagged frames.
dji-sdk/Osmo-GPS-Controller-Demo#18

If everything is enclosed within a proprietary format, my plan, since the capture is triggered from the ESP32, would be to either:

1) Log the capture start/end times.
2) In post, add the geo information based on the PPK data, or log the NMEA/GGA feed of the ESP32 to a file and merge in post... something like that... But it adds more work... :\
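The merge in option 2 could be sketched as: parse the logged GGA sentences into (time, lat, lon) fixes and tag each capture with the fix nearest in time. The sentence in the test is a generic GGA example; the field layout follows the NMEA 0183 GGA definition:

```python
# Sketch: parse logged NMEA GGA sentences and match capture timestamps to
# the nearest GNSS fix, to geotag frames in post.

def parse_gga(sentence):
    """Return (utc_seconds, lat_deg, lon_deg) from a $GPGGA/$GNGGA sentence."""
    f = sentence.split(",")
    hh, mm, ss = int(f[1][0:2]), int(f[1][2:4]), float(f[1][4:])
    utc = hh * 3600 + mm * 60 + ss

    def dm_to_deg(dm, hemi, deg_digits):
        # NMEA encodes coordinates as ddmm.mmmm (lat) / dddmm.mmmm (lon)
        deg = int(dm[:deg_digits])
        minutes = float(dm[deg_digits:])
        val = deg + minutes / 60.0
        return -val if hemi in ("S", "W") else val

    lat = dm_to_deg(f[2], f[3], 2)
    lon = dm_to_deg(f[4], f[5], 3)
    return utc, lat, lon

def tag_capture(capture_utc, fixes):
    """Return the parsed fix closest in time to capture_utc."""
    return min(fixes, key=lambda fix: abs(fix[0] - capture_utc))
```

With PPK, the same matching step would run against the post-processed positions instead of the live GGA feed; note that GGA carries UTC time-of-day only, so the date has to come from the log itself.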
