Most current neural networks for reconstructing surfaces from point clouds ignore sensor poses and operate only on raw point locations. Sensor visibility, however, holds meaningful information about space occupancy and surface orientation. In this paper, we present two simple ways to augment raw point clouds with visibility information, so that it can be leveraged directly by surface reconstruction networks with minimal adaptation. Our proposed modifications consistently improve both the accuracy of the generated surfaces and the ability of the networks to generalize to unseen shape domains.
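As a rough illustration of the idea, the sketch below augments each point with the unit direction toward the sensor that observed it, producing a 6-channel input a reconstruction network can consume after widening its first layer. This is a minimal sketch, not the paper's exact pipeline: the function name `augment_with_visibility` and the assumption that a per-point sensor position is available are ours.

```python
import numpy as np

def augment_with_visibility(points, sensor_positions):
    """Append per-point visibility information as extra input channels.

    points:           (N, 3) array of raw point locations.
    sensor_positions: (N, 3) array giving, for each point, the position of
                      the sensor that observed it (assumed known).

    Returns an (N, 6) array: xyz plus the unit point-to-sensor direction.
    """
    rays = sensor_positions - points  # point-to-sensor vectors
    dirs = rays / np.linalg.norm(rays, axis=1, keepdims=True)
    return np.concatenate([points, dirs], axis=1)

# Usage: feed the augmented (N, 6) cloud to a surface reconstruction
# network whose first layer accepts 6 input channels instead of 3.
pts = np.random.rand(1024, 3)
sensors = np.tile(np.array([[0.0, 0.0, 2.0]]), (1024, 1))
augmented = augment_with_visibility(pts, sensors)
print(augmented.shape)  # (1024, 6)
```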
@inproceedings{sulzer2022deep,
  title={Deep surface reconstruction from point clouds with visibility information},
  author={Sulzer, Raphael and Landrieu, Loic and Boulch, Alexandre and Marlet, Renaud and Vallet, Bruno},
  booktitle={2022 26th International Conference on Pattern Recognition (ICPR)},
  pages={2415--2422},
  year={2022},
  organization={IEEE}
}