Mitigating Perspective Distortion-induced Shape Ambiguity in Image Crops
ECCV 2024
University of Illinois Urbana-Champaign
Objects undergo varying amounts of perspective distortion as they move across a camera's field of view. Models that predict 3D from a single image often operate on crops around the object of interest and ignore the location of the object in the camera's field of view. We note that discarding this location information further exaggerates the inherent ambiguity of making 3D inferences from 2D images and can even prevent models from fitting the training data. To mitigate this ambiguity, we propose Intrinsics-Aware Positional Encoding (KPE), which incorporates information about the location of crops in the image and the camera intrinsics. Experiments on three popular 3D-from-a-single-image tasks (depth prediction on NYU, 3D object detection on KITTI and nuScenes, and predicting 3D shapes of articulated objects on ARCTIC) show the benefits of KPE.
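To make the idea concrete, below is a minimal sketch (not the authors' released code) of one way a crop's location and the camera intrinsics could be folded into a positional encoding: the crop boundaries are converted to viewing-ray coordinates using the intrinsics and then passed through a sinusoidal encoding. The function name kpe_features, the corner-based parameterization, and the number of frequencies are illustrative assumptions.

import numpy as np

def kpe_features(crop_box, K, num_freqs=4):
    """Sketch of an intrinsics-aware positional encoding for an image crop.

    crop_box: (u_min, v_min, u_max, v_max) pixel coordinates of the crop.
    K:        3x3 camera intrinsics matrix [[fx, 0, cx], [0, fy, cy], [0, 0, 1]].
    Returns a 1D feature vector that could be concatenated with (or added to)
    the crop's image features. Illustrative assumption, not the paper's exact form.
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    u_min, v_min, u_max, v_max = crop_box

    # Express the crop boundaries as viewing-ray coordinates (tangents of the
    # angles subtended at the camera), which ties the crop's pixel location
    # to the camera geometry rather than to the raw pixel grid.
    rays = np.array([
        (u_min - cx) / fx, (v_min - cy) / fy,
        (u_max - cx) / fx, (v_max - cy) / fy,
    ])

    # Standard sinusoidal encoding of the ray coordinates at several frequencies.
    freqs = 2.0 ** np.arange(num_freqs)          # 1, 2, 4, 8, ...
    angles = rays[:, None] * freqs[None, :]      # shape (4, num_freqs)
    return np.concatenate([np.sin(angles).ravel(), np.cos(angles).ravel()])

# Example: a 200x200 crop in a 640x480 image with fx = fy = 500.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
feat = kpe_features((100, 80, 300, 280), K)
print(feat.shape)   # (32,) for num_freqs = 4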
Citation
@inproceedings{Prakash2024Ambiguity,
  author    = {Prakash, Aditya and Gupta, Arjun and Gupta, Saurabh},
  title     = {Mitigating Perspective Distortion-induced Shape Ambiguity in Image Crops},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year      = {2024}
}