This README file was generated on 2025-10-26 by Giancarlo Pereira

GENERAL INFORMATION

Title of Dataset: Data of LookUp3D: Data-Driven 3D Scanning

Author Information
Name: Giancarlo Pereira
ORCID: 0000-0002-5869-6886
Institution: New York University
Email: giancarlo.pereira@nyu.edu

Principal Investigator Information
Name: Daniele Panozzo
ORCID: 0000-0003-1183-2454
Institution: New York University
Email: panozzo@nyu.edu

Date of data collection: 2025

Geographic location of data collection: New York, NY, USA

Funding sources: NSF OAC-2411349, NSF OAC-2411221

SHARING/ACCESS INFORMATION

Licenses/restrictions placed on the data: Creative Commons Attribution 4.0 International

Links to other publicly accessible locations of the data:

  • www.github.com/geometryprocessing/scanner
  • www.github.com/geometryprocessing/scanner-capture

DATA & FILE OVERVIEW

NOTE: for more information on each folder, see its respective README.md file.

DATA

  • analog_calibration: raw data captured during the calibration process with our analog set-up
  • dynamic_scenes: raw data captured for two dynamic scenes (bunny_drop and fan)
  • lookup_table: lookup table calibrated with our analog set-up and used for the dynamic scenes (see the sketch after this list)
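
As a rough illustration of how a lookup table of this kind can be queried, the sketch below performs a per-pixel nearest-neighbor search in Python. The file name, array layout, and matching criterion are assumptions for illustration only; the GitHub repositories above contain the actual processing code.

    import numpy as np

    # Hypothetical layout: one pixel's calibrated intensity response at each
    # candidate depth, shape (num_depths, num_channels). The file name is a
    # placeholder, not the dataset's actual format.
    table = np.load("lookup_table.npy")
    observed = np.array([0.31, 0.72, 0.05])  # example intensities at one pixel

    # Nearest-neighbor search: pick the depth whose calibrated response best
    # matches the observation (L2 distance).
    errors = np.linalg.norm(table - observed, axis=1)
    print("best-matching depth index:", int(np.argmin(errors)))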

POINT_CLOUDS

  • extra: contains extra point clouds not rendered in the paper
  • figure_3_projector_comparison: 8 PLYs (4 bunny, 4 pawn) comparing traditional structured light and LookUp3D with an expensive DLP projector and a cheap LCD projector (Figure 3 in main text)
  • figure_4_static_9channel: static reconstructions using the 9-channel pattern (Figure 4 in main text)
  • figure_5_static_11channel_lights_on: static reconstructions using the 11-channel pattern with ceiling lights on (Figure 5 in main text)
  • figure_6_static_to_dynamic_implementation_details: 4 PLYs showing a single frame of a water balloon reconstructed at 450 fps: the naive method has many failures, denoising improves the reconstruction, coarse-to-fine (C2F) leverages neighborhood information, and temporal consistency (TC) leverages the previous frame (Figure 6 in main text)
  • figure_8_fan_speed_colormap: 5 frames of the fan spinning, colored with the coolwarm colormap from 0 to 5 meters per second (Figure 8 in main text; see the colormap sketch after this list)
  • figure_9_commercial_comparison: 4 PLYs comparing our method with the Kinect, RealSense, and Photoneo on the spinning fan (Figure 9 in main text)
  • figure_10_30_dynamic_100fps: dynamic scenes captured at 100 fps (Figure 10 in main text and Figure 30 in Supplemental text)
  • figure_13_28_29_dynamic_450fps: dynamic scenes captured at 450 fps (Figure 13 in main text; Figures 28 and 29 in Supplemental text)
  • figure_14_dynamic_1450fps: dynamic scene captured at 1450 fps (Figure 14 in main text)
  • figure_16_hand_lights_on_300fps: 5 frames of a hand reconstructed with ceiling lights on (Figure 16 in Supplemental text)
  • figure_17_bunny_in_large_volume_of_reconstruction: 3 frames of the bunny moving through a larger volume of reconstruction (Figure 17 in Supplemental text)
  • figure_19_pawn: 12 PLYs of the pawn reconstructed with LookUp3D using various patterns (Figure 19 in Supplemental text)
  • figure_24_spline: comparison of the pawn using our denoised lookup search and a spline-fitted lookup search (Figure 24 in Supplemental text)
  • figure_25_coarse-to-fine: 6 PLYs comparing naive reconstruction and coarse-to-fine in the static setting (Figure 25 in Supplemental text)
  • figure_27_mlp: comparison of the pawn using our naive lookup search and a multilayer perceptron (Figure 27 in Supplemental text)
  • ground_truth_objects: ground truth files against which we compare the pawn, bunny, chair, dodo, and house reconstructions (see the comparison sketch after this list)
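
To reproduce a rough version of the ground-truth comparison, here is a minimal sketch assuming Open3D is installed (pip install open3d). The file paths are placeholders, and the repository's scripts remain the reference implementation for the paper's metrics.

    import numpy as np
    import open3d as o3d

    # Placeholder paths; substitute any reconstruction/ground-truth pair.
    recon = o3d.io.read_point_cloud("figure_19_pawn/pawn.ply")
    truth = o3d.io.read_point_cloud("ground_truth_objects/pawn.ply")

    # Per-point distance from the reconstruction to the ground-truth cloud.
    dists = np.asarray(recon.compute_point_cloud_distance(truth))
    print(f"mean error: {dists.mean():.4f}, max error: {dists.max():.4f}")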
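
The speed encoding in figure_8_fan_speed_colormap can be reproduced approximately with Matplotlib; the sketch below maps per-point speeds to coolwarm colors over the stated 0-5 m/s range. The speed values are example inputs, not data read from the files.

    import numpy as np
    from matplotlib import colormaps, colors

    speeds = np.array([0.0, 1.2, 3.7, 5.0])      # example per-point speeds in m/s
    norm = colors.Normalize(vmin=0.0, vmax=5.0)  # the 0-5 m/s range from Figure 8
    rgba = colormaps["coolwarm"](norm(speeds))   # RGBA colors in [0, 1]
    print(rgba[:, :3])                           # RGB channels, one row per point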

METHODOLOGICAL INFORMATION

Description of methods used for collection/generation of data: LookUp3D and traditional structured-light scanning.

Instrument- or software-specific information needed to interpret the data: the point clouds can be opened with any software that reads PLY files (we recommend MeshLab). To process any of the shared data, please check our GitHub repository, which contains the necessary Python libraries and scripts.
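
For a quick programmatic look at a point cloud, the following minimal sketch uses Open3D (an assumption; any PLY reader works) with a placeholder file path:

    import open3d as o3d

    # Placeholder path; any PLY in this dataset works.
    pcd = o3d.io.read_point_cloud("figure_19_pawn/pawn.ply")
    print(pcd)  # reports the number of points
    o3d.visualization.draw_geometries([pcd])  # interactive viewer, similar to MeshLab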

People involved with sample collection, processing, analysis and/or submission: Giancarlo Pereira, Yidan Gao, Yurii Piadyk, David Fouhey, Claudio T. Silva, Daniele Panozzo.