We're re-implementing the method with this repo and ran into a mismatch between the provided Gaze360 annotations and the dataset downloaded from the official Gaze360 site. The official release stores the repeated head/body crops in a shared per-recording folder structure, whereas your annotations reference different filenames, including paths under `Face/`, which do not exist in that structure.
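For concreteness, this is the mismatch we see (all IDs below are placeholders, not actual filenames):

```
# What the official download contains:
imgs/rec_000/head/000000/000000.jpg
imgs/rec_000/body/000000/000000.jpg

# What the provided annotations reference (not present in the download):
Face/...
```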
Questions:
- How are the annotation filenames mapped to the official Gaze360 dataset?
- Is there a preprocessing step or script that restructures the dataset? (We sketched our own guess below this list.)
- Did we miss any dataset preparation details not mentioned in the paper?
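For reference, here is the restructuring we tried ourselves. This is only a guess, not something from your repo: we assume the mapping is driven by `metadata.mat` from the official download (field names `recordings`, `recording`, `person_identity`, `frame` are our assumption based on the official site's documentation), and the flat `Face/<index>.jpg` output naming is purely a placeholder.

```python
# Hypothetical restructuring sketch -- our guess, not from this repo.
import os
import shutil

import scipy.io as sio

GAZE360_ROOT = "/path/to/Gaze360"             # official download root
OUT_DIR = os.path.join(GAZE360_ROOT, "Face")  # hypothetical flat layout

meta = sio.loadmat(os.path.join(GAZE360_ROOT, "metadata.mat"))
recordings = [str(r[0]) for r in meta["recordings"][0]]  # e.g. 'rec_000'
rec_idx = meta["recording"].flatten().astype(int)
person = meta["person_identity"].flatten().astype(int)
frame = meta["frame"].flatten().astype(int)

os.makedirs(OUT_DIR, exist_ok=True)
for i in range(len(rec_idx)):
    # Official layout, as we understand it:
    #   imgs/<recording>/head/<person_identity>/<frame>.jpg
    src = os.path.join(GAZE360_ROOT, "imgs", recordings[rec_idx[i]],
                       "head", f"{person[i]:06d}", f"{frame[i]:06d}.jpg")
    # Flat name the annotations might expect -- purely a placeholder:
    dst = os.path.join(OUT_DIR, f"{i}.jpg")
    if os.path.exists(src):
        shutil.copy(src, dst)
```

If the repo instead derives its filenames some other way (e.g., per-split ordering rather than global metadata row order), a pointer to the actual script would resolve this immediately.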
The paper (p. 4439) explains only how 3D gaze vectors are converted to 2D; it doesn't clarify this filename/folder mapping.
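That part is clear to us. For the record, this is the conversion we implemented, assuming the usual spherical convention (camera frame, looking down -z, y pointing down); please correct us if the paper means a different convention:

```python
import numpy as np

def gaze3d_to_2d(gaze: np.ndarray) -> np.ndarray:
    """Unit 3D gaze vector -> (yaw, pitch) in radians."""
    g = gaze / np.linalg.norm(gaze)
    yaw = np.arctan2(-g[0], -g[2])  # horizontal angle
    pitch = np.arcsin(-g[1])        # vertical angle
    return np.array([yaw, pitch])
```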
Thanks in advance.