
About data processing #47

@DIY-Z

Description


hyperIQA/data_loader.py, lines 31 to 35 in 685d4af:

```python
elif dataset == 'koniq-10k':
    if istrain:
        transforms = torchvision.transforms.Compose([
            torchvision.transforms.RandomHorizontalFlip(),
            torchvision.transforms.Resize((512, 384)),
```

```python
sample.append((os.path.join(root, '1024x768', imgname[item]), mos_all[item]))
```

From the two code snippets above, the data is loaded from the '1024x768' folder, i.e. images with a width of 1024 and a height of 768 (a 4:3 landscape aspect ratio). However, Resize((512, 384)) rescales them to a height of 512 and a width of 384, a 3:4 portrait aspect ratio, so the transform inverts the aspect ratio rather than preserving it. I'm curious whether the same processing was applied in the experimental setup of the paper.

Plus: according to the PyTorch documentation for Resize, the size parameter is interpreted as (height, width).
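
For reference, here is a minimal sketch (not taken from the repository) that checks this behavior; the blank PIL image simply stands in for a KonIQ-10k image from the '1024x768' folder:

```python
from PIL import Image
import torchvision

# Dummy stand-in for a KonIQ-10k image; PIL's size argument is (width, height).
img = Image.new('RGB', (1024, 768))

# torchvision interprets the size tuple as (height, width).
resize = torchvision.transforms.Resize((512, 384))
out = resize(img)

print(img.size)  # (1024, 768): width 1024, height 768 -> 4:3
print(out.size)  # (384, 512):  width 384,  height 512 -> 3:4
```

So the resized images come out 384 wide by 512 tall, which is why the aspect ratio flips instead of being preserved.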
