Questions on UVQ #4

@samujjwaldey

Description

I have the following questions on UVQ.

  1. I am generating distortion labels with the command "python3 uvq_main.py --input_files='Gaming_1080P-0ce6_orig,20,Gaming_1080P-0ce6_orig.mp4' --output_dir results --model_dir models", as documented at "https://github.com/google/uvq". This produces the file "Gaming_1080P-0ce6_orig_label_distortion.csv" under "results/Gaming_1080P-0ce6_orig/features". The file contains 20 rows and 104 columns of distortion scores, where each of the 20 rows corresponds to one second of the roughly 20-second sample video "Gaming_1080P-0ce6_orig.mp4". Can someone please tell me how to map these distortion scores to the 25 distortion types and their levels (each distortion type appears to have 5 levels) listed in the KADID-10k database:
    http://database.mmsp-kn.de/kadid-10k-database.html
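     For context, here is roughly how I am loading the CSV to check its shape (a minimal sketch using only the Python standard library; the synthetic text below just stands in for the real "Gaming_1080P-0ce6_orig_label_distortion.csv", and I am assuming the file has no header row — please correct me if that assumption is wrong):

     ```python
     import csv
     import io

     def load_scores(csv_text):
         """Parse a headerless CSV of distortion scores into a list of float rows."""
         reader = csv.reader(io.StringIO(csv_text))
         return [[float(cell) for cell in row] for row in reader if row]

     # Synthetic stand-in: 3 rows (seconds) x 4 columns of made-up scores.
     # The real file from my run has 20 rows and 104 columns.
     fake_csv = "\n".join(
         ",".join(f"{0.01 * (r + c):.4f}" for c in range(4)) for r in range(3)
     )

     rows = load_scores(fake_csv)
     print(len(rows), len(rows[0]))  # 3 4
     ```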

  2. What is the range of the distortion scores in the cells of "Gaming_1080P-0ce6_orig_label_distortion.csv"? Is 0 the minimum possible score (indicating the least distortion) and 1 the maximum possible score (indicating the most distortion)?

  3. Is there a limit on the length of video that UVQ can analyze?

  4. Is there any restriction on the codec or container format of videos analyzed by UVQ, or are all codecs and container formats supported?

  5. UVQ also generates a distortion-related binary file in the "results/Gaming_1080P-0ce6_orig/features" folder, "Gaming_1080P-0ce6_orig_feature_distortion.binary", which "https://github.com/google/uvq" describes as containing "UVQ raw features (25600 float numbers per 1s chunk)". Is this "Gaming_1080P-0ce6_orig_feature_distortion.binary" file just an intermediate output produced on the way to the final "Gaming_1080P-0ce6_orig_label_distortion.csv", or is there a way (or a need) for the user to interpret its contents, perhaps with some tool?
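     For what it's worth, I tried decoding the file as a flat array of little-endian float32 values, 25600 per 1-second chunk. The dtype and byte order are only my guesses, so please correct me if the layout is different. A minimal sketch, with synthetic bytes standing in for the real file:

     ```python
     import struct

     FLOATS_PER_CHUNK = 25600  # per the README: 25600 float numbers per 1s chunk

     def read_chunks(raw, floats_per_chunk=FLOATS_PER_CHUNK):
         """Split raw bytes into per-second chunks of little-endian float32 values."""
         n = len(raw) // 4
         values = struct.unpack(f"<{n}f", raw[: n * 4])
         return [values[i : i + floats_per_chunk] for i in range(0, n, floats_per_chunk)]

     # Synthetic stand-in for Gaming_1080P-0ce6_orig_feature_distortion.binary:
     # 2 chunks of zeros instead of the ~20 chunks the real file would contain.
     fake_raw = struct.pack(
         f"<{2 * FLOATS_PER_CHUNK}f", *([0.0] * (2 * FLOATS_PER_CHUNK))
     )

     chunks = read_chunks(fake_raw)
     print(len(chunks), len(chunks[0]))  # 2 25600
     ```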

  6. I understand UVQ is designed to work well on user-generated content, where no pristine reference exists. Do you think UVQ will also work well on videos from broadcasters, service providers, etc., where pristine references may be available?

  7. Are there any release notes or additional documentation on UVQ beyond what is available at "https://github.com/google/uvq"?
