
Single video inference fails with cuda #24

@StefanFabian

Description


Thank you for this great tool for comparing video quality!
Unfortunately, I'm running into an issue when using CUDA with the new model version.
With CPU, everything works fine.

$ python3 third_party/uvq/uvq_inference.py --model_version 1.5 --device cuda --output tmp.txt $(pwd)/results/input.mp4 
Traceback (most recent call last):
  File "/path/to/uvq/uvq_inference.py", line 290, in <module>
    main()
  File "/path/to/uvq/uvq_inference.py", line 207, in main
    run_single_inference(args)
  File "/path/to/uvq/uvq_inference.py", line 163, in run_single_inference
    results = uvq_inference.infer(
              ^^^^^^^^^^^^^^^^^^^^
  File "/path/to/uvq/uvq1p5_pytorch/utils/uvq1p5.py", line 145, in infer
    prediction_batch = self.uvq1p5_core(batch)
                       ^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/uvq/uvq1p5_pytorch/utils/uvq1p5.py", line 59, in forward
    content_features = self.content_net(video)
                       ^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/uvq/uvq1p5_pytorch/utils/contentnet.py", line 186, in forward
    feature = self.predict_and_get_features(input_video)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/uvq/uvq1p5_pytorch/utils/contentnet.py", line 167, in predict_and_get_features
    features = self.model(frame)
               ^^^^^^^^^^^^^^^^^
  File "/path/to/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/uvq/uvq1p5_pytorch/utils/contentnet.py", line 135, in forward
    return self.features(x)
           ^^^^^^^^^^^^^^^^
  File "/path/to/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/venv/lib/python3.12/site-packages/torch/nn/modules/container.py", line 244, in forward
    input = module(input)
            ^^^^^^^^^^^^^
  File "/path/to/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/venv/lib/python3.12/site-packages/torch/nn/modules/container.py", line 244, in forward
    input = module(input)
            ^^^^^^^^^^^^^
  File "/path/to/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/uvq/uvq1p5_pytorch/utils/custom_nn_layers.py", line 124, in forward
    return self.conv(x_padded)
           ^^^^^^^^^^^^^^^^^^^
  File "/path/to/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1773, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1784, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/venv/lib/python3.12/site-packages/torch/nn/modules/conv.py", line 548, in forward
    return self._conv_forward(input, self.weight, self.bias)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/path/to/venv/lib/python3.12/site-packages/torch/nn/modules/conv.py", line 543, in _conv_forward
    return F.conv2d(
           ^^^^^^^^^
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor

The same problem occurs with batch processing: the command runs through all files, but the resulting output file is empty.
With CPU, batch processing writes the expected data (input file name and UVQ score) to the output file.
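
For context, this RuntimeError generally means the model weights were moved to the GPU while the input tensor is still on the CPU. The snippet below is only a minimal, standalone sketch (not UVQ code) that reproduces the mismatch and shows the usual fix pattern; based on the traceback, the actual fix would presumably be to move the frame/batch tensor to the model's device inside contentnet.py's predict_and_get_features (assumption on my part, I haven't checked the implementation).

import torch
import torch.nn as nn

# Minimal reproduction of the device mismatch: weights on CUDA, input on CPU.
conv = nn.Conv2d(3, 8, kernel_size=3).to("cuda")
frame = torch.rand(1, 3, 224, 224)  # CPU tensor

try:
    conv(frame)  # fails: torch.FloatTensor input vs torch.cuda.FloatTensor weight
except RuntimeError as e:
    print(e)

# Moving the input to the same device as the weights avoids the error.
out = conv(frame.to(next(conv.parameters()).device))
print(out.shape)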
