This link tells me torch-ort-infer supports OpenVINO:
https://github.com/pytorch/ort#-inference
"ONNX Runtime for PyTorch supports PyTorch model inference using ONNX Runtime and Intel® OpenVINO™.
It is available via the torch-ort-infer python package. This package enables OpenVINO™ Execution Provider for ONNX Runtime by default for accelerating inference on various Intel® CPUs, Intel® integrated GPUs, and Intel® Movidius™ Vision Processing Units - referred to as VPU."
However, when I try to use it, the dependencies pull in torch_ort, which requires CUDA as a prerequisite. This PC has neither an AMD/ATI nor an NVIDIA GPU, only Intel hardware, and I want to run inference on the Intel integrated GPU.
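For context, this is roughly what I am trying to run, following the usage shown in the torch-ort-infer README (resnet50 is just a placeholder model; on my machine the failure happens at install/import time, before any of this executes):

```python
# Minimal sketch following the torch-ort-infer README; resnet50 stands in
# for my actual model. Installing torch-ort-infer drags in torch_ort,
# which expects CUDA, so this never gets past the import on my Intel-only PC.
import torch
import torchvision
from torch_ort import ORTInferenceModule, OpenVINOProviderOptions

model = torchvision.models.resnet50(pretrained=True)
model.eval()

# backend="GPU" should target the Intel integrated GPU via the OpenVINO
# Execution Provider; "CPU" would fall back to the Intel CPU.
provider_options = OpenVINOProviderOptions(backend="GPU", precision="FP16")
model = ORTInferenceModule(model, provider_options=provider_options)

with torch.no_grad():
    output = model(torch.randn(1, 3, 224, 224))
```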
What can I do to avoid the CUDA dependencies completely?