
ONNX post-training static quantization #362

Description

@jpata

Previously, in #206, we got PyTorch post-training static quantization to work, but the quantized model was not faster at inference, probably due to missing quantized ops for CPU/GPU in the PyTorch runtime.

However, we are currently using ONNX Runtime for inference, and ONNX Runtime has its own quantization system: https://onnxruntime.ai/docs/performance/model-optimizations/quantization.html

We should also try quantization via ONNX Runtime and see whether the quantized model is faster in CMSSW. A rough sketch of what that could look like is below.
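For reference, a minimal sketch of post-training static quantization with `onnxruntime.quantization.quantize_static`, following the docs linked above. The model file names, the graph input name `"x"`, and the input shape are hypothetical placeholders; the calibration reader uses random data here and would need to be fed real events for meaningful calibration.

```python
# Sketch of ONNX Runtime post-training static quantization.
# Assumes a float32 model "model.onnx" with a single input named "x";
# both are placeholders, not the actual MLPF model layout.
import numpy as np
from onnxruntime.quantization import (
    CalibrationDataReader,
    QuantFormat,
    QuantType,
    quantize_static,
)


class RandomCalibrationReader(CalibrationDataReader):
    """Feeds a few calibration batches; replace random data with real events."""

    def __init__(self, input_name, shape, n_batches=10):
        self._batches = iter(
            [{input_name: np.random.rand(*shape).astype(np.float32)}
             for _ in range(n_batches)]
        )

    def get_next(self):
        # Return one feed dict per call, then None when exhausted.
        return next(self._batches, None)


# Hypothetical input name and shape; use the actual graph input of the model.
reader = RandomCalibrationReader("x", (1, 256, 25))

quantize_static(
    "model.onnx",                     # float32 input model (placeholder name)
    "model.int8.onnx",                # quantized output model (placeholder name)
    calibration_data_reader=reader,
    quant_format=QuantFormat.QDQ,     # QDQ format, the default recommended for CPU EP
    activation_type=QuantType.QInt8,
    weight_type=QuantType.QInt8,
)
```

The quantized model can then be benchmarked against the float32 one with an ordinary `onnxruntime.InferenceSession` to check whether it is actually faster on the target (CMSSW) hardware.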
