What’s New#

New Features (Version 0.2.0)#

  • Quark for PyTorch

  • Quark for ONNX

    • ONNX Quantizer Enhancements:

      • Multiple optimization and refinement strategies for different deployment backends.

      • Supported automatic mixed precision to balance accuracy and performance.

    • Quantization Strategy:

      • Supported symmetric and asymmetric quantization.

      • Supported float scale, INT16 scale and power-of-two scale.

      • Supported static quantization and weight-only quantization.
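To illustrate the difference between the two schemes, here is a minimal NumPy sketch of symmetric and asymmetric INT8 quantization (illustrative only, not Quark's actual implementation):

```python
import numpy as np

def quantize_symmetric(x, num_bits=8):
    # Symmetric: zero point fixed at 0; a single scale maps the range
    # [-max|x|, max|x|] onto [-qmax, qmax].
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def quantize_asymmetric(x, num_bits=8):
    # Asymmetric: the full [min(x), max(x)] range is shifted by a zero
    # point so the whole unsigned grid [0, 2^b - 1] is used.
    qmax = 2 ** num_bits - 1
    scale = (x.max() - x.min()) / qmax
    zero_point = int(round(-x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, 0, qmax).astype(np.uint8)
    return q, scale, zero_point
```

Symmetric quantization keeps the kernel simple (no zero-point arithmetic), while asymmetric quantization uses the grid more efficiently for skewed distributions such as post-ReLU activations.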

    • Quantization Granularity:

      • Supported per-tensor and per-channel granularity.
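The two granularities differ only in how the scale is reduced over the tensor. A minimal sketch (illustrative, not Quark's implementation):

```python
import numpy as np

def per_tensor_scale(w, qmax=127):
    # Per-tensor: one scale for the whole tensor, set by its global outlier.
    return np.abs(w).max() / qmax

def per_channel_scales(w, axis=0, qmax=127):
    # Per-channel: one scale per slice along `axis`, reducing over all
    # remaining axes, so each channel keeps its own dynamic range.
    reduce_axes = tuple(i for i in range(w.ndim) if i != axis)
    return np.abs(w).max(axis=reduce_axes) / qmax
```

With a weight like `[[0.1, -0.2], [4.0, -8.0]]`, the per-tensor scale is dominated by the 8.0 outlier, while per-channel granularity keeps a much finer scale for the first row.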

    • Data Types:

      • Multiple data types are supported, including INT32/UINT32, Float16, Bfloat16, INT16/UINT16, INT8/UINT8 and BFP (block floating point).

    • Calibration Methods:

      • MinMax, Entropy and Percentile for float scale.

      • MinMax for INT16 scale.

      • NonOverflow and MinMSE for power-of-two scale.

    • Custom operations:

      • “BFPFixNeuron” which supports the block floating-point data type.

      • “VitisQuantizeLinear” and “VitisDequantizeLinear” which support INT32/UINT32, Float16, Bfloat16, INT16/UINT16 quantization.

      • “VitisInstanceNormalization” and “VitisLSTM” which have customized Bfloat16 kernels.

      • All custom operations support running on CPU only.

    • Advanced Quantization Algorithms:

      • Supported CLE (Cross-Layer Equalization), BiasCorrection, AdaQuant, AdaRound and SmoothQuant.
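As one example of this family, the core idea of SmoothQuant can be sketched in a few lines (a conceptual NumPy sketch, not Quark's implementation): activation outliers are migrated into the weights with a per-channel scale, leaving the layer's output mathematically unchanged.

```python
import numpy as np

def smooth_scales(act_absmax, w_absmax, alpha=0.5):
    # SmoothQuant-style per-channel scale:
    #   s_j = max|X_j|^alpha / max|W_j|^(1 - alpha)
    # Dividing activations by s and multiplying weights by s preserves
    # X @ W exactly while flattening activation outliers.
    return act_absmax ** alpha / w_absmax ** (1 - alpha)

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
X[:, 0] *= 50.0                      # simulate one outlier channel
W = rng.normal(size=(3, 2))

s = smooth_scales(np.abs(X).max(axis=0), np.abs(W).max(axis=1))
Y_ref = X @ W                        # original output
Y_smooth = (X / s) @ (W * s[:, None])  # equivalent smoothed computation
```

After the transform, both the scaled activations and the scaled weights are easier to quantize than the original outlier-heavy activations, which is why this kind of rewrite helps post-training quantization accuracy.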

    • Operating System Support:

      • Linux and Windows.