Configuring ONNX Quantization

Quantization in AMD Quark for ONNX is configured through Python dataclasses, which are rigorous and help you avoid typos. The QConfig class in quark.onnx.quantization.config.config holds the configuration, as demonstrated in the previous example. Its get_default_config method returns predefined configurations.

The configuration looks like this:

from quark.onnx import QConfig
quant_config = QConfig.get_default_config("xxx")  # "xxx" stands for a default configuration name

We define some default global configurations, including XINT8 and A8W8, which can be used like this:

quant_config = QConfig.get_default_config("A8W8")
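
A default configuration plugs directly into the quantizer. The following is a minimal sketch of the end-to-end flow, reusing the ModelQuantizer API from the customized example later in this section; calib_data_reader is assumed to be a calibration data reader you define for your own dataset:

from quark.onnx import ModelQuantizer, QConfig

# "A8W8": int8 symmetric input tensors and weights (see the lists below).
quant_config = QConfig.get_default_config("A8W8")

quantizer = ModelQuantizer(quant_config)
# calib_data_reader is assumed: a calibration data reader for your own
# dataset (an example reader is sketched at the end of this section).
quantizer.quantize_model("demo.onnx", "demo_quantized.onnx", calib_data_reader)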

More Quantization Default Configurations

AMD Quark for ONNX provides you with default configurations to quickly start model quantization; a usage sketch follows the list below.

  • INT8_CNN_DEFAULT: Performs 8-bit quantization, optimized for CNN models.

  • INT16_CNN_DEFAULT: Performs 16-bit quantization, optimized for CNN models.

  • INT8_TRANSFORMER_DEFAULT: Performs 8-bit quantization, optimized for transformer models.

  • INT16_TRANSFORMER_DEFAULT: Performs 16-bit quantization, optimized for transformer models.

  • INT8_CNN_ACCURATE: Performs 8-bit quantization, optimized for CNN models. Advanced algorithms are applied to achieve higher accuracy at the cost of more time and memory.

  • INT16_CNN_ACCURATE: Performs 16-bit quantization, optimized for CNN models. Advanced algorithms are applied to achieve higher accuracy at the cost of more time and memory.

  • INT8_TRANSFORMER_ACCURATE: Performs 8-bit quantization, optimized for transformer models. Advanced algorithms are applied to achieve higher accuracy at the cost of more time and memory.

  • INT16_TRANSFORMER_ACCURATE: Performs 16-bit quantization, optimized for transformer models. Advanced algorithms are applied to achieve higher accuracy at the cost of more time and memory.
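
For example, moving from a DEFAULT to an ACCURATE configuration of the same bit width only changes the name passed to get_default_config:

from quark.onnx import QConfig

# Fast baseline: 8-bit quantization tuned for CNN models.
quant_config = QConfig.get_default_config("INT8_CNN_DEFAULT")

# Higher accuracy, more time and memory: same bit width, ACCURATE variant.
# quant_config = QConfig.get_default_config("INT8_CNN_ACCURATE")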

AMD Quark for ONNX also provides more advanced default configurations, covering more data types, symmetry schemes, and fine-tuning algorithms; a usage sketch follows the list below.

  • UINT8_DYNAMIC_QUANT: Performs dynamic quantization of input tensors with uint8 weights.

  • XINT8: Performs int8 input tensor and int8 weight quantization, optimized for NPU deployment.

  • XINT8_ADAROUND: Performs int8 input tensor and int8 weight quantization, optimized for NPU deployment. AdaRound fast fine-tuning is applied to preserve accuracy.

  • XINT8_ADAQUANT: Performs int8 input tensor and int8 weight quantization, optimized for NPU deployment. AdaQuant fast fine-tuning is applied to preserve accuracy.

  • VINT8: Performs int8 input tensor and int8 weight quantization, optimized for VAIML deployment.

  • S8S8_AAWS: Performs int8 asymmetric input tensor and int8 symmetric weight quantization.

  • S8S8_AAWS_ADAROUND: Performs int8 asymmetric input tensor and int8 symmetric weight quantization. AdaRound fast fine-tuning is applied to preserve accuracy.

  • S8S8_AAWS_ADAQUANT: Performs int8 asymmetric input tensor and int8 symmetric weight quantization. AdaQuant fast fine-tuning is applied to preserve accuracy.

  • U8S8_AAWS: Performs uint8 asymmetric input tensor and int8 symmetric weight quantization.

  • U8S8_AAWS_ADAROUND: Performs uint8 asymmetric input tensor and int8 symmetric weight quantization. AdaRound fast fine-tuning is applied to preserve accuracy.

  • U8S8_AAWS_ADAQUANT: Performs uint8 asymmetric input tensor and int8 symmetric weight quantization. AdaQuant fast fine-tuning is applied to preserve accuracy.

  • S16S8_ASWS: Performs int16 symmetric input tensor and int8 symmetric weight quantization.

  • S16S8_ASWS_ADAROUND: Performs int16 symmetric input tensor and int8 symmetric weight quantization. AdaRound fast fine-tuning is applied to preserve accuracy.

  • S16S8_ASWS_ADAQUANT: Performs int16 symmetric input tensor and int8 symmetric weight quantization. AdaQuant fast fine-tuning is applied to preserve accuracy.

  • A8W8: Performs int8 symmetric input tensor and int8 symmetric weight quantization, optimized for deployment.

  • A16W8: Performs int16 symmetric input tensor and int8 symmetric weight quantization, optimized for deployment.

  • U16S8_AAWS: Performs uint16 asymmetric input tensor and int8 symmetric weight quantization.

  • U16S8_AAWS_ADAROUND: Performs uint16 asymmetric input tensor and int8 symmetric weight quantization. AdaRound fast fine-tuning is applied to preserve accuracy.

  • U16S8_AAWS_ADAQUANT: Performs uint16 asymmetric input tensor and int8 symmetric weight quantization. AdaQuant fast fine-tuning is applied to preserve accuracy.

  • BF16: Performs BFloat16 input tensor and BFloat16 weight quantization.

  • BFP16: Performs BFP16 input tensor and BFP16 weight quantization.

  • S16S16_MIXED_S8S8: Performs int16 input tensor and int16 weight quantization mixed with int8 precision.
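
These names are passed to get_default_config in exactly the same way. For instance, a sketch that selects uint8 asymmetric input tensors, int8 symmetric weights, and AdaRound fine-tuning:

from quark.onnx import QConfig

# U8S8_AAWS_ADAROUND: uint8 asymmetric input tensors, int8 symmetric
# weights, with AdaRound fast fine-tuning to preserve accuracy.
quant_config = QConfig.get_default_config("U8S8_AAWS_ADAROUND")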

Customized Configurations

Besides the default configurations in AMD Quark for ONNX, you can also customize the quantization configuration, as in the following example:

from quark.onnx import ModelQuantizer, QConfig, QLayerConfig, Int8Spec, Int16Spec, CLEConfig, AdaRoundConfig

input_model_path = "demo.onnx"
quantized_model_path = "demo_quantized.onnx"
calib_data_path = "calib_data"

# Global default: int8 for both input tensors and weights.
int8_config = QLayerConfig(input_tensors=Int8Spec(), weight=Int8Spec())

# Algorithm configurations: cross-layer equalization and AdaRound.
cle_algo = CLEConfig(cle_steps=2)
adaround_algo = AdaRoundConfig(learning_rate=0.1, num_iterations=1000)

# ImageDataReader is a user-defined calibration data reader for the
# calibration dataset (a sketch of one follows this example).
calib_data_reader = ImageDataReader(calib_data_path)

quantization_config = QConfig(
    # Applied to all layers unless overridden below.
    global_config=int8_config,
    # Override the configuration for specific layers, listed by name.
    specific_layer_config={
        QLayerConfig(input_tensors=Int16Spec(), weight=Int16Spec(), bias=Int16Spec(), output_tensors=Int16Spec()): ["/layer.0/Conv_0", "/layer.11/Conv_2"],
    },
    # Per-operator-type overrides: MatMul nodes get the int16
    # configuration, and Gemm is mapped to None.
    layer_type_config={
        QLayerConfig(input_tensors=Int16Spec(), weight=Int16Spec(), bias=Int16Spec(), output_tensors=Int16Spec()): ["MatMul"],
        None: ["Gemm"],
    },
    # Exclude layers by exact name, by regular expression, or by a
    # (start_nodes, end_nodes) subgraph pair.
    exclude=["/layer.2/Conv_1", "^/Conv/.*", (["start_node_1", "start_node_2"], ["end_node_1", "end_node_2"])],
    algo_config=[cle_algo, adaround_algo],
    use_external_data_format=False,
    extra_options={},
)
quantizer = ModelQuantizer(quantization_config)
quantizer.quantize_model(input_model_path, quantized_model_path, calib_data_reader)
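
The ImageDataReader used above is not part of AMD Quark; it stands for a reader you implement for your own calibration data. A minimal sketch, assuming the onnxruntime.quantization.CalibrationDataReader interface (a get_next method that returns one input dict per call and None when the data is exhausted) is accepted here, and that each .npy file in the calibration folder holds one preprocessed input batch:

import os
import numpy as np
from onnxruntime.quantization import CalibrationDataReader

class ImageDataReader(CalibrationDataReader):
    """Feeds preprocessed calibration samples to the quantizer."""

    def __init__(self, calib_data_path: str, input_name: str = "input"):
        # Assumption: each .npy file holds one preprocessed input batch.
        self._files = sorted(
            os.path.join(calib_data_path, f)
            for f in os.listdir(calib_data_path)
            if f.endswith(".npy")
        )
        self._input_name = input_name
        self._index = 0

    def get_next(self):
        # Return None once all calibration batches have been consumed.
        if self._index >= len(self._files):
            return None
        data = np.load(self._files[self._index])
        self._index += 1
        return {self._input_name: data}

The input name must match your model's actual input, and the loading and preprocessing should be adapted to your dataset.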