quark.onnx.finetuning.create_torch.quant_matmul_ops
Module Contents

Classes
- class quark.onnx.finetuning.create_torch.quant_matmul_ops.QMatMul(**kwargs: Any)
  A wrapper for a torch layer's input/weight/bias quantization.
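The entry above documents only the class signature and a one-line description. As a rough illustration of what an input/weight-quantizing MatMul wrapper can look like in PyTorch, here is a minimal sketch; the `fake_quantize` helper, its scale and zero-point defaults, and the `QMatMulSketch` name are assumptions for illustration and not part of the quark.onnx API.

```python
# Minimal, hypothetical sketch of a quantized MatMul wrapper in the spirit of
# QMatMul. The fake-quantization helper and its parameters are illustrative
# assumptions, not the actual quark.onnx implementation.
from typing import Any

import torch
import torch.nn as nn


def fake_quantize(x: torch.Tensor, scale: float = 0.1, zero_point: int = 0,
                  qmin: int = -128, qmax: int = 127) -> torch.Tensor:
    """Quantize then dequantize a tensor so downstream math sees quantization error."""
    q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax)
    return (q - zero_point) * scale


class QMatMulSketch(nn.Module):
    """Wraps a matmul so its input and weight pass through fake quantization."""

    def __init__(self, weight: torch.Tensor, **kwargs: Any) -> None:
        super().__init__()
        self.weight = nn.Parameter(weight)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x_q = fake_quantize(x)            # quantize the activation input
        w_q = fake_quantize(self.weight)  # quantize the weight
        return torch.matmul(x_q, w_q)


# Usage: run the forward pass through the quantized wrapper, e.g. during fine-tuning.
layer = QMatMulSketch(torch.randn(16, 8))
out = layer(torch.randn(4, 16))
```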