quark.onnx.finetuning.create_torch.quant_gemm_ops
Module Contents
Classes
- class quark.onnx.finetuning.create_torch.quant_gemm_ops.QGemm(transA: int = 0, transB: int = 0, **kwargs: Any)
A wrapper for a torch layer's input/weight/bias quantization.
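To illustrate what such a wrapper does conceptually, the sketch below fake-quantizes the input, weight, and bias of an ONNX-style Gemm (Y = A' @ B' + C, with A'/B' optionally transposed per transA/transB). This is a minimal pure-Python sketch, not the actual quark.onnx implementation: the `fake_quant`, `qgemm`, and helper names are illustrative assumptions, and the real class is a torch module with learned/calibrated quantization parameters.

```python
# Illustrative sketch only -- NOT the quark.onnx QGemm implementation.
# Shows symmetric int8 fake quantization applied to the input, weight,
# and bias of a Gemm, mirroring what a quantization wrapper conceptually does.

def fake_quant(x, num_bits=8):
    """Symmetric per-tensor fake quantization of a 2-D matrix (list of rows)."""
    qmax = 2 ** (num_bits - 1) - 1
    amax = max(abs(v) for row in x for v in row) or 1.0
    scale = amax / qmax
    # quantize (round + clamp to int range), then dequantize back to float
    return [[max(-qmax - 1, min(qmax, round(v / scale))) * scale for v in row]
            for row in x]

def transpose(x):
    return [list(col) for col in zip(*x)]

def matmul(a, b):
    bt = transpose(b)
    return [[sum(x * y for x, y in zip(row, col)) for col in bt] for row in a]

def qgemm(a, w, bias, trans_a=0, trans_b=0):
    """Gemm whose input, weight, and bias are fake-quantized first."""
    a = fake_quant(a)
    w = fake_quant(w)
    if trans_a:
        a = transpose(a)
    if trans_b:
        w = transpose(w)
    y = matmul(a, w)
    qb = fake_quant([bias])[0]
    return [[v + qb[j] for j, v in enumerate(row)] for row in y]

out = qgemm([[1.0, 2.0]], [[1.0, 0.0], [0.0, 1.0]], [0.5, -0.5])
print(out)  # close to [[1.5, 1.5]], up to int8 quantization error
```

Because the operands are dequantized back to float after rounding, the output stays close to the full-precision Gemm result; the small deviation is exactly the quantization error that fine-tuning against such a wrapper is meant to compensate for.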