quark.onnx.finetuning.create_torch.quant_gemm_ops

Module Contents

Classes

class quark.onnx.finetuning.create_torch.quant_gemm_ops.QGemm(transA: int = 0, transB: int = 0, **kwargs: Any)

A wrapper that applies input, weight, and bias quantization to a torch Gemm layer.
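
The transA and transB arguments follow ONNX Gemm semantics, indicating whether the first and second matrices are transposed before multiplication. Below is a minimal construction sketch, assuming only the documented constructor signature; the forward-call line is commented out and purely hypothetical, since the exact input/weight/bias calling convention of QGemm is not documented on this page.

    import torch
    from quark.onnx.finetuning.create_torch.quant_gemm_ops import QGemm

    # Construct the quantized-Gemm wrapper; transB=1 matches the common
    # ONNX export of torch.nn.Linear, where Y = A @ B^T + C.
    qgemm = QGemm(transA=0, transB=1)

    # Example tensors with Gemm-compatible shapes (illustration only).
    a = torch.randn(4, 16)   # input A
    b = torch.randn(8, 16)   # weight B, stored transposed because transB=1
    c = torch.randn(8)       # bias C

    # Hypothetical forward call (the real signature may differ):
    # y = qgemm(a, b, c)     # expected shape (4, 8) under Gemm semantics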
