Activation-aware weight quantization for 4-bit LLM compression with 3x speedup and minimal accuracy loss. Use when deploying large models (7B-70B) on limited GPU memory, when you need faster inference than GPTQ with better accuracy preservation, or for instruction-tuned and multimodal models. MLSys 2024 Best Paper Award winner.
npx skill4agent add davila7/claude-code-templates awq-quantization

# Default (Triton kernels)
pip install autoawq
# With optimized CUDA kernels + Flash Attention
pip install autoawq[kernels]
# Intel CPU/XPU optimization
pip install autoawq[cpu]

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_name = "TheBloke/Mistral-7B-Instruct-v0.2-AWQ"
model = AutoAWQForCausalLM.from_quantized(
    model_name,
    fuse_layers=True  # Enable fused attention for speed
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Generate
inputs = tokenizer("Explain quantum computing", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_path = "mistralai/Mistral-7B-Instruct-v0.2"
# Load model and tokenizer
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
# Quantization config
quant_config = {
    "zero_point": True,    # Use zero-point quantization
    "q_group_size": 128,   # Group size (128 recommended)
    "w_bit": 4,            # 4-bit weights
    "version": "GEMM"      # GEMM for batch, GEMV for single-token
}
# Quantize (uses pileval dataset by default)
model.quantize(tokenizer, quant_config=quant_config)
# Save
model.save_quantized("mistral-7b-awq")
tokenizer.save_pretrained("mistral-7b-awq")

| Feature | AWQ | GPTQ | bitsandbytes |
|---|---|---|---|
| Speedup (4-bit) | ~2.5-3x | ~2x | ~1.5x |
| Accuracy loss | <5% | ~5-10% | ~5-15% |
| Calibration | Minimal (128-1K tokens) | More extensive | None |
| Overfitting risk | Low | Higher | N/A |
| Best for | Production inference | GPU inference | Easy integration |
| vLLM support | Native | Yes | Limited |
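For intuition on why AWQ holds accuracy with so little calibration data: it scales up the small fraction of weight channels that see large activations before rounding to 4 bits, then folds the inverse scale back out, so the most salient weights keep more precision. The sketch below is only an illustration of that idea with a fixed scaling exponent; it is not the AutoAWQ implementation, which searches the scale per layer.

# Illustrative sketch only -- not the AutoAWQ implementation.
import torch

def awq_style_quantize(w: torch.Tensor, act_mean: torch.Tensor,
                       group_size: int = 128, alpha: float = 0.5):
    # w: [out_features, in_features] weight matrix
    # act_mean: [in_features] average |activation| per input channel,
    #           collected from a small calibration set
    # (real AWQ grid-searches alpha per layer; 0.5 is a stand-in here)
    assert w.shape[1] % group_size == 0
    s = act_mean.clamp(min=1e-5) ** alpha
    w_scaled = w * s                                   # scale salient columns up

    out_f, in_f = w_scaled.shape
    wg = w_scaled.view(out_f, in_f // group_size, group_size)
    w_min = wg.amin(dim=-1, keepdim=True)
    w_max = wg.amax(dim=-1, keepdim=True)
    scale = ((w_max - w_min) / 15.0).clamp(min=1e-8)   # 16 levels for 4 bits
    zero = (-w_min / scale).round()
    q = (wg / scale + zero).round().clamp(0, 15)       # zero-point quantization

    # Dequantize and undo the column scaling; at inference the 1/s factor
    # is folded into the preceding op instead of being applied here.
    w_hat = ((q - zero) * scale).view(out_f, in_f) / s
    return q.to(torch.uint8), w_hat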
quant_config = {
    "zero_point": True,
    "q_group_size": 128,
    "w_bit": 4,
    "version": "GEMM"  # Best for batch sizes > 1
}

quant_config = {
"version": "GEMV" # 20% faster for batch_size=1
}from transformers import AwqConfig, AutoModelForCausalLM
config = AwqConfig(
    bits=4,
    version="marlin"  # 2x faster on A100/H100
)
model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Mistral-7B-AWQ",
    quantization_config=config
)

config = AwqConfig(
    bits=4,
    version="exllama"  # Faster prefill, AMD GPU support
)

from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/zephyr-7B-alpha-AWQ",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("TheBloke/zephyr-7B-alpha-AWQ")

from transformers import AwqConfig, AutoModelForCausalLM
config = AwqConfig(
    bits=4,
    fuse_max_seq_len=512,  # Max sequence length for fusing
    do_fuse=True           # Enable fused attention/MLP
)
model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Mistral-7B-OpenOrca-AWQ",
    quantization_config=config
)

from vllm import LLM, SamplingParams
# vLLM auto-detects AWQ models
llm = LLM(
    model="TheBloke/Llama-2-7B-AWQ",
    quantization="awq",
    dtype="half"
)
sampling = SamplingParams(temperature=0.7, max_tokens=200)
outputs = llm.generate(["Explain AI"], sampling)

| Model | FP16 | AWQ 4-bit | Reduction |
|---|---|---|---|
| Mistral 7B | 14 GB | 5.5 GB | 2.5x |
| Llama 2-13B | 26 GB | 10 GB | 2.6x |
| Llama 2-70B | 140 GB | 35 GB | 4x |
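The figures above are total footprints. As a rough weights-only lower bound you can estimate storage yourself: each quantized weight takes 4 bits plus one fp16 scale and zero point per group. The helper below is a back-of-envelope sketch under that assumption; real usage adds KV cache, activations, and any layers kept in fp16.

# Back-of-envelope weight-memory estimate for 4-bit AWQ (weights only).
def awq_weight_gib(n_params: float, group_size: int = 128) -> float:
    packed = n_params * 0.5                  # 4 bits per weight
    overhead = n_params / group_size * 4     # ~fp16 scale + zero point per group
    return (packed + overhead) / 1024**3

print(f"70B weights: ~{awq_weight_gib(70e9):.1f} GiB")  # compare the 70B row above
print(f"7B weights:  ~{awq_weight_gib(7e9):.1f} GiB")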
| Model | Prefill (tok/s) | Decode (tok/s) | Memory |
|---|---|---|---|
| Mistral 7B GEMM | 3,897 | 114 | 5.55 GB |
| TinyLlama 1B GEMV | 5,179 | 431 | 2.10 GB |
| Llama 2-13B GEMM | 2,279 | 74 | 10.28 GB |
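Throughput depends heavily on GPU, driver, batch size, and whether fused layers are enabled, so treat the numbers above as reference points. Below is a minimal sketch for measuring decode speed on your own hardware, assuming `model` and `tokenizer` are the AutoAWQ objects from the quick-start example above.

# Minimal decode-throughput measurement (assumes model/tokenizer from above).
import time
import torch

inputs = tokenizer("Explain quantum computing", return_tensors="pt").to("cuda")

# Warm-up so CUDA kernels and fused layers are initialized first.
model.generate(**inputs, max_new_tokens=16)
torch.cuda.synchronize()

new_tokens = 200
start = time.perf_counter()
out = model.generate(**inputs, max_new_tokens=new_tokens, do_sample=False)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

generated = out.shape[1] - inputs["input_ids"].shape[1]
print(f"{generated / elapsed:.1f} decode tokens/s")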
| Model | FP16 | AWQ 4-bit | Degradation |
|---|---|---|---|
| Llama 3 8B | 8.20 | 8.48 | +3.4% |
| Mistral 7B | 5.25 | 5.42 | +3.2% |
| Qwen2 72B | 4.85 | 4.95 | +2.1% |
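To check your own checkpoint against numbers like these, a standard strided perplexity evaluation over WikiText-2 works. The sketch below assumes a model loaded through the transformers integration shown earlier (AutoModelForCausalLM, so it accepts `labels`), the matching tokenizer, and the usual Hugging Face dataset identifiers.

# Strided perplexity check on WikiText-2 (assumes transformers-loaded model).
import torch
from datasets import load_dataset

data = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
enc = tokenizer("\n\n".join(data["text"]), return_tensors="pt")

max_len = 2048
nlls, n_tokens = [], 0
for begin in range(0, enc.input_ids.shape[1] - 1, max_len):
    ids = enc.input_ids[:, begin:begin + max_len].to(model.device)
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean NLL over this window
    nlls.append(loss.float() * (ids.shape[1] - 1))
    n_tokens += ids.shape[1] - 1

print(f"Perplexity: {torch.exp(torch.stack(nlls).sum() / n_tokens).item():.2f}")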
# Use custom dataset for domain-specific models
model.quantize(
    tokenizer,
    quant_config=quant_config,
    calib_data="wikitext",    # Or custom list of strings
    max_calib_samples=256,    # More samples = better accuracy
    max_calib_seq_len=512     # Sequence length
)

# Or provide your own samples
calib_samples = [
    "Your domain-specific text here...",
    "More examples from your use case...",
]
model.quantize(tokenizer, quant_config=quant_config, calib_data=calib_samples)

model = AutoAWQForCausalLM.from_quantized(
"TheBloke/Llama-2-70B-AWQ",
device_map="auto", # Auto-split across GPUs
max_memory={0: "40GB", 1: "40GB"}
)# Reduce batch size
model.quantize(tokenizer, quant_config=quant_config, max_calib_samples=64)

# Enable fused layers
model = AutoAWQForCausalLM.from_quantized(model_name, fuse_layers=True)

# Use ExLlama backend
config = AwqConfig(bits=4, version="exllama")
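The ExLlama config above only selects the backend; pass it at load time, mirroring the Marlin example earlier. The model name below is just a placeholder AWQ checkpoint.

# Load with the ExLlama kernels (checkpoint name is a placeholder).
from transformers import AutoModelForCausalLM, AwqConfig

config = AwqConfig(bits=4, version="exllama")
model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Mistral-7B-AWQ",   # any AWQ checkpoint
    quantization_config=config,
    device_map="auto"
)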