Commit 137705f

Commit message: fix
Parent: 0ca8c35

File tree: 1 file changed (0 additions, 5 deletions)

src/compressed_tensors/compressors/quantized_compressors/int4_quantized.py

Lines changed: 0 additions & 5 deletions
@@ -126,11 +126,6 @@ def compress_weight(
         :param device: optional device to move compressed output to
         :return: dictionary of compressed weight data
         """
-        if global_scale is not None:
-            raise ValueError(
-                "global_scale is not supported for the PackQuantizationCompressor"
-            )
-
         compressed_dict = {}
         if can_quantize(weight, quantization_args):
             quantized_weight = quantize(
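
The deleted block is the early rejection of a non-None global_scale in compress_weight. Below is a minimal standalone sketch of that contract change; it does not import compressed_tensors, and the names toy_compress_weight and toy_quantize_int4, as well as the idea of folding global_scale into the effective scale, are assumptions made purely for illustration, not the library's actual behavior after this commit.

# Standalone sketch, not the library's code: the helper names and the handling
# of `global_scale` are hypothetical illustrations of the contract change.
from typing import Optional

import torch


def toy_quantize_int4(weight: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Toy symmetric int4 quantization: round(weight / scale), clamped to [-8, 7].
    return torch.clamp(torch.round(weight / scale), -8, 7).to(torch.int8)


def toy_compress_weight(
    weight: torch.Tensor,
    scale: torch.Tensor,
    global_scale: Optional[torch.Tensor] = None,
) -> dict:
    # Before this commit, the packed compressor rejected any global_scale up front:
    #     if global_scale is not None:
    #         raise ValueError(
    #             "global_scale is not supported for the PackQuantizationCompressor"
    #         )
    # The commit deletes that guard, so a global_scale can now reach the
    # quantization path; folding it into the per-tensor scale here is an
    # assumption made only for this sketch.
    effective_scale = scale if global_scale is None else scale * global_scale
    return {"weight_packed": toy_quantize_int4(weight, effective_scale)}


if __name__ == "__main__":
    w = torch.randn(4, 8)
    out = toy_compress_weight(w, scale=torch.tensor(0.05), global_scale=torch.tensor(2.0))
    print(out["weight_packed"].shape, out["weight_packed"].dtype)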
