
Commit 7d55291

henrylhtsang authored and facebook-github-bot committed
rename is_b200 to is_blackwell (#646)
Summary: NA

Reviewed By: q10, sryap

Differential Revision: D87106508

1 parent: f686d01

File tree: 1 file changed, +5 −3 lines

tritonbench/operators/blackwell_attentions/operator.py (5 additions, 3 deletions)

@@ -120,7 +120,9 @@
     torch.cuda.is_available() and torch.version.cuda and torch.version.cuda >= "12.4"
 )
 
-IS_B200 = is_cuda() and "B200" in get_nvidia_gpu_model()
+IS_BLACKWELL = is_cuda() and (
+    "B200" in get_nvidia_gpu_model() or "B300" in get_nvidia_gpu_model()
+)
 
 
 def parse_op_args(args: List[str]):
@@ -385,7 +387,7 @@ def xformers_splitk(
     )
 
     @register_benchmark(
-        enabled=IS_B200 and _is_sdpa_cudnn_attention_available(),
+        enabled=IS_BLACKWELL and _is_sdpa_cudnn_attention_available(),
         label=f"cudnn-sdpa-{torch.backends.cudnn.version()}",
     )
     def cudnn_sdpa(self, q, k, v):
@@ -398,7 +400,7 @@ def cudnn_sdpa(self, q, k, v):
     )
 
     @register_benchmark(
-        enabled=(IS_B200 and HAS_FLASH_CUTE), label="FAv4", fwd_only=True
+        enabled=(IS_BLACKWELL and HAS_FLASH_CUTE), label="FAv4", fwd_only=True
     )
     def cutedsl_blackwell(
         self, q: torch.Tensor, k: torch.Tensor, v: torch.Tensor
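The renamed flag boils down to a substring check on the reported GPU model, widened so that both B200 and B300 parts count as Blackwell. A minimal, standalone sketch of that detection logic, where `is_cuda` and `get_nvidia_gpu_model` are hypothetical stand-ins for the repo's own helpers (assumed here to wrap the usual `torch.cuda` queries):

```python
import torch


def is_cuda() -> bool:
    # Stand-in for the repo's helper: True when PyTorch sees a CUDA device.
    return torch.cuda.is_available() and torch.version.cuda is not None


def get_nvidia_gpu_model() -> str:
    # Stand-in: device name reported by the CUDA runtime, e.g. "NVIDIA B200".
    return torch.cuda.get_device_name(0) if torch.cuda.is_available() else ""


def is_blackwell_model(name: str) -> bool:
    # After this commit, both B200 and B300 model strings count as Blackwell.
    return "B200" in name or "B300" in name


# Evaluated once at import time and used to gate Blackwell-only benchmarks.
IS_BLACKWELL = is_cuda() and is_blackwell_model(get_nvidia_gpu_model())
```

On a machine without a Blackwell GPU (or without CUDA at all), `IS_BLACKWELL` is `False`, so benchmarks registered with `enabled=IS_BLACKWELL and ...` are simply skipped rather than failing at runtime.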

Comments (0)