fix: add theoretical TFlops for H200 GPU #1422
Conversation
Added the theoretical TFLOPS for H200 GPUs, which are equivalent to the H100 80GB HBM3 estimates. Signed-off-by: Robert Clark <[email protected]>
📝 Walkthrough: The pull request adds theoretical TFLOPS benchmark entries for NVIDIA H200 GPUs in bfloat16 and float32 data types.
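As a rough illustration, such entries typically live in a lookup table keyed by device name and dtype. The table name, structure, and helper below are hypothetical (not the repository's actual code), and the H200 values simply mirror the commonly cited H100 SXM peaks, per the PR's statement that the two are equivalent:

```python
# Hypothetical sketch of a theoretical-TFLOPS lookup table; the real
# table in the repository may use different names, keys, and values.
# H200 entries mirror the H100 80GB HBM3 figures, per the PR description.
THEORETICAL_TFLOPS = {
    # (GPU-name substring, dtype) -> dense peak TFLOPS (no sparsity)
    ("H100", "bfloat16"): 989.0,
    ("H100", "float32"): 67.0,
    ("H200", "bfloat16"): 989.0,  # same compute die as H100 SXM
    ("H200", "float32"): 67.0,
}

def lookup_tflops(device_name: str, dtype: str) -> float:
    """Return the theoretical peak TFLOPS for a GPU/dtype pair."""
    for (gpu, dt), tflops in THEORETICAL_TFLOPS.items():
        if gpu in device_name and dt == dtype:
            return tflops
    raise KeyError(f"no theoretical TFLOPS entry for {device_name}/{dtype}")
```

With a table like this, `lookup_tflops("NVIDIA H200", "bfloat16")` resolves via substring match, so full device-name strings (as returned by `torch.cuda.get_device_name()`) work without an exact key.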
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~3 minutes
terrykong left a comment:
@guyueh1 to review
@guyueh1 bump
Closing in favor of #1543, which has some tests.
What does this PR do?
Added the theoretical TFLOPS for H200 GPUs so that processing efficiency can be measured against the hardware's peak.
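A common use of such a theoretical peak is computing model FLOPS utilization (MFU): the fraction of the hardware's peak throughput a workload actually achieves. A minimal sketch, with illustrative numbers and a hypothetical function name (neither is from this repository):

```python
def model_flops_utilization(achieved_tflops: float,
                            theoretical_tflops: float) -> float:
    """Fraction of the theoretical peak actually achieved (MFU)."""
    if theoretical_tflops <= 0:
        raise ValueError("theoretical peak must be positive")
    return achieved_tflops / theoretical_tflops

# Illustrative: 400 achieved bf16 TFLOPS against a 989 TFLOPS peak
mfu = model_flops_utilization(400.0, 989.0)  # roughly 0.40
```

Without an entry for a given GPU in the theoretical-TFLOPS table, this efficiency number cannot be reported for that device, which is what motivates adding the H200 entries.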
Issues
N/A
Usage
N/A
Before your PR is "Ready for review"
Pre checks:
Additional Information
$ python3 -c 'import torch; print(torch.cuda.get_device_name())'
NVIDIA H200