easy - add compile flag to configs #634
Changes from all commits
@@ -8,6 +8,7 @@ max_req_tokens: 1024
 max_res_tokens: 2048
 model: "Qwen/Qwen3-8B"
 off_by_n: 1 # Off by one by default
+compile: true # Enable torch.compile for trainer/ref_model, and CUDA graphs for vLLM
Contributor

Why not enable it by default if you're updating all of the configs?

Contributor (Author)

What do you mean by "enabling it by default"? We still need to expose the flag because compile can be tricky in some setups. It also adds a bit of warmup time, so if someone is just quickly testing something, they may want to set it to false.

Contributor

I see. I was suggesting that to reduce the number of hyperparameters in the yaml config. Not a big deal.
 # Observability configuration
 metric_logging:
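As the thread above notes, compile adds warmup time, so for a quick test run you may want to flip the flag off. Below is a minimal sketch of overriding it, assuming the YAML is loaded with OmegaConf (implied by the `${...}` interpolations in this diff); the config filename and loading code are illustrative assumptions, not torchforge's actual entry point.

```python
# Illustrative only: override the new top-level `compile` flag for a quick run.
# "qwen3_8b.yaml" and this loading code are assumptions, not torchforge's CLI.
from omegaconf import OmegaConf

base = OmegaConf.load("qwen3_8b.yaml")                 # hypothetical config file
override = OmegaConf.from_dotlist(["compile=false"])   # e.g. taken from the command line
cfg = OmegaConf.merge(base, override)

print(cfg.compile)                 # False
print(cfg.trainer.compile.enable)  # also False, resolved via ${compile}
```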
@@ -32,7 +33,7 @@ policy:
 model: ${model}
 tensor_parallel_size: 2
 pipeline_parallel_size: 1
-enforce_eager: false
+enforce_eager: ${not:${compile}}
 sampling_params: # https://docs.vllm.ai/en/v0.10.0/api/vllm/sampling_params.html#vllm.sampling_params.SamplingParams
 n: ${group_size}
 max_tokens: ${max_res_tokens}
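Note that `${not:...}` is not one of OmegaConf's built-in resolvers, so the repo presumably registers a custom resolver somewhere. The snippet below is a sketch of that assumption, not torchforge's actual code; the point is that with `compile: true`, `enforce_eager` resolves to `false`, which lets vLLM capture CUDA graphs.

```python
# Sketch of a custom "not" resolver, assuming OmegaConf-style interpolation.
from omegaconf import OmegaConf

OmegaConf.register_new_resolver("not", lambda x: not x)

cfg = OmegaConf.create({
    "compile": True,
    "policy": {"enforce_eager": "${not:${compile}}"},
})
print(cfg.policy.enforce_eager)  # False: eager mode off, CUDA graphs enabled in vLLM
```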
@@ -59,7 +60,7 @@ trainer:
 dtype: bfloat16
 gc_freq: 1
 compile:
-enable: false
+enable: ${compile}
 parallelism:
 data_parallel_replicate_degree: 1
 data_parallel_shard_degree: -1
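On the trainer/ref_model side, a boolean `compile.enable` flag typically just gates a `torch.compile` call on the model. A minimal sketch of that pattern (illustrative, not torchforge's trainer code):

```python
# Illustrative gating of torch.compile behind a config flag.
import torch
import torch.nn as nn

def maybe_compile(model: nn.Module, enable: bool) -> nn.Module:
    # torch.compile trades extra warmup/compilation time on the first steps
    # for faster steady-state steps, the trade-off discussed in the thread above.
    return torch.compile(model) if enable else model

model = maybe_compile(nn.Linear(8, 8), enable=True)
```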
@@ -100,7 +101,7 @@ ref_model:
 dtype: bfloat16
 gc_freq: 1
 compile:
-enable: false
+enable: ${compile}
 parallelism:
 data_parallel_replicate_degree: 1
 data_parallel_shard_degree: 1
Shouldn't this file have been removed?

It's there: https://github.com/meta-pytorch/torchforge/blob/main/.meta/mast/qwen3_4b_mast.yaml
But I can delete it in this PR if you want.

Oof, why are there so many configs? Yes, I missed it in https://github.com/meta-pytorch/torchforge/pull/632/files. Please just remove it.