Conversation
Author (Collaborator): /ok to test 49e8426
Author (Collaborator): /ok to test 136be26
Author (Collaborator): /ok to test cad830b
Author (Collaborator): /ok to test 6ca0891

Force-pushed from 48ea1e1 to 160a754

Collaborator: /ok to test 160a754
Two self-contained recipes following existing Llama3/ESM2 recipe conventions:

- bionemo-recipes/recipes/mixtral_native_te/: TE-accelerated Mixtral FSDP2 training with a Lingua-style DCLM Baseline 1.0 pre-training config for Mixtral-8x1B and 8x7B. Includes DDP and FSDP2 entry points.
- bionemo-recipes/recipes/opengenome2_mixtral_native_te/: TE Mixtral for autoregressive DNA on OpenGenome2 metagenomes, mirroring opengenome2_llama_native_te (THD packing, genomic label masking, validation, nucleotide tokenizer packaged with the recipe).

Key design decisions:

- Self-contained KISS: fused MoE kernels (fused_a2a, fused_token_router, fused_indices_converter), collator, checkpoint, and perf logger are duplicated across both recipes rather than shared, matching repo convention.
- Configurable expert parallelism via all-to-all token dispatch; expert_parallel_size=1 by default for parity with the Llama3 recipe.
- MXFP8 alignment: pad the post-all-to-all MoE expert input to a multiple of 32 before GroupedLinear (attribute the padding to the last expert so m_splits sums correctly; slice the padding off the output). No-op for non-MXFP8 and already-aligned batches. Verified on 8x B300 SXM6 with Mixtral-8x7B EP=8 at SEQ=8192: FP8 1.196 s/step, MXFP8 1.248 s/step.
- FSDP2 checkpointing uses DCP format (.distcp files), covered by dedicated distributed checkpointing tests.
- CI-robust tests: a session-scoped local WordLevel tokenizer fixture avoids the HuggingFace Hub dependency; expanded train coverage (7 single-GPU and 4 two-GPU tests per recipe) plus dataset and distributed checkpoint tests.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Signed-off-by: Timur Rvachov <trvachov@nvidia.com>
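The MXFP8 alignment decision above can be sketched in a few lines. This is a minimal illustration, not the recipe's actual code; the helper name `pad_for_mxfp8` and the return convention are hypothetical:

```python
import torch

MXFP8_ALIGN = 32  # GroupedLinear under MXFP8 wants row counts divisible by 32


def pad_for_mxfp8(expert_input: torch.Tensor, m_splits: list[int]):
    """Pad the post-all-to-all expert input so its row count is a multiple
    of MXFP8_ALIGN, attributing the padding rows to the last expert so that
    sum(m_splits) still equals the padded row count."""
    total = expert_input.shape[0]
    pad = (-total) % MXFP8_ALIGN  # 0 when already aligned -> no-op
    if pad == 0:
        return expert_input, m_splits, 0
    padding = expert_input.new_zeros(pad, expert_input.shape[1])
    padded = torch.cat([expert_input, padding], dim=0)
    new_splits = m_splits[:-1] + [m_splits[-1] + pad]
    return padded, new_splits, pad


# After GroupedLinear, slice the padding rows back off the output:
#   out = out[: out.shape[0] - pad]
```

Attributing the padding to the last expert keeps the `m_splits` invariant (`sum(m_splits) == rows`) without perturbing any other expert's slice.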
Force-pushed from 160a754 to 63f58d3
AsherBond pushed a commit to Distillative-AI/bionemo-framework that referenced this pull request on Apr 21, 2026:
…VIDIA#1556)

## Problem

The gitleaks pre-commit hook is silently passing in CI, even when secrets are present. See NVIDIA#1551, which includes a hardcoded `WANDB_API_KEY` that gitleaks did not flag.

**Root cause:** The default gitleaks hook entry is:

```
gitleaks git --pre-commit --redact --staged --verbose
```

This scans **staged git changes**, which works during an actual `git commit`. But in CI, `static_checks.sh` runs:

```
pre-commit run --all-files
```

With `--all-files`, there are no staged files and no commit context, so gitleaks scans **0 commits** and reports "no leaks found":

```
7:02PM INF 0 commits scanned.
7:02PM INF scanned ~0 bytes (0) in 28.9ms
7:02PM INF no leaks found
```

## Fix

Override the hook entry to use `gitleaks dir --redact --verbose`, which scans **file contents** directly. This works correctly both:

- Locally during `git commit` (pre-commit hook)
- In CI with `pre-commit run --all-files`

## Testing

After this change, running `pre-commit run gitleaks --all-files` on the repo scans actual file contents instead of 0 commits.

Signed-off-by: svc-bionemo <267129667+svc-bionemo@users.noreply.github.com>
Signed-off-by: Peter St. John <pstjohn@nvidia.com>
Co-authored-by: svc-bionemo <267129667+svc-bionemo@users.noreply.github.com>
Co-authored-by: Peter St. John <pstjohn@nvidia.com>
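A minimal sketch of what that hook override might look like in `.pre-commit-config.yaml`. The `rev` pin and surrounding layout here are assumptions for illustration, not taken from the actual repo; only the `entry` line reflects the fix described above:

```yaml
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.0  # hypothetical pin; use the repo's actual rev
    hooks:
      - id: gitleaks
        # The default entry scans staged commits, which sees 0 commits
        # under `pre-commit run --all-files`; scan file contents instead.
        entry: gitleaks dir --redact --verbose
```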
Description
Adds two new self-contained training recipes for Mixture-of-Experts (MoE) models, following the existing Llama3/ESM2 recipe patterns. Out of scope: a complete README with benchmarks; that is in progress for an upcoming PR.
New recipes:

- bionemo-recipes/recipes/mixtral_native_te/: TE-accelerated Mixtral FSDP2 training with a Lingua-style DCLM Baseline 1.0 pre-training config for Mixtral-8x1B and 8x7B. Includes DDP and FSDP2 entry points.
- bionemo-recipes/recipes/opengenome2_mixtral_native_te/: TE Mixtral for autoregressive DNA on OpenGenome2 metagenomes, mirroring opengenome2_llama_native_te (THD packing, genomic label masking, validation, nucleotide tokenizer packaged with the recipe).
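The genomic label masking used by the OpenGenome2 recipe can be illustrated with a small sketch. This is a hypothetical helper, assuming the HuggingFace-style convention of `-100` ignore labels and that padding/special tokens should not contribute to the loss; the real recipe's implementation may differ:

```python
IGNORE_INDEX = -100  # ignored by torch.nn.CrossEntropyLoss by default


def mask_labels(input_ids: list[int], special_token_ids: set[int]) -> list[int]:
    """Build causal-LM labels for a DNA sequence, masking special tokens
    (pad/BOS/EOS) so that only nucleotide positions contribute to the loss."""
    return [
        IGNORE_INDEX if tok in special_token_ids else tok
        for tok in input_ids
    ]


# Toy vocab: 0 = pad, 1 = BOS, 4..7 = A/C/G/T
labels = mask_labels([1, 4, 5, 6, 7, 0, 0], {0, 1})
# -> [-100, 4, 5, 6, 7, -100, -100]
```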
Key design decisions:

- Self-contained KISS: fused MoE kernels, collator, checkpoint, and perf logger are duplicated across both recipes rather than shared, matching repo convention.
- Configurable expert parallelism via all-to-all token dispatch; expert_parallel_size=1 by default for parity with the Llama3 recipe.
- MXFP8 alignment: pad the post-all-to-all MoE expert input to a multiple of 32 before GroupedLinear. Verified on 8x B300 with Mixtral-8x7B EP=8 @ SEQ=8192.
- FSDP2 checkpointing uses DCP format (.distcp files).
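The all-to-all expert dispatch starts from a routing step; a minimal single-process sketch of top-k gating is below. This is a pure illustration, not the recipe's fused_token_router; the function name and return convention are hypothetical:

```python
import torch


def route_tokens(hidden: torch.Tensor, router_weight: torch.Tensor, top_k: int = 2):
    """Top-k softmax gating: returns per-token expert indices, gate
    probabilities, and per-expert token counts (the m_splits that a grouped
    GEMM consumes after the all-to-all dispatch)."""
    logits = hidden @ router_weight                 # [tokens, num_experts]
    probs = torch.softmax(logits, dim=-1)
    gates, expert_idx = probs.topk(top_k, dim=-1)   # [tokens, top_k] each
    num_experts = router_weight.shape[1]
    m_splits = torch.bincount(expert_idx.flatten(), minlength=num_experts)
    return expert_idx, gates, m_splits.tolist()
```

With expert parallelism, each rank would then send its tokens to the ranks owning their assigned experts via all-to-all, run the expert MLPs, and all-to-all the results back.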
Usage
On Blackwell, MXFP8 training, Mixtral-8x7B:
```bash
torchrun --standalone --nproc_per_node=8 train_fsdp2.py \
  --config-name defaults \
  `# --- Model ---` \
  config_name_or_path=./model_configs/mixtral-8x7B \
  +config_kwargs.attn_input_format=thd \
  +config_kwargs.self_attn_mask_type=padding_causal \
  +config_kwargs.max_position_embeddings=8192 \
  `# --- Precision: MXFP8 block scaling ---` \
  fp8_config.enabled=true \
  fp8_config.fp8_recipe=transformer_engine.common.recipe.MXFP8BlockScaling \
  `# --- Parallelism: pure EP=8 across the 8 ranks ---` \
  expert_parallel_size=8 \
  token_dispatcher=alltoall \
  `# --- THD sequence packing (max throughput on variable-length data) ---` \
  use_sequence_packing=true \
  use_meta_device=true \
  `# --- Data ---` \
  dataset.tokenizer_name_or_path=/path/to/tokenizer \
  ~dataset.micro_batch_size \
  +dataset.token_micro_batch_size=16384 \
  dataset.max_seq_length=8192 \
  dataset.stride=64 \
  dataset.pad_sequences_to_be_divisible_by=32 \
  dataset.load_dataset_kwargs.path=parquet \
  +dataset.load_dataset_kwargs.data_files=your_data.parquet \
  `# --- Training loop ---` \
  num_train_steps=500 \
  logger.frequency=5 \
  `# --- WandB ---` \
  wandb.project=mixtral-benchmark-sweep \
  wandb.name=te-mxfp8-thd-max \
  `# --- Checkpointing disabled for benchmarks ---` \
  checkpoint.ckpt_dir=null \
  checkpoint.save_final_model=false
```

Type of changes
CI Pipeline Configuration
Configure CI behavior by applying the relevant labels. By default, only basic unit tests are run.
Unit tests marked as `@pytest.mark.multi_gpu` or `@pytest.mark.distributed` are not run in the PR pipeline. For more details, see CONTRIBUTING.
Note
By default, only basic unit tests are run. Add appropriate labels to enable additional test coverage.
Authorizing CI Runs
We use copy-pr-bot to manage authorization of CI runs on NVIDIA's compute resources. Once a pull request is authorized, its commits will automatically be copied to a `pull-request/` prefixed branch in the source repository (e.g. `pull-request/123`). An authorized user must leave an `/ok to test` comment on the pull request to trigger CI; this must be done for each new commit.

Triggering CodeRabbit AI Review
To trigger a code review from CodeRabbit, comment on the pull request with one of its review commands; see https://docs.coderabbit.ai/reference/review-commands for the full list.
Pre-submit Checklist