
NERSC adapter: custom_attributes not translated to sbatch flags #50

@osmiumzero

Description


Summary

The PSI/J JobAttributes.custom_attributes field accepts key-value pairs, but the NERSC adapter does not translate them into the corresponding sbatch flags. This prevents users from passing Slurm-specific options such as -C gpu (constraint).

Reproduction

POST /api/v1/compute/job/6d00f875-dfc1-4a41-9309-456c5f2048df
{
    "executable": "/path/to/script.sh",
    "resources": {"node_count": 1, "gpu_cores_per_process": 4},
    "attributes": {
        "queue_name": "regular",
        "account": "m3792_g",
        "duration": 5400,
        "custom_attributes": {"constraint": "gpu"}
    }
}

Expected: Job submitted with sbatch -C gpu
Actual: Constraint not passed to sbatch. The custom_attributes field has no effect.
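For reference, a batch script equivalent to the request above would need roughly these directives (a sketch based on NERSC's documented Perlmutter conventions; the -t and -N lines are inferred from the duration and node_count values in the request, not something the adapter currently emits):

```shell
#!/bin/bash
#SBATCH -A m3792_g
#SBATCH -q regular
#SBATCH -C gpu        # the flag custom_attributes should contribute
#SBATCH -N 1
#SBATCH -t 01:30:00   # duration: 5400 seconds
/path/to/script.sh
```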

Tested key formats

Both key formats were tested; neither produces any sbatch flags:

  • {"constraint": "gpu"} — no effect
  • {"slurm.constraint": "gpu"} — no effect (PSI/J convention for Slurm-specific attributes)
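A fix would presumably iterate over custom_attributes and emit one sbatch flag per entry, stripping the PSI/J scheduler prefix where present. A minimal sketch (the function name and exact flag formatting are assumptions for illustration, not the actual adapter code):

```python
# Hypothetical sketch: translating PSI/J custom_attributes into
# sbatch command-line flags. Not the actual NERSC adapter code.

def translate_custom_attributes(custom_attributes: dict) -> list:
    """Turn {"constraint": "gpu"} into ["--constraint=gpu"]."""
    flags = []
    for key, value in custom_attributes.items():
        # PSI/J convention: "slurm.constraint" targets Slurm specifically;
        # strip the scheduler prefix before building the flag.
        if key.startswith("slurm."):
            key = key[len("slurm."):]
        flags.append(f"--{key}={value}")
    return flags

print(translate_custom_attributes({"slurm.constraint": "gpu"}))
# prints ['--constraint=gpu']
```

With this shape, both key formats from the list above would resolve to the same --constraint=gpu flag.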

Impact

On Perlmutter, GPU jobs require -C gpu when using user-facing QOS names such as regular. Without the constraint, Slurm routes the job to the CPU QOS variant, which has different wall time limits.

This is partially masked by #49 (the API forces the gpu_debug QOS, which has the GPU constraint built into its definition), but it will block any other custom sbatch flags users need to pass.

Note: gpu_cores_per_process in ResourceSpec also does NOT automatically add -C gpu to sbatch. This may be expected behavior, but it is worth noting, since requesting GPU resources without the GPU constraint will cause the job to fail.

Environment

  • NERSC Perlmutter
  • API: https://api.iri.nersc.gov/api/v1
  • Date: 2026-03-03
