devops/scripts/benchmarks/CONTRIB.md
* Represent a single benchmark, usually mapping to a binary execution.
* Must implement the `Benchmark` base class (`benches/base.py`).
* **Required Methods:**
  * `run(env_vars)`: Executes the benchmark binary (use `self.run_bench()`) and returns a list of `Result` objects. Can be called multiple times and must produce consistent results.
  * `name()`: Returns a unique identifier string for the benchmark across *all* suites. If a benchmark class is instantiated multiple times with different parameters (e.g., "Submit In Order", "Submit Out Of Order"), the `name()` must reflect this uniqueness.
* **Optional Methods:**
  * `setup()`: Initializes the benchmark (e.g., build, download data). Use `self.download()` for data dependencies. **Do not** perform setup in `__init__`.
  * `lower_is_better()`: Returns `True` if lower result values are better (default: `True`).
  * `description()`: Provides a short description of the benchmark.
  * `notes()`: Provides additional commentary about the benchmark results (string).
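The interface above can be sketched as follows. The real base class lives in `benches/base.py`; the stand-in `Benchmark` here only mirrors the method shape so the example is self-contained, and `FooBenchmark` and its return values are hypothetical.

```python
class Benchmark:
    """Stand-in for benches.base.Benchmark (illustrative only)."""

    def setup(self): ...          # optional: build, download data
    def run(self, env_vars):      # required
        raise NotImplementedError
    def name(self):               # required, unique across all suites
        raise NotImplementedError
    def lower_is_better(self):    # optional, defaults to True
        return True


class FooBenchmark(Benchmark):
    """Hypothetical benchmark showing the required/optional split."""

    def run(self, env_vars):
        # In the real suite this would call self.run_bench() and parse
        # its output into Result objects; a plain dict stands in here.
        return [{"label": self.name(), "value": 42.0}]

    def name(self):
        # Must stay unique even if the class is instantiated with
        # different parameters.
        return "foo_benchmark submit in order"

    def setup(self):
        # Build or download data here, never in __init__.
        pass

    def description(self):
        return "Measures submission overhead of Foo in in-order mode."
```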
**Stability:**
* Mark unstable benchmarks with `metadata.unstable` to hide them by default.
## Code Style Guidelines

### Benchmark Class Structure

When creating benchmark classes, follow this consistent structure pattern:

**1. Constructor (`__init__`):**
* Assign all parameters to protected (prefixed with `_`) or private (prefixed with `__`) instance variables.
* Set `self._iterations_regular` and `self._iterations_trace` BEFORE calling `super().__init__()` (required for subclasses of `ComputeBenchmark`).

**2. Method Order:**
* Follow the method order of the abstract base class `Benchmark`. Not all methods are required, but keep this order for consistency.
* Public methods first, then protected, then private.
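A minimal sketch of the constructor pattern, assuming a `ComputeBenchmark`-style base class whose `__init__` reads the iteration counts (the stand-in base and the `SubmitKernel` parameters here are illustrative, not the real classes):

```python
class ComputeBenchmark:
    """Stand-in base: reads iteration counts during __init__, which is
    why subclasses must set them first."""

    def __init__(self):
        assert hasattr(self, "_iterations_regular"), \
            "set iteration counts before calling super().__init__()"
        self.total_iterations = (
            self._iterations_regular + self._iterations_trace
        )


class SubmitKernel(ComputeBenchmark):
    def __init__(self, in_order: bool):
        # 1. Parameters go into protected/private instance variables.
        self._in_order = in_order
        # 2. Iteration counts are set BEFORE super().__init__().
        self._iterations_regular = 1000
        self._iterations_trace = 10
        super().__init__()
```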
### Naming Conventions

**Method Return Values:**
* `name()`: Unique identifier; lowercase with underscores, including all distinguishing parameters.
  * Example: `"api_overhead_benchmark_sycl SubmitKernel in order with measure completion"`
* `display_name()`: User-friendly; uses proper capitalization and commas for readability; used for chart titles.
  * Example: `"SYCL SubmitKernel in order, with measure completion, NumKernels 10"`

**Class method names and variables should follow PEP 8 guidelines:**
* Use lowercase with underscores for method names and variables.
* Use a single underscore prefix for protected variables/methods and a double underscore prefix for private variables/methods.
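A `name()`/`display_name()` pair built from parameters can reproduce the example strings above. The class and its parameter set are hypothetical; only the two example strings come from this guide:

```python
class SubmitKernelNaming:
    """Hypothetical class showing the two naming conventions."""

    def __init__(self, in_order: bool, measure_completion: bool,
                 num_kernels: int):
        # Parameters stored as private (double-underscore) variables.
        self.__in_order = in_order
        self.__measure_completion = measure_completion
        self.__num_kernels = num_kernels

    def name(self):
        # Machine-facing: stable, includes every distinguishing parameter.
        order = "in order" if self.__in_order else "out of order"
        completion = (" with measure completion"
                      if self.__measure_completion else "")
        return f"api_overhead_benchmark_sycl SubmitKernel {order}{completion}"

    def display_name(self):
        # Human-facing: capitalization and commas, used for chart titles.
        order = "in order" if self.__in_order else "out of order"
        completion = (", with measure completion"
                      if self.__measure_completion else "")
        return (f"SYCL SubmitKernel {order}{completion}, "
                f"NumKernels {self.__num_kernels}")
```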
### Description Writing

Descriptions should:
* Clearly state what is being measured
* Include key parameters and their values
* Explain the purpose or what the benchmark tests
* Be 1-3 sentences, clear and concise
* Be omitted if not needed

### Tag Selection

* Use predefined tags from `benches/base.py` when available
* Tags should be lowercase, descriptive, single words
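The two guidelines above can be sketched together. The class, the description text, and the tag names are assumptions for illustration; the real predefined tags live in `benches/base.py`:

```python
class FooBenchmark:
    """Hypothetical benchmark showing description and tag style."""

    def description(self):
        # 1-3 sentences: what is measured, key parameters, purpose.
        return (
            "Measures kernel submission overhead for an in-order queue "
            "with 10 kernels per iteration. Stresses the runtime's "
            "submission path rather than kernel execution."
        )

    def get_tags(self):
        # Lowercase, descriptive, single words; prefer the predefined
        # tags from benches/base.py over inventing new ones.
        return ["submit", "latency"]
```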
## Adding New Benchmarks

1. **Create Benchmark Class:** Implement a new class inheriting from `benches.base.Benchmark`. Implement required methods (`run`, `name`) and optional ones (`description`, `get_tags`, etc.) as needed. Follow the code style guidelines above.
2. **Add to Suite:**
   * If adding to an existing category, modify the corresponding `Suite` class (e.g., `benches/compute.py`) to instantiate and return your new benchmark in its `benchmarks()` method.
   * If creating a new category, create a new `Suite` class inheriting from `benches.base.Suite`. Implement `name()` and `benchmarks()`. Add a `setup()` if the suite requires shared setup, and add group metadata via `additional_metadata()` if needed.
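Step 2 can be sketched as follows, assuming the `Suite` interface this guide describes (`name()` and `benchmarks()`); the suite and benchmark names are illustrative, and the stand-in classes do not inherit the real bases so the example stays self-contained:

```python
class FooBenchmark:
    """Hypothetical benchmark; really inherits benches.base.Benchmark."""

    def name(self):
        return "foo_benchmark"


class FooSuite:
    """Hypothetical suite; really inherits benches.base.Suite."""

    def name(self):
        return "Foo"

    def benchmarks(self):
        # Instantiate and return every benchmark this suite provides;
        # the runner discovers benchmarks through this list.
        return [FooBenchmark()]
```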