14 changes: 9 additions & 5 deletions loadgen/README_BUILD.md
@@ -3,18 +3,22 @@
## Prerequisites

sudo apt-get install libglib2.0-dev python-pip python3-pip
pip2 install absl-py numpy
pip3 install absl-py numpy

## Quick Start
### Installation - Python
If you need to clone the repo (e.g., because you are an MLPerf Inference developer), you
can build and install the `mlperf-loadgen` package via:

pip install absl-py numpy
git clone --recurse-submodules https://github.com/mlcommons/inference.git mlperf_inference
cd mlperf_inference/loadgen
CFLAGS="-std=c++14 -O3" python -m pip install .
pip install -e .  # Install in editable mode since you are a developer.

This will fetch the loadgen source, build and install it as a Python module, and run a simple end-to-end demo.
If you don't need to clone the repo (e.g., you just want to install `mlperf-loadgen`
from the latest commit of the `master` branch):

pip install git+https://github.com/mlcommons/inference.git#subdirectory=loadgen

This will fetch the loadgen source, then build and install it as a Python module.

Alternatively, we provide wheels for several Python versions and operating systems that can be installed directly with pip.

2 changes: 2 additions & 0 deletions loadgen/requirements.txt
@@ -1 +1,3 @@
pybind11
absl-py
numpy
57 changes: 34 additions & 23 deletions loadgen/setup.py
@@ -24,12 +24,12 @@
# and binaries. Use one of the gn build targets instead if you want
# to avoid polluting the source tree.

from setuptools import Extension, setup
from pathlib import Path

from pybind11 import get_include
from pybind11.setup_helpers import Pybind11Extension, build_ext
from setuptools import setup
from version_generator import generate_loadgen_version_definitions
import subprocess

generated_version_source_filename = "generated/version_generated.cc"
generate_loadgen_version_definitions(generated_version_source_filename, ".")
@@ -42,7 +42,7 @@
"test_settings.h",
"issue_query_controller.h",
"early_stopping.h",
"query_dispatch_library.h"
"query_dispatch_library.h",
]

lib_headers = [
@@ -54,7 +54,7 @@
"results.h",
"bindings/c_api.h",
"version_generator.py",
"mlperf_conf.h"
"mlperf_conf.h",
]

lib_sources = [
@@ -91,20 +91,29 @@
if len(version_split) < 2:
print("Version is incomplete. Needs a format like 4.1.1 in VERSION file")

# Read requirements from requirements.txt
install_requires = []
requirements_file = this_directory / "requirements.txt"
if requirements_file.exists():
with open(requirements_file, "r") as f:
install_requires = [
line.strip() for line in f if line.strip() and not line.startswith("#")
]


try:
with open("mlperf.conf", 'r') as file:
with open("mlperf.conf", "r") as file:
conf_contents = file.read()

# Escape backslashes and double quotes
conf_contents = conf_contents.replace('\\', '\\\\').replace('"', '\\"')
conf_contents = conf_contents.replace("\\", "\\\\").replace('"', '\\"')

# Convert newlines
conf_contents = conf_contents.replace('\n', '\\n"\n"')
conf_contents = conf_contents.replace("\n", '\\n"\n"')

formatted_content = f'const char* mlperf_conf =\n"{conf_contents}";\n'

with open("mlperf_conf.h", 'w') as header_file:
with open("mlperf_conf.h", "w") as header_file:
header_file.write(formatted_content)

except IOError as e:
@@ -113,24 +122,26 @@
mlperf_loadgen_module = Pybind11Extension(
"mlperf_loadgen",
define_macros=[
("MAJOR_VERSION",
version_split[0]),
("MINOR_VERSION",
version_split[1])
("MAJOR_VERSION", version_split[0]),
("MINOR_VERSION", version_split[1]),
],
include_dirs=[".", get_include()],
sources=mlperf_loadgen_sources,
depends=mlperf_loadgen_headers,
extra_compile_args=["-std=c++14", "-O3"],
)

setup(name="mlcommons_loadgen",
version=version,
description="MLPerf Inference LoadGen python bindings",
url="https://mlcommons.org/",
cmdclass={"build_ext": build_ext},
ext_modules=[mlperf_loadgen_module],
packages=['mlcommons_loadgen'],
package_dir={'mlcommons_loadgen': '.'},
include_package_data=True,
long_description=mlperf_long_description,
long_description_content_type='text/markdown')
setup(
name="mlcommons_loadgen",
version=version,
description="MLPerf Inference LoadGen python bindings",
url="https://mlcommons.org/",
cmdclass={"build_ext": build_ext},
ext_modules=[mlperf_loadgen_module],
packages=["mlcommons_loadgen"],
package_dir={"mlcommons_loadgen": "."},
include_package_data=True,
install_requires=install_requires,
long_description=mlperf_long_description,
long_description_content_type="text/markdown",
)
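The two new pieces of logic in `setup.py` — filtering `requirements.txt` into `install_requires` and embedding `mlperf.conf` into a C header string — can be sketched standalone; the function names below are illustrative, not part of the actual module:

```python
def read_requirements(text: str) -> list[str]:
    """Keep non-empty lines that are not comments, mirroring the
    requirements.txt filtering in setup.py."""
    return [ln.strip() for ln in text.splitlines()
            if ln.strip() and not ln.strip().startswith("#")]

def conf_to_header(conf: str) -> str:
    """Turn a config file's contents into a C string-literal definition.
    Backslashes are escaped before quotes so escapes are not doubled."""
    escaped = conf.replace("\\", "\\\\").replace('"', '\\"')
    # Break each config line into its own adjacent C string literal;
    # the C compiler concatenates adjacent literals back together.
    escaped = escaped.replace("\n", '\\n"\n"')
    return f'const char* mlperf_conf =\n"{escaped}";\n'

print(read_requirements("pybind11\n# comment\n\nnumpy\n"))
# → ['pybind11', 'numpy']
print(conf_to_header('a = "b"\n'))
```

Note that a trailing newline in the config yields a final empty `""` literal in the generated header, which is harmless.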
53 changes: 9 additions & 44 deletions multimodal/vl2l/README.md
@@ -2,20 +2,6 @@

## Quick Start

### Get the source code

Clone the MLPerf Inference repo via:

```bash
git clone --recurse-submodules https://github.com/mlcommons/inference.git mlperf-inference
```

Then enter the repo:

```bash
cd mlperf-inference/
```

### Create a Conda environment

Follow [this link](https://www.anaconda.com/docs/getting-started/miniconda/install#quickstart-install-instructions)
@@ -26,53 +26,32 @@ environment via:
conda create -n mlperf-inf-mm-vl2l python=3.12
```

### Install LoadGen

Update `libstdc++` in the conda environment:
### (Optional) Update `libstdc++` in the Conda environment

```bash
conda install -c conda-forge libstdcxx-ng
```

Install `absl-py` and `numpy`:

```bash
conda install absl-py numpy
```

Build and install LoadGen from source:

```bash
cd loadgen/
CFLAGS="-std=c++14 -O3" python -m pip install .
cd ../
```

Run a quick test to validate that LoadGen was installed correctly:

```bash
python loadgen/demos/token_metrics/py_demo_server.py
```
This is only needed when the `libstdc++` in your local environment is missing or too old.
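One heuristic for checking whether this applies to you (a sketch, assuming a Linux host) is to try loading the C++ runtime that the interpreter sees:

```python
# Heuristic check (Linux-only sketch): if this raises OSError, or compiled
# extensions later fail with a GLIBCXX version complaint, the conda
# `libstdcxx-ng` update above is likely needed.
import ctypes

lib = ctypes.CDLL("libstdc++.so.6")
print("libstdc++ loaded:", lib is not None)
```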

### Install the VL2L benchmarking CLI

For users, install `mlperf-inf-mm-vl2l` with:

```bash
pip install multimodal/vl2l/
pip install git+https://github.com/mlcommons/inference.git#subdirectory=multimodal/vl2l
```

For developers, install `mlperf-inf-mm-vl2l` and the development tools with:

- On Bash
```bash
pip install multimodal/vl2l/[dev]
```
- On Zsh
```zsh
pip install multimodal/vl2l/"[dev]"
```
1. Clone the MLPerf Inference repo.
```bash
git clone --recurse-submodules https://github.com/mlcommons/inference.git mlperf-inference
```
2. Install in editable mode with the development tools.
- Bash: `pip install -e mlperf-inference/multimodal/vl2l/[dev]`
- Zsh: `pip install -e mlperf-inference/multimodal/vl2l/"[dev]"`

After installation, you can check the CLI flags that `mlperf-inf-mm-vl2l` can take with:

```bash
2 changes: 1 addition & 1 deletion multimodal/vl2l/pyproject.toml
@@ -13,7 +13,7 @@ dependencies = [
"datasets",
"loguru",
"matplotlib",
"mlcommons_loadgen",
"mlcommons_loadgen @ git+https://github.com/mlcommons/inference.git#subdirectory=loadgen",
"openai[aiohttp]",
"pydantic",
"pydantic-typer @ git+https://github.com/CentML/pydantic-typer.git@wangshangsam/preserve-full-annotated-type",