
Conversation

@jiridanek
Member

@jiridanek jiridanek commented Nov 25, 2025

https://issues.redhat.com/browse/RHAIENG-2101
https://issues.redhat.com/browse/RHAIENG-2111

Description

How Has This Been Tested?

Self checklist (all need to be checked):

  • Ensure that you have run make test (gmake on macOS) before asking for review
  • Changes to everything except Dockerfile.konflux files should be done in odh/notebooks and automatically synced to rhds/notebooks. For Konflux-specific changes, modify Dockerfile.konflux files directly in rhds/notebooks as these require special attention in the downstream repository and flow to the upcoming RHOAI release.

Merge criteria:

  • The commits are squashed in a cohesive manner and have meaningful messages.
  • Testing instructions have been added in the PR body (for PRs involving changes that are not immediately obvious).
  • The developer has manually tested the changes and verified that they work.

Summary by CodeRabbit

  • Chores
    • Added extra PyPI index settings to improve package resolution during builds.
    • Introduced UV cache controls (UV_NO_CACHE, UV_LINK_MODE) for more predictable installs.
    • Expanded and clarified package selections for additional architectures (ppc64le, s390x) to improve build coverage.
    • Enabled CRB/repo cleanup steps and refined installer flags to make development images more reliable.
    • Minor formatting and comment cleanups for clarity.


@openshift-ci
Contributor

openshift-ci bot commented Nov 25, 2025

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with `/test all`.

@coderabbitai
Contributor

coderabbitai bot commented Nov 25, 2025

Walkthrough

Multiple build and development scripts were updated: devel_env_setup.sh replaces gcc-toolset-13 with standard gcc/g++, expands architecture-specific package lists, and adds CRB handling; several UBI9 Python 3.12 Dockerfiles and runtime scripts add PIP/UV extra-index URLs, adjust `uv pip` invocations and flags, and introduce package-array and architecture-branching refactors.

Changes

Cohort / File(s) Summary
Dev environment script
jupyter/trustyai/ubi9-python-3.12/devel_env_setup.sh
Remove gcc-toolset-13, add upfront gcc/gcc-g++ installation, add UV cache/env controls (UV_NO_CACHE, UV_LINK_MODE), enable CRB for ppc64le/s390x, expand architecture-specific devel packages (zlib, libjpeg, openssl, gcc-gfortran, unixODBC, etc.), add explanatory comments, and trim trailing whitespace on an Apache Arrow git clone.
Codeserver Docker (cpu)
codeserver/ubi9-python-3.12/Dockerfile.cpu
Add env vars PIP_EXTRA_INDEX_URL and UV_EXTRA_INDEX_URL; change uv pip install from offline to online (remove --offline), keep cache-dir; minor comment text fix about debugpy wheel.
Jupyter datascience Docker (cpu)
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu
Treat s390x the same as ppc64le in two build-stage conditionals so both architectures follow the same package/build branches.
Jupyter minimal Docker (cpu)
jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu
Add PIP_EXTRA_INDEX_URL and UV_EXTRA_INDEX_URL env vars in cpu-base stage (RHAIENG-2189 comment).
Jupyter trustyai Docker (cpu)
jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu
Add extra-index URL to several uv pip install commands; switch many installs from offline to online and add stricter flags (--strict, --verify-hashes, --no-deps, etc.); set UV_NO_CACHE=false and UV_LINK_MODE=copy for debugpy install; add contextual comment.
Runtimes datascience Docker (cpu)
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu
Introduce PIP_EXTRA_INDEX_URL and UV_EXTRA_INDEX_URL; refactor package installation to use a PACKAGES array with conditional PACKAGES+= for ppc64le/s390x; update dnf install to use array expansion; fix CMAKE_PREFIX_PATH export to avoid leading/trailing colons.
Runtimes minimal Docker (cpu)
runtimes/minimal/ubi9-python-3.12/Dockerfile.cpu
Add PIP_EXTRA_INDEX_URL and UV_EXTRA_INDEX_URL env vars in cpu-base stage with comment.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20–30 minutes

  • Heterogeneous edits across shell script and multiple Dockerfiles (env var additions, package-array refactor, architecture branching, uv pip flag changes).
  • Areas needing extra attention:
    • Architecture-specific package lists and CRB enablement in devel_env_setup.sh (ppc64le/s390x correctness).
    • uv pip install flag changes and extra-index usage across Dockerfiles (package resolution and caching behavior).
    • PACKAGES array refactor and dnf invocation in runtimes/datascience (word-splitting and array expansion correctness).
    • CMAKE_PREFIX_PATH assembly to ensure no path ordering/formatting regressions.
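The extra-index change flagged above can be sketched as follows; the URL and the pairing of the two variables are illustrative assumptions, since the actual values live in the individual Dockerfiles:

```shell
# Sketch only: pip and uv each read their own extra-index variable, so the
# Dockerfiles set both to keep resolution behavior identical between tools.
export PIP_EXTRA_INDEX_URL="https://pypi.org/simple"
export UV_EXTRA_INDEX_URL="$PIP_EXTRA_INDEX_URL"
echo "extra index: $UV_EXTRA_INDEX_URL"
```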

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
Description check: ⚠️ Warning. The PR description is largely incomplete, with empty placeholder sections and insufficient detail about the changes made. Resolution: add a detailed description of the changes and testing methodology, complete all merge-criteria checkboxes, and describe what was modified and why (e.g., g++ installation, UV caching, s390x support).
✅ Passed checks (2 passed)
Docstring Coverage: ✅ Passed. No functions were found in the changed files to evaluate, so the docstring coverage check was skipped.
Title check: ✅ Passed. The title references RHAIENG-2101 and RHAIENG-2111, indicating build fixes, which aligns with the build-related changes across multiple Dockerfile and shell script files.


@github-actions github-actions bot added the review-requested GitHub Bot creates notification on #pr-review-ai-ide-team slack channel label Nov 25, 2025
@openshift-ci openshift-ci bot added the size/s label Nov 25, 2025
@jiridanek jiridanek marked this pull request as ready for review November 25, 2025 09:51
@openshift-ci openshift-ci bot added size/s and removed size/s labels Nov 25, 2025
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6c5ecb1 and c015f91.

📒 Files selected for processing (1)
  • jupyter/trustyai/ubi9-python-3.12/devel_env_setup.sh (3 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Red Hat Konflux / odh-workbench-jupyter-trustyai-cpu-py312-ubi9-on-pull-request
  • GitHub Check: build (jupyter-trustyai-ubi9-python-3.12, 3.12, linux/amd64, false) / build
🔇 Additional comments (1)
jupyter/trustyai/ubi9-python-3.12/devel_env_setup.sh (1)

132-132: Minor cleanup: trailing whitespace removal.

The trailing space in the git clone command for apache-arrow has been cleanly removed, improving consistency.

Comment on lines +21 to +27
# compiling jpype1==1.5.0 requires g++ and this gets compiled on all platforms
# gcc and g++ is present by default on registry.access.redhat.com/ubi9/python-312:latest
dnf install -y --setopt=keepcache=1 gcc gcc-g++
Contributor

@coderabbitai coderabbitai bot Nov 25, 2025


⚠️ Potential issue | 🔴 Critical

Fix incorrect package name: use gcc-c++ instead of gcc-g++.

The package gcc-g++ does not exist in the UBI9 repository. The correct package name for the C++ compiler is gcc-c++. This will cause the dnf install command to fail.

This inconsistency is evident when comparing with line 41, where the correct gcc-c++ package is used for s390x.

Apply this diff to fix the package name:

-dnf install -y --setopt=keepcache=1 gcc gcc-g++
+dnf install -y --setopt=keepcache=1 gcc gcc-c++
🤖 Prompt for AI Agents
In jupyter/trustyai/ubi9-python-3.12/devel_env_setup.sh around lines 21 to 23,
the dnf install uses the non-existent package name gcc-g++; replace it with the
correct UBI9 package name gcc-c++ so the C++ compiler installs correctly (make
the same change as used on line 41), and ensure the dnf command remains
otherwise unchanged.

Member Author


@coderabbitai think it through again; either works

Contributor


🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if both gcc-c++ and gcc-g++ packages exist in UBI9 repositories
# We'll use dnf to query package information

# First, let's check what packages provide g++ in UBI9
echo "=== Checking gcc-c++ package ==="
dnf info gcc-c++ 2>&1 || echo "gcc-c++ query failed"

echo ""
echo "=== Checking gcc-g++ package ==="
dnf info gcc-g++ 2>&1 || echo "gcc-g++ query failed"

echo ""
echo "=== Checking what provides g++ ==="
dnf provides g++ 2>&1 || echo "g++ provides query failed"

echo ""
echo "=== Searching for gcc related packages ==="
dnf search gcc | grep -E "(gcc-c\+\+|gcc-g\+\+)" || echo "search completed"

Length of output: 402


🌐 Web query:

UBI9 RHEL9 gcc-g++ package dnf install

💡 Result:

Short answer: install the RHEL/UBI package named gcc-c++ (it provides g++):

Commands (run as root inside a UBI9 / RHEL9 environment)

  • dnf install -y gcc-c++
  • Verify: g++ --version

If you want a newer toolset instead of system gcc, use Red Hat's GCC Toolset container or packages (e.g. gcc-toolset-14 / gcc-toolset-13) and enable/run it via scl or the toolset container. See Red Hat docs for the toolset options. [1][2][3]

Sources:

  • UBI9 package repository showing gcc-c++ package. [1]
  • Red Hat docs: GCC Toolset (gcc-toolset-N providing gcc-c++). [2]
  • Red Hat UBI9 description / images. [3]

🌐 Web query:

"gcc-g++" package RHEL dnf

💡 Result:

There is no package named "gcc-g++" on RHEL. The C++ compiler (g++) is provided by the gcc-c++ package (or by the "Development Tools" group). Example commands:

  • Install just g++: sudo dnf install gcc-c++
  • Install full dev toolchain: sudo dnf groupinstall "Development Tools"
  • To find the package that supplies g++ on your system: dnf provides '*bin/g++'

Sources: Red Hat documentation and community guidance. [1][2]

References
[1] Red Hat Developer — Install GCC / Development Tools on RHEL.
[2] Unix StackExchange / community answers showing gcc-c++ provides /usr/bin/g++.


🌐 Web query:

"gcc-g++" alternative package name alias RHEL UBI

💡 Result:

Use the rpm named gcc-c++ (provides the g++ C++ compiler). [1][2]

Sources:

  • UBI AppStream packages index showing gcc-c++ RPM. [1]
  • Red Hat docs on GCC packaging/versions (RHEL uses gcc / gcc-c++). [2]

🌐 Web query:

UBI9 virtual package provides gcc-g++

💡 Result:

Short answer: No — UBI 9 doesn't use a "gcc-g++" virtual package. The C++ compiler is provided by the gcc-c++ (or gcc-c++/gcc‑11… rpm) packages in the UBI 9 AppStream/BaseOS repos (e.g., gcc-c++ and gcc packages are present in the UBI9 repo). [1]

Sources:

  • UBI 9 AppStream package index (shows gcc and gcc-c++ RPMs). [1]

🌐 Web query:

dnf install typo correction "did you mean" package suggestions

💡 Result:

Yes — install the PackageKit command-not-found handler. It provides “did you mean” / offer-to-install suggestions (and with dnf5 can auto-install and rerun the command).

Commands:

  • sudo dnf install PackageKit-command-not-found
  • Log out/in (or source the shell file): source /etc/profile.d/PackageKit.sh

References: Fedora feature/docs for PackageKit command-not-found [1], the PackageKit package page [2], and a writeup showing dnf5 + command‑not‑found behavior [3].


🌐 Web query:

dnf automatic package name correction typo fix install

💡 Result:

If you mean the DNF automatic-updates feature: install the dnf-automatic package (or the dnf5 plugin on systems using dnf5), enable its timer, and edit the config.

Commands:

  • Fedora (dnf):
    sudo dnf install dnf-automatic
    sudo systemctl enable --now dnf-automatic.timer
  • Systems with dnf5 (Fedora newer releases):
    sudo dnf install dnf5-plugin-automatic
    sudo systemctl enable --now dnf-automatic.timer

Edit /etc/dnf/automatic.conf to choose download-only, apply_updates=True, reboot behavior, etc. Check status with:
systemctl status dnf-automatic.timer
systemctl list-timers --all | grep dnf-automatic

Sources: Fedora DNF docs and dnf-automatic package page. [1][2][3]

Sources
[1] Using the DNF software package manager — Fedora Docs. [2] dnf-automatic — Fedora Packages. [3] Automatic Updates — Fedora Docs.


🏁 Script executed:

#!/bin/bash
# Let's check the actual file content to see if gcc-g++ is used elsewhere
# and look at the context around line 41 mentioned in my original review

echo "=== Searching for gcc-c++ usage in the repository ==="
rg -n "gcc-c\+\+" --type sh

echo ""
echo "=== Searching for gcc-g++ usage in the repository ==="
rg -n "gcc-g\+\+" --type sh

echo ""
echo "=== Looking at the context around line 41 in the file ==="
sed -n '35,50p' jupyter/trustyai/ubi9-python-3.12/devel_env_setup.sh

Length of output: 1655


@jiridanek, I've investigated this further and found some evidence that suggests gcc-g++ may not be a valid package name:

Evidence from documentation:

  • Multiple sources confirm that the C++ compiler (g++) is provided by the gcc-c++ package in UBI9/RHEL9
  • No "gcc-g++" virtual package exists in UBI 9

Evidence from this codebase:

  • Line 53 in this same file uses gcc-c++ for the s390x architecture
  • Other files in the repository consistently use gcc-c++:
    • base-images/utils/aipcc.sh:144
    • codeserver/ubi9-python-3.12/devel_env_setup.sh:69
  • Only line 27 (the new code in this PR) uses gcc-g++

This inconsistency within the same file is notable. If gcc-g++ were valid, it would be unusual for line 53 to use a different package name.

Could you verify which package name actually succeeds in your testing environment? If gcc-g++ does work in your tests, I'd be interested to understand the mechanism, as it doesn't appear to be a standard package name in the UBI9 repositories.

That said, I defer to your testing results—if you've confirmed both work, perhaps there's a package alias or DNF behavior I'm not aware of.

Member Author


@coderabbitai create github issue to fix this later

Contributor


🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Create a GitHub issue to track the gcc package naming inconsistency

gh issue create \
  --repo opendatahub-io/notebooks \
  --title "Inconsistent gcc C++ compiler package names in devel_env_setup.sh" \
  --body "## Description

There is an inconsistency in the package names used for installing the C++ compiler in \`jupyter/trustyai/ubi9-python-3.12/devel_env_setup.sh\`:

- Line 23 uses \`gcc-g++\`
- Line 53 uses \`gcc-c++\`

While both package names appear to work in testing, \`gcc-c++\` is the standard package name in UBI9/RHEL9 repositories and is used consistently throughout the rest of the codebase.

## Files affected

- \`jupyter/trustyai/ubi9-python-3.12/devel_env_setup.sh\`

## Context

This inconsistency was identified during review of PR #2719. The issue is deferred for later resolution to avoid blocking the current PR.

## References

- PR: https://github.com/opendatahub-io/notebooks/pull/2719
- Discussion: https://github.com/opendatahub-io/notebooks/pull/2719#discussion_r2559331988
- Requested by: @jiridanek" \
  --assignee jiridanek

Length of output: 55


@jiridanek, I've created issue #2755 to track this inconsistency.

The issue documents the difference between gcc-g++ (line 23) and gcc-c++ (line 53) usage in the file, and includes references back to this PR and discussion.

… left on device" due to AIPCC base disabling `uv` cache

(cherry picked from commit f270c22)
…vice" due to AIPCC base disabling `uv` cache

(cherry picked from commit 0f5b57e)
…her python requirements

```
The headers or library files could not be found for jpeg,
a required dependency when compiling Pillow from source.
Please see the install instructions at:
   https://pillow.readthedocs.io/en/latest/installation/basic-installation.html
  × Failed to build `/tmp/tmp.mHgTyZTiW2/Pillow`
  ├─▶ The build backend returned an error
  ╰─▶ Call to `backend.build_wheel` failed (exit status: 1)
      hint: This usually indicates a problem with the package or the build
      environment.
subprocess exited with status 2
```

```
            Could not find directory of OpenSSL installation, and this `-sys`
          crate cannot
            proceed without this knowledge. If OpenSSL is installed and this crate
          had
            trouble finding it,  you can set the `OPENSSL_DIR` environment
          variable for the
            compilation process.

            Make sure you also have the development packages of openssl installed.
            For example, `libssl-dev` on Ubuntu or `openssl-devel` on Fedora.

            If you're in a situation where you think the directory *should* be
          found
            automatically, please open a bug at
          https://github.com/sfackler/rust-openssl
            and include information about your system as well as this message.
...
        warning: build failed, waiting for other jobs to finish...
        💥 maturin failed
          Caused by: Failed to build a native library through cargo
          Caused by: Cargo build finished with "exit status:
        101": `env -u CARGO PYO3_BUILD_EXTENSION_MODULE="1"
        PYO3_ENVIRONMENT_SIGNATURE="cpython-3.12-64bit"
        PYO3_PYTHON="/root/.cache/uv/builds-v0/.tmpsjsmaI/bin/python"
        PYTHON_SYS_EXECUTABLE="/root/.cache/uv/builds-v0/.tmpsjsmaI/bin/python"
        "cargo" "rustc" "--profile" "release" "--features" "pyo3/abi3-py37"
        "--message-format" "json-render-diagnostics" "--locked" "--manifest-path"
        "/root/.cache/uv/sdists-v9/index/c2c553577285dd6f/cryptography/43.0.3/0S5Vw6G2gpWxT0EJkPki9/src/src/rust/Cargo.toml"
        "--lib"`
        Error: command ['maturin', 'pep517', 'build-wheel', '-i',
        '/root/.cache/uv/builds-v0/.tmpsjsmaI/bin/python', '--compatibility',
        'off'] returned non-zero exit status 1

        hint: This usually indicates a problem with the package or the build
        environment.
  subprocess exited with status 1
```

```
+ uv pip install /wheelsdir/pillow-11.3.0-cp312-cp312-manylinux_2_34_ppc64le.whl /wheelsdir/pyarrow-20.0.0-cp312-cp312-linux_ppc64le.whl /wheelsdir/torch-2.7.1-cp312-cp312-linux_ppc64le.whl accelerate --cache-dir /root/.cache/uv
...
  × No solution found when resolving dependencies:
  ╰─▶ Because numpy==2.3.4 has no wheels with a matching platform tag (e.g.,
      `manylinux_2_34_ppc64le`) and only numpy==2.3.4 is available, we can
      conclude that numpy==2.3.4 cannot be used.
      And because accelerate==1.10.1 depends on numpy==2.3.4 and only the
      following versions of accelerate are available:
          accelerate==1.10.1
          accelerate==1.11.0
      we can conclude that accelerate<1.11.0 cannot be used. (1)
      Because numpy==2.3.4 has no wheels with a matching platform tag (e.g.,
      `manylinux_2_34_ppc64le`) and only numpy==2.3.4 is available, we can
      conclude that numpy==2.3.4 cannot be used.
      And because accelerate==1.11.0 depends on numpy==2.3.4, we can conclude
      that accelerate==1.11.0 cannot be used.
      And because we know from (1) that accelerate<1.11.0 cannot be used, we
      can conclude that all versions of accelerate cannot be used.
      And because you require accelerate, we can conclude that your
      requirements are unsatisfiable.
      hint: Wheels are available for `numpy` (v2.3.4) on the following
      platforms: `linux_aarch64`, `linux_x86_64`
subprocess exited with status 1
```

(cherry picked from commit 6f83dd5)
…on ppc64le it reports missing packages

```
  + uv pip install --offline --cache-dir /root/.cache/uv --extra-index-url https://pypi.org/simple --requirements=./pylock.toml
  warning: The `--pylock` option is experimental and may change without warning. Pass `--preview-features pylock` to disable this warning.
  Using Python 3.12.11 environment at: /opt/app-root
    × Failed to download `wheel==0.45.1`
    ╰─▶ Network connectivity is disabled, but the
        requested data wasn't found in the cache for:
        `https://files.pythonhosted.org/packages/0b/2c/87f3254fd8ffd29e4c02732eee68a83a1d3c346ae39bc6822dcbcb697f2b/wheel-0.45.1-py3-none-any.whl`
  subprocess exited with status 1
```

(cherry picked from commit fc72836)
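The failure above shows why `--offline` only works when every requested wheel is already in the cache. A minimal sketch of the trade-off, using a hypothetical OFFLINE_OK switch that is not part of the actual Dockerfiles:

```shell
# Assemble uv flags so --offline is only used when the cache is known to be
# complete; otherwise fall back to an extra index. OFFLINE_OK is illustrative.
OFFLINE_OK="${OFFLINE_OK:-0}"
UV_FLAGS=(pip install --cache-dir /root/.cache/uv)
if [ "$OFFLINE_OK" = "1" ]; then
  UV_FLAGS+=(--offline)                                  # cache-only resolution
else
  UV_FLAGS+=(--extra-index-url https://pypi.org/simple)  # allow network access
fi
echo "uv ${UV_FLAGS[*]}"
```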
…ined variable

```
++ pwd
+ ARROW_HOME=/root/arrow/dist
+ export ARROW_HOME
++ pwd
+ LD_LIBRARY_PATH=/root/arrow/dist/lib
+ export LD_LIBRARY_PATH
/bin/bash: line 10: CMAKE_PREFIX_PATH: unbound variable
subprocess exited with status 1
```

(cherry picked from commit b2cd7f7)
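The "unbound variable" abort above comes from bash's `set -u` mode, under which expanding an unset variable is fatal. A minimal reproduction and the usual guard, assuming nothing beyond standard bash:

```shell
# Under set -u, a bare $CMAKE_PREFIX_PATH reference aborts the script if the
# variable is unset; the ${VAR:-default} form expands safely instead.
set -u
unset CMAKE_PREFIX_PATH 2>/dev/null || true
# echo "$CMAKE_PREFIX_PATH"                    # would abort: unbound variable
echo "prefix: ${CMAKE_PREFIX_PATH:-<unset>}"   # safe default expansion
```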
… it causes build failure on ppc64le

```
+ uv pip install --offline --cache-dir /root/.cache/uv --requirements=./pylock.toml
warning: The `--pylock` option is experimental and may change without warning. Pass `--preview-features pylock` to disable this warning.
Using Python 3.12.9 environment at: /opt/app-root
  × Failed to download `setuptools==80.9.0`
  ╰─▶ Network connectivity is disabled, but the
      requested data wasn't found in the cache for:
      `https://files.pythonhosted.org/packages/a3/dc/17031897dae0efacfea57dfd3a82fdd2a2aeb58e0ff71b77b87e44edc772/setuptools-80.9.0-py3-none-any.whl`
subprocess exited with status 1
```

(cherry picked from commit e5ee8cb)
@jiridanek jiridanek marked this pull request as ready for review December 4, 2025 13:44
@jiridanek jiridanek changed the title RHAIENG-2111: chore(trustyai): podman build in rhds: error: command 'g++' failed: No such file or directory RHAIENG-2101, RHAIENG-2111: chore(build): fix builds in 3.2 Dec 4, 2025
@openshift-ci openshift-ci bot added size/xxl and removed size/xxl labels Dec 4, 2025
Copy link
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
codeserver/ubi9-python-3.12/Dockerfile.cpu (1)

299-310: Update stale comment on line 302.

The inline comment states "we can ensure wheels are consumed from the cache only by restricting internet access for uv install with '--offline' flag", but line 303 actually removes the --offline flag to enable online access to PyPI. This comment no longer reflects the current behavior and is misleading for future maintainers.

Update the comment to clarify that --offline was removed to enable downloading build dependencies (e.g., cython) from PyPI while still leveraging the cache where possible via UV_LINK_MODE=copy and --cache-dir.

jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu (1)

222-230: Update stale comment on line 225.

The inline comment states "we can ensure wheels are consumed from the cache only by restricting internet access for uv install with '--offline' flag", but line 227 now removes --offline to enable downloading build dependencies from PyPI (as documented in the RHAIENG-2111 comment above). This comment is misleading and should be updated to reflect the new online-with-cache strategy.

♻️ Duplicate comments (1)
jupyter/trustyai/ubi9-python-3.12/devel_env_setup.sh (1)

25-27: Fix incorrect package name: use gcc-c++ instead of gcc-g++.

The package gcc-g++ does not exist in the UBI9 repository. The correct package name for the C++ compiler is gcc-c++. This will cause the dnf install command to fail at build time, which aligns with the PR's reported issue.

This inconsistency is evident when comparing with Line 53, where the correct gcc-c++ package is used for s390x.

Apply this diff to fix the package name:

-dnf install -y --setopt=keepcache=1 gcc gcc-g++
+dnf install -y --setopt=keepcache=1 gcc gcc-c++
🧹 Nitpick comments (1)
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu (1)

47-75: Excellent – package array refactoring improves maintainability.

Converted from string concatenation to bash array (PACKAGES=() and PACKAGES+=(...)) for cleaner architecture-specific package handling. Array expansion syntax ("${PACKAGES[@]}") correctly prevents word splitting. This makes it easier to add/remove packages per architecture in the future without string manipulation errors.
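The pattern described above can be sketched as follows; the package names and hard-coded architecture value are illustrative, not the Dockerfile's exact list:

```shell
#!/usr/bin/env bash
# Base packages installed on every architecture (example names only).
PACKAGES=(perl skopeo)
ARCH="ppc64le"   # the real script derives this from the build platform
if [ "$ARCH" = "ppc64le" ] || [ "$ARCH" = "s390x" ]; then
  # Extra build dependencies for architectures without prebuilt wheels.
  PACKAGES+=(gcc gcc-c++ zlib-devel libjpeg-turbo-devel)
fi
# "${PACKAGES[@]}" expands to one word per element, avoiding the
# word-splitting pitfalls of space-joined strings.
echo "dnf install -y ${PACKAGES[*]}"
```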

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c015f91 and 7e23557.

📒 Files selected for processing (7)
  • codeserver/ubi9-python-3.12/Dockerfile.cpu (2 hunks)
  • jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu (1 hunks)
  • jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu (1 hunks)
  • jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu (2 hunks)
  • jupyter/trustyai/ubi9-python-3.12/devel_env_setup.sh (3 hunks)
  • runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu (3 hunks)
  • runtimes/minimal/ubi9-python-3.12/Dockerfile.cpu (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (27)
  • GitHub Check: build (rocm-jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (runtime-cuda-tensorflow-ubi9-python-3.12, 3.12, linux/arm64, false) / build
  • GitHub Check: build (runtime-datascience-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/arm64, false) / build
  • GitHub Check: build (cuda-jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (jupyter-datascience-ubi9-python-3.12, 3.12, linux/ppc64le, false) / build
  • GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (rocm-runtime-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (runtime-datascience-ubi9-python-3.12, 3.12, linux/s390x, false) / build
  • GitHub Check: build (rocm-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (runtime-cuda-pytorch-llmcompressor-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/arm64, false) / build
  • GitHub Check: build (runtime-minimal-ubi9-python-3.12, 3.12, linux/s390x, false) / build
  • GitHub Check: build (runtime-cuda-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (runtime-cuda-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (jupyter-datascience-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/s390x, false) / build
  • GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/ppc64le, false) / build
  • GitHub Check: build (jupyter-trustyai-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (rocm-runtime-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (runtime-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-minimal-ubi9-python-3.12, 3.12, linux/arm64, false) / build
  • GitHub Check: build (rocm-jupyter-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
  • GitHub Check: build (cuda-jupyter-pytorch-llmcompressor-ubi9-python-3.12, 3.12, linux/amd64, false) / build
🔇 Additional comments (14)
jupyter/trustyai/ubi9-python-3.12/devel_env_setup.sh (4)

21-23: UV cache controls added correctly.

The UV_NO_CACHE=false and UV_LINK_MODE=copy environment variables are appropriate for managing uv's build-time artifact passing between stages.
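A minimal sketch of the two settings, with comments reflecting the rationale given in this review rather than authoritative uv documentation:

```shell
# Values taken from the reviewed script; shown here in isolation.
export UV_NO_CACHE=false   # re-enable the cache that the AIPCC base disables
export UV_LINK_MODE=copy   # copy artifacts out of the cache rather than
                           # hardlinking, which is safer across image layers
echo "UV_NO_CACHE=$UV_NO_CACHE UV_LINK_MODE=$UV_LINK_MODE"
```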


36-43: CRB repository enablement logic is sound.

The conditional handling for codeready-builder/crb repository (subscription-manager with dnf fallback) is correct and necessary for both RHEL-managed and standalone UBI9 environments.


45-69: Architecture-specific package arrays refactored appropriately.

The expanded and reorganized package lists for ppc64le and s390x are logically sound. The separation handles build requirements, Pillow dependencies, and compilation tools correctly per architecture. The added comments (lines 48–49, 63–64) clarify the rationale.


147-147: Minor cleanup: trailing space removed from git clone.

Trailing whitespace removal improves consistency with standard shell formatting practices.

runtimes/minimal/ubi9-python-3.12/Dockerfile.cpu (2)

11-13: LGTM – environment variables for external PyPI index.

The PIP_EXTRA_INDEX_URL and UV_EXTRA_INDEX_URL additions align with the AIPCC migration (RHAIENG-2189) and enable access to external package repositories needed for build dependencies.


42-49: Good – architecture-specific build tools provisioning.

Conditional installation of gcc, g++, make, and related build utilities for s390x/ppc64le addresses the missing compiler issue mentioned in the PR title ("command 'g++' failed: No such file or directory"). This ensures non-amd64 architectures have required build dependencies.

jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu (1)

14-16: LGTM – environment variables for external PyPI index.

Consistent with other images in this PR (RHAIENG-2189 migration phase 1.5). Enables pip and uv to access external package repositories for improved build dependency resolution.

codeserver/ubi9-python-3.12/Dockerfile.cpu (2)

75-77: LGTM – environment variables for external PyPI index.

Consistent with other images in this PR (RHAIENG-2189 migration phase 1.5).


304-304: Good – typo fix.

Corrected "availabe" → "available" in the debugpy note.



jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu (1)

377-377: LGTM – s390x parity with ppc64le for build dependencies.

Extending build-time package installation (zlib-devel, libjpeg-turbo-devel, gcc-gfortran) to s390x alongside ppc64le is appropriate, since both require special handling for packages like pillow, maturin, and scikit-learn that may need compilation.


jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu (2)

45-46: LGTM – RHAIENG-2111 rationale clear.

Comment correctly documents that --offline removal is necessary because pandas build requires cython from PyPI. This explains the core change enabling online access.


227-227: Good – online installation with strict verification.

Removal of --offline enables downloading build dependencies (cython for pandas). Expanded verification flags (--strict, --verify-hashes, --compile-bytecode) improve build reproducibility despite online access.
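A sketch of what such an invocation looks like (the requirements path is assumed, and the flags are as listed above):

```shell
# --strict: validate the environment for inconsistencies after install
# --verify-hashes: check hashes for requirements that pin them
# --compile-bytecode: pre-compile .pyc files at install time
uv pip install --strict --verify-hashes --compile-bytecode -r requirements.txt
```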


runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu (2)

13-15: LGTM – environment variables for external PyPI index.

Consistent with other images in this PR (RHAIENG-2189 migration phase 1.5).


309-310: Good – CMAKE_PREFIX_PATH construction fix.

Refactored from inline string concatenation to an explicit variable assignment, avoiding a stray leading colon when CMAKE_PREFIX_PATH is unset. The ${CMAKE_PREFIX_PATH:+:$CMAKE_PREFIX_PATH} alternate-value expansion now works correctly: if CMAKE_PREFIX_PATH is set, it expands to a colon followed by the existing value; if unset or empty, it expands to nothing, so no dangling colon appears.
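The expansion can be demonstrated in isolation (the /opt path below is a made-up example, not a path from the image):

```shell
# Unset case: the :+ expansion contributes nothing, so no stray colon.
unset CMAKE_PREFIX_PATH
new_prefix="/opt/example/lib64/cmake"
joined="${new_prefix}${CMAKE_PREFIX_PATH:+:$CMAKE_PREFIX_PATH}"
echo "$joined"

# Set case: the :+ expansion yields ":<old value>", giving a colon-separated list.
CMAKE_PREFIX_PATH="/usr/lib64/cmake"
joined="${new_prefix}${CMAKE_PREFIX_PATH:+:$CMAKE_PREFIX_PATH}"
echo "$joined"
```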

@openshift-ci openshift-ci bot added size/xxl and removed size/xxl labels Dec 4, 2025
@jiridanek
Copy link
Member Author

@coderabbitai regarding 25-27: Fix incorrect package name: use gcc-c++ instead of gcc-g++.

they are the same package, there is an alias

@coderabbitai
Copy link
Contributor

coderabbitai bot commented Dec 4, 2025

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🌐 Web query:

RHEL UBI gcc-g++ gcc-c++ package alias

💡 Result:

Short answer: On RHEL / UBI the C++ compiler package is named gcc-c++ (which provides the g++ binary). There is no separate distro-standard package called gcc-g++; use gcc-c++ (and gcc) in UBI images (e.g. ubi9 AppStream shows gcc-c++). [1][2]

References

  • UBI9 AppStream package index (shows gcc and gcc-c++). [1]
  • Red Hat documentation / articles describing system gcc and gcc-toolset (explains gcc / g++ packaging). [2]

🌐 Web query:

dnf install gcc-g++ alias RHEL package

💡 Result:

Use the gcc-c++ package. Example:

sudo dnf install gcc-c++

You can verify what provides g++ with:

dnf repoquery --whatprovides g++
or
dnf provides '*bin/g++'

Sources: Fedora docs (install name), Red Hat / community references. [1][2][3]

References
[1] Fedora Developer — C++ installation (shows sudo dnf install gcc-c++).
[2] Unix & Linux discussion — g++ is provided by the gcc-c++ package (dnf repoquery example).
[3] Red Hat / Developer documentation — installing GCC/GCC toolset on RHEL.


🏁 Script executed:

#!/bin/bash
# Check the actual content of the file mentioned in the comment
cat jupyter/trustyai/ubi9-python-3.12/devel_env_setup.sh | grep -n "gcc" | head -20

Length of output: 523


🏁 Script executed:

#!/bin/bash
# Check if there are any references to gcc-g++ vs gcc-c++ in the codebase
echo "=== Checking for gcc-g++ usage ==="
rg -n "gcc-g\+\+" --type sh

echo ""
echo "=== Checking for gcc-c++ usage ==="
rg -n "gcc-c\+\+" --type sh

Length of output: 770


🏁 Script executed:

#!/bin/bash
# Let's check if gcc-g++ might be documented as a valid alias or provides clause
# Search for any documentation or comments about this in the repo
echo "=== Checking for any documentation about gcc-g++ ==="
rg -C3 "gcc-g\+\+" 

echo ""
echo "=== Checking other Dockerfiles for gcc package patterns ==="
fd -e Dockerfile | xargs grep -l "gcc" | head -5 | xargs grep -n "dnf install.*gcc"

Length of output: 747


@jiridanek Thank you for the clarification! However, I noticed a couple of things:

  1. Documentation inconsistency: The RHEL/UBI documentation and package repositories consistently reference gcc-c++ as the standard C++ compiler package name. I couldn't find any official documentation mentioning gcc-g++ as an alias.

  2. Inconsistency within the same file: In jupyter/trustyai/ubi9-python-3.12/devel_env_setup.sh, line 27 uses gcc-g++, but line 53 (in the ppc64le/s390x section) uses gcc-c++:

    # Line 27
    dnf install -y --setopt=keepcache=1 gcc gcc-g++
    
    # Line 53
    dnf install -y --setopt=keepcache=1 gcc gcc-gfortran gcc-c++ make cmake ninja-build \
  3. Codebase consistency: Other files in the repository consistently use gcc-c++:

    • base-images/utils/aipcc.sh:144
    • codeserver/ubi9-python-3.12/devel_env_setup.sh:69

While gcc-g++ may work if it's indeed an alias, using gcc-c++ would:

  • Align with the documented standard package name
  • Be consistent with line 53 in the same file
  • Match the rest of the codebase

Would you consider using gcc-c++ on line 27 for consistency?

@openshift-ci
Copy link
Contributor

openshift-ci bot commented Dec 4, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: ide-developer
Once this PR has been reviewed and has the lgtm label, please assign caponetto for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@jiridanek
Copy link
Member Author

/kfbuild all

@openshift-ci
Copy link
Contributor

openshift-ci bot commented Dec 4, 2025

@jiridanek: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
ci/prow/notebook-cuda-jupyter-tf-ubi9-python-3-12-pr-image-mirror 7e23557 link true /test notebook-cuda-jupyter-tf-ubi9-python-3-12-pr-image-mirror
ci/prow/notebook-jupyter-tai-ubi9-python-3-12-pr-image-mirror 7e23557 link true /test notebook-jupyter-tai-ubi9-python-3-12-pr-image-mirror
ci/prow/notebook-cuda-jupyter-ubi9-python-3-12-pr-image-mirror 7e23557 link true /test notebook-cuda-jupyter-ubi9-python-3-12-pr-image-mirror
ci/prow/runtime-ubi9-python-3-12-pr-image-mirror 7e23557 link true /test runtime-ubi9-python-3-12-pr-image-mirror
ci/prow/runtime-ds-ubi9-python-3-12-pr-image-mirror 7e23557 link true /test runtime-ds-ubi9-python-3-12-pr-image-mirror
ci/prow/notebook-rocm-jupyter-ubi9-python-3-12-pr-image-mirror 7e23557 link true /test notebook-rocm-jupyter-ubi9-python-3-12-pr-image-mirror
ci/prow/notebook-cuda-jupyter-pt-ubi9-python-3-12-pr-image-mirror 7e23557 link true /test notebook-cuda-jupyter-pt-ubi9-python-3-12-pr-image-mirror
ci/prow/codeserver-ubi9-python-3-12-pr-image-mirror 7e23557 link true /test codeserver-ubi9-python-3-12-pr-image-mirror
ci/prow/notebook-jupyter-ubi9-python-3-12-pr-image-mirror 7e23557 link true /test notebook-jupyter-ubi9-python-3-12-pr-image-mirror
ci/prow/runtime-cuda-pt-ubi9-python-3-12-pr-image-mirror 7e23557 link true /test runtime-cuda-pt-ubi9-python-3-12-pr-image-mirror
ci/prow/rocm-runtime-pt-ubi9-python-3-12-pr-image-mirror 7e23557 link true /test rocm-runtime-pt-ubi9-python-3-12-pr-image-mirror
ci/prow/runtime-cuda-tf-ubi9-python-3-12-pr-image-mirror 7e23557 link true /test runtime-cuda-tf-ubi9-python-3-12-pr-image-mirror
ci/prow/notebook-rocm-jupyter-pt-ubi9-python-3-12-pr-image-mirror 7e23557 link true /test notebook-rocm-jupyter-pt-ubi9-python-3-12-pr-image-mirror
ci/prow/notebook-jupyter-ds-ubi9-python-3-12-pr-image-mirror 7e23557 link true /test notebook-jupyter-ds-ubi9-python-3-12-pr-image-mirror
ci/prow/images 7e23557 link true /test images
ci/prow/rocm-notebooks-py312-e2e-tests 7e23557 link true /test rocm-notebooks-py312-e2e-tests
ci/prow/notebooks-py312-ubi9-e2e-tests 7e23557 link true /test notebooks-py312-ubi9-e2e-tests

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@jiridanek jiridanek merged commit df7f474 into opendatahub-io:main Dec 4, 2025
49 of 92 checks passed
@jiridanek jiridanek deleted the jd_trustybuild branch December 4, 2025 15:45

Labels

lgtm, review-requested (GitHub Bot creates notification on #pr-review-ai-ide-team slack channel), size/xxl
