RHAIENG-2101, RHAIENG-2111: chore(build): fix builds in 3.2 #2719
Conversation
Skipping CI for Draft Pull Request.
Walkthrough

Multiple build and development scripts were updated: devel_env_setup.sh replaces gcc-toolset-13 with the standard gcc/g++ packages and expands architecture-specific package lists and CRB handling; several UBI9 Python 3.12 Dockerfiles and runtime scripts add PIP/UV extra-index URLs, adjust uv pip invocations and flags, and introduce package-array and architecture-branching refactors.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20–30 minutes
Pre-merge checks: ❌ 1 warning, ✅ 2 passed.
Actionable comments posted: 1
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- jupyter/trustyai/ubi9-python-3.12/devel_env_setup.sh (3 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Red Hat Konflux / odh-workbench-jupyter-trustyai-cpu-py312-ubi9-on-pull-request
- GitHub Check: build (jupyter-trustyai-ubi9-python-3.12, 3.12, linux/amd64, false) / build
🔇 Additional comments (1)
jupyter/trustyai/ubi9-python-3.12/devel_env_setup.sh (1)
132-132: Minor cleanup: trailing whitespace removal.

The trailing space in the git clone command for apache-arrow has been cleanly removed, improving consistency.
# compiling jpype1==1.5.0 requires g++ and this gets compiled on all platforms
# gcc and g++ is present by default on registry.access.redhat.com/ubi9/python-312:latest
dnf install -y --setopt=keepcache=1 gcc gcc-g++
Fix incorrect package name: use gcc-c++ instead of gcc-g++.
The package gcc-g++ does not exist in the UBI9 repository. The correct package name for the C++ compiler is gcc-c++. This will cause the dnf install command to fail.
This inconsistency is evident when comparing with line 41, where the correct gcc-c++ package is used for s390x.
Apply this diff to fix the package name:
-dnf install -y --setopt=keepcache=1 gcc gcc-g++
+dnf install -y --setopt=keepcache=1 gcc gcc-c++
@coderabbitai think it through again; either works
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Check if both gcc-c++ and gcc-g++ packages exist in UBI9 repositories
# We'll use dnf to query package information
# First, let's check what packages provide g++ in UBI9
echo "=== Checking gcc-c++ package ==="
dnf info gcc-c++ 2>&1 || echo "gcc-c++ query failed"
echo ""
echo "=== Checking gcc-g++ package ==="
dnf info gcc-g++ 2>&1 || echo "gcc-g++ query failed"
echo ""
echo "=== Checking what provides g++ ==="
dnf provides g++ 2>&1 || echo "g++ provides query failed"
echo ""
echo "=== Searching for gcc related packages ==="
dnf search gcc | grep -E "(gcc-c\+\+|gcc-g\+\+)" || echo "search completed"
```
Length of output: 402
🌐 Web query:
UBI9 RHEL9 gcc-g++ package dnf install
💡 Result:
Short answer: install the RHEL/UBI package named gcc-c++ (it provides g++):
Commands (run as root inside a UBI9 / RHEL9 environment)
- dnf install -y gcc-c++
- Verify: g++ --version
If you want a newer toolset instead of system gcc, use Red Hat's GCC Toolset container or packages (e.g. gcc-toolset-14 / gcc-toolset-13) and enable/run it via scl or the toolset container. See Red Hat docs for the toolset options. [1][2][3]
Sources:
- UBI9 package repository showing gcc-c++ package. [1]
- Red Hat docs: GCC Toolset (gcc-toolset-N providing gcc-c++). [2]
- Red Hat UBI9 description / images. [3]
🌐 Web query:
"gcc-g++" package RHEL dnf
💡 Result:
There is no package named "gcc-g++" on RHEL. The C++ compiler (g++) is provided by the gcc-c++ package (or by the "Development Tools" group). Example commands:
- Install just g++: sudo dnf install gcc-c++
- Install full dev toolchain: sudo dnf groupinstall "Development Tools"
- To find the package that supplies g++ on your system: dnf provides '*bin/g++'
Sources: Red Hat documentation and community guidance. [1][2]
References
[1] Red Hat Developer — Install GCC / Development Tools on RHEL.
[2] Unix StackExchange / community answers showing gcc-c++ provides /usr/bin/g++.
🌐 Web query:
"gcc-g++" alternative package name alias RHEL UBI
💡 Result:
Use the rpm named gcc-c++ (provides the g++ C++ compiler). [1][2]
Sources:
- UBI AppStream packages index showing gcc-c++ RPM. [1]
- Red Hat docs on GCC packaging/versions (RHEL uses gcc / gcc-c++). [2]
🌐 Web query:
UBI9 virtual package provides gcc-g++
💡 Result:
Short answer: No — UBI 9 doesn't use a "gcc-g++" virtual package. The C++ compiler is provided by the gcc-c++ (or gcc-c++/gcc‑11… rpm) packages in the UBI 9 AppStream/BaseOS repos (e.g., gcc-c++ and gcc packages are present in the UBI9 repo). [1]
Sources:
- UBI 9 AppStream package index (shows gcc and gcc-c++ RPMs). [1]
🌐 Web query:
dnf install typo correction "did you mean" package suggestions
💡 Result:
Yes — install the PackageKit command-not-found handler. It provides “did you mean” / offer-to-install suggestions (and with dnf5 can auto-install and rerun the command).
Commands:
- sudo dnf install PackageKit-command-not-found
- Log out/in (or source the shell file): source /etc/profile.d/PackageKit.sh
References: Fedora feature/docs for PackageKit command-not-found [1], the PackageKit package page [2], and a writeup showing dnf5 + command‑not‑found behavior [3].
🌐 Web query:
dnf automatic package name correction typo fix install
💡 Result:
If you mean the DNF automatic-updates feature: install the dnf-automatic package (or the dnf5 plugin on systems using dnf5), enable its timer, and edit the config.
Commands:
- Fedora (dnf):
  sudo dnf install dnf-automatic
  sudo systemctl enable --now dnf-automatic.timer
- Systems with dnf5 (Fedora newer releases):
  sudo dnf install dnf5-plugin-automatic
  sudo systemctl enable --now dnf-automatic.timer
Edit /etc/dnf/automatic.conf to choose download-only, apply_updates=True, reboot behavior, etc. Check status with:
systemctl status dnf-automatic.timer
systemctl list-timers --all | grep dnf-automatic
Sources: Fedora DNF docs and dnf-automatic package page. [1][2][3]
Sources
[1] Using the DNF software package manager — Fedora Docs. [2] dnf-automatic — Fedora Packages. [3] Automatic Updates — Fedora Docs.
🏁 Script executed:
```shell
#!/bin/bash
# Let's check the actual file content to see if gcc-g++ is used elsewhere
# and look at the context around line 41 mentioned in my original review
echo "=== Searching for gcc-c++ usage in the repository ==="
rg -n "gcc-c\+\+" --type sh
echo ""
echo "=== Searching for gcc-g++ usage in the repository ==="
rg -n "gcc-g\+\+" --type sh
echo ""
echo "=== Looking at the context around line 41 in the file ==="
sed -n '35,50p' jupyter/trustyai/ubi9-python-3.12/devel_env_setup.sh
```
Length of output: 1655
@jiridanek, I've investigated this further and found some evidence that suggests gcc-g++ may not be a valid package name:
Evidence from documentation:
- Multiple sources confirm that the C++ compiler (g++) is provided by the gcc-c++ package in UBI9/RHEL9
- No "gcc-g++" virtual package exists in UBI 9
Evidence from this codebase:
- Line 53 in this same file uses gcc-c++ for the s390x architecture
- Other files in the repository consistently use gcc-c++: base-images/utils/aipcc.sh:144, codeserver/ubi9-python-3.12/devel_env_setup.sh:69
- Only line 27 (the new code in this PR) uses gcc-g++

This inconsistency within the same file is notable. If gcc-g++ were valid, it would be unusual for line 53 to use a different package name.
Could you verify which package name actually succeeds in your testing environment? If gcc-g++ does work in your tests, I'd be interested to understand the mechanism, as it doesn't appear to be a standard package name in the UBI9 repositories.
That said, I defer to your testing results—if you've confirmed both work, perhaps there's a package alias or DNF behavior I'm not aware of.
@coderabbitai create github issue to fix this later
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Create a GitHub issue to track the gcc package naming inconsistency
gh issue create \
  --repo opendatahub-io/notebooks \
  --title "Inconsistent gcc C++ compiler package names in devel_env_setup.sh" \
  --body "## Description
There is an inconsistency in the package names used for installing the C++ compiler in \`jupyter/trustyai/ubi9-python-3.12/devel_env_setup.sh\`:
- Line 23 uses \`gcc-g++\`
- Line 53 uses \`gcc-c++\`
While both package names appear to work in testing, \`gcc-c++\` is the standard package name in UBI9/RHEL9 repositories and is used consistently throughout the rest of the codebase.
## Files affected
- \`jupyter/trustyai/ubi9-python-3.12/devel_env_setup.sh\`
## Context
This inconsistency was identified during review of PR #2719. The issue is deferred for later resolution to avoid blocking the current PR.
## References
- PR: https://github.com/opendatahub-io/notebooks/pull/2719
- Discussion: https://github.com/opendatahub-io/notebooks/pull/2719#discussion_r2559331988
- Requested by: @jiridanek" \
  --assignee jiridanek
```
Length of output: 55
@jiridanek, I've created issue #2755 to track this inconsistency: #2755
The issue documents the difference between gcc-g++ (line 23) and gcc-c++ (line 53) usage in the file, and includes references back to this PR and discussion.
…g++' failed: No such file or directory
… left on device" due to AIPCC base disabling `uv` cache (cherry picked from commit f270c22)
(cherry picked from commit 74dfec9)
…vice" due to AIPCC base disabling `uv` cache (cherry picked from commit 0f5b57e)
…her python requirements

```
The headers or library files could not be found for jpeg,
a required dependency when compiling Pillow from source.

Please see the install instructions at:
https://pillow.readthedocs.io/en/latest/installation/basic-installation.html

× Failed to build `/tmp/tmp.mHgTyZTiW2/Pillow`
├─▶ The build backend returned an error
╰─▶ Call to `backend.build_wheel` failed (exit status: 1)
hint: This usually indicates a problem with the package or the build environment.
subprocess exited with status 2
```

```
Could not find directory of OpenSSL installation, and this `-sys` crate cannot
proceed without this knowledge. If OpenSSL is installed and this crate had
trouble finding it, you can set the `OPENSSL_DIR` environment variable for the
compilation process. Make sure you also have the development packages of
openssl installed. For example, `libssl-dev` on Ubuntu or `openssl-devel` on
Fedora. If you're in a situation where you think the directory *should* be
found automatically, please open a bug at
https://github.com/sfackler/rust-openssl and include information about your
system as well as this message.
...
warning: build failed, waiting for other jobs to finish...
💥 maturin failed
Caused by: Failed to build a native library through cargo
Caused by: Cargo build finished with "exit status: 101": `env -u CARGO PYO3_BUILD_EXTENSION_MODULE="1" PYO3_ENVIRONMENT_SIGNATURE="cpython-3.12-64bit" PYO3_PYTHON="/root/.cache/uv/builds-v0/.tmpsjsmaI/bin/python" PYTHON_SYS_EXECUTABLE="/root/.cache/uv/builds-v0/.tmpsjsmaI/bin/python" "cargo" "rustc" "--profile" "release" "--features" "pyo3/abi3-py37" "--message-format" "json-render-diagnostics" "--locked" "--manifest-path" "/root/.cache/uv/sdists-v9/index/c2c553577285dd6f/cryptography/43.0.3/0S5Vw6G2gpWxT0EJkPki9/src/src/rust/Cargo.toml" "--lib"`
Error: command ['maturin', 'pep517', 'build-wheel', '-i', '/root/.cache/uv/builds-v0/.tmpsjsmaI/bin/python', '--compatibility', 'off'] returned non-zero exit status 1
hint: This usually indicates a problem with the package or the build environment.
subprocess exited with status 1
```

```
+ uv pip install /wheelsdir/pillow-11.3.0-cp312-cp312-manylinux_2_34_ppc64le.whl /wheelsdir/pyarrow-20.0.0-cp312-cp312-linux_ppc64le.whl /wheelsdir/torch-2.7.1-cp312-cp312-linux_ppc64le.whl accelerate --cache-dir /root/.cache/uv
...
× No solution found when resolving dependencies:
╰─▶ Because numpy==2.3.4 has no wheels with a matching platform tag (e.g., `manylinux_2_34_ppc64le`) and only numpy==2.3.4 is available, we can conclude that numpy==2.3.4 cannot be used.
    And because accelerate==1.10.1 depends on numpy==2.3.4 and only the following versions of accelerate are available:
        accelerate==1.10.1
        accelerate==1.11.0
    we can conclude that accelerate<1.11.0 cannot be used. (1)

    Because numpy==2.3.4 has no wheels with a matching platform tag (e.g., `manylinux_2_34_ppc64le`) and only numpy==2.3.4 is available, we can conclude that numpy==2.3.4 cannot be used.
    And because accelerate==1.11.0 depends on numpy==2.3.4, we can conclude that accelerate==1.11.0 cannot be used.
    And because we know from (1) that accelerate<1.11.0 cannot be used, we can conclude that all versions of accelerate cannot be used.
    And because you require accelerate, we can conclude that your requirements are unsatisfiable.
hint: Wheels are available for `numpy` (v2.3.4) on the following platforms: `linux_aarch64`, `linux_x86_64`
subprocess exited with status 1
```

(cherry picked from commit 6f83dd5)
…on ppc64le it reports missing packages

```
+ uv pip install --offline --cache-dir /root/.cache/uv --extra-index-url https://pypi.org/simple --requirements=./pylock.toml
warning: The `--pylock` option is experimental and may change without warning. Pass `--preview-features pylock` to disable this warning.
Using Python 3.12.11 environment at: /opt/app-root
× Failed to download `wheel==0.45.1`
╰─▶ Network connectivity is disabled, but the requested data wasn't found in the cache for: `https://files.pythonhosted.org/packages/0b/2c/87f3254fd8ffd29e4c02732eee68a83a1d3c346ae39bc6822dcbcb697f2b/wheel-0.45.1-py3-none-any.whl`
subprocess exited with status 1
```

(cherry picked from commit fc72836)
…ion phase 1.5 (cherry picked from commit 73ff59c)
…gration phase 1.5 (cherry picked from commit 1580229)
…ined variable

```
++ pwd
+ ARROW_HOME=/root/arrow/dist
+ export ARROW_HOME
++ pwd
+ LD_LIBRARY_PATH=/root/arrow/dist/lib
+ export LD_LIBRARY_PATH
/bin/bash: line 10: CMAKE_PREFIX_PATH: unbound variable
subprocess exited with status 1
```

(cherry picked from commit b2cd7f7)
… it causes build failure on ppc64le
```
+ uv pip install --offline --cache-dir /root/.cache/uv --requirements=./pylock.toml
warning: The `--pylock` option is experimental and may change without warning. Pass `--preview-features pylock` to disable this warning.
Using Python 3.12.9 environment at: /opt/app-root
× Failed to download `setuptools==80.9.0`
╰─▶ Network connectivity is disabled, but the
requested data wasn't found in the cache for:
`https://files.pythonhosted.org/packages/a3/dc/17031897dae0efacfea57dfd3a82fdd2a2aeb58e0ff71b77b87e44edc772/setuptools-80.9.0-py3-none-any.whl`
subprocess exited with status 1
```
(cherry picked from commit e5ee8cb)
c015f91 to
7e23557
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
codeserver/ubi9-python-3.12/Dockerfile.cpu (1)

299-310: Update stale comment on line 302.

The inline comment states "we can ensure wheels are consumed from the cache only by restricting internet access for uv install with '--offline' flag", but line 303 actually removes the --offline flag to enable online access to PyPI. This comment no longer reflects the current behavior and is misleading for future maintainers. Update the comment to clarify that --offline was removed to enable downloading build dependencies (e.g., cython) from PyPI while still leveraging the cache where possible via UV_LINK_MODE=copy and --cache-dir.

jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu (1)

222-230: Update stale comment on line 225.

The inline comment states "we can ensure wheels are consumed from the cache only by restricting internet access for uv install with '--offline' flag", but line 227 now removes --offline to enable downloading build dependencies from PyPI (as documented in the RHAIENG-2111 comment above). This comment is misleading and should be updated to reflect the new online-with-cache strategy.
♻️ Duplicate comments (1)

jupyter/trustyai/ubi9-python-3.12/devel_env_setup.sh (1)

25-27: Fix incorrect package name: use gcc-c++ instead of gcc-g++.

The package gcc-g++ does not exist in the UBI9 repository. The correct package name for the C++ compiler is gcc-c++. This will cause the dnf install command to fail at build time, which aligns with the PR's reported issue. This inconsistency is evident when comparing with line 53, where the correct gcc-c++ package is used for s390x.

Apply this diff to fix the package name:

-dnf install -y --setopt=keepcache=1 gcc gcc-g++
+dnf install -y --setopt=keepcache=1 gcc gcc-c++
🧹 Nitpick comments (1)

runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu (1)

47-75: Excellent – package array refactoring improves maintainability.

Converted from string concatenation to a bash array (PACKAGES=() and PACKAGES+=(...)) for cleaner architecture-specific package handling. The array expansion syntax ("${PACKAGES[@]}") correctly prevents word splitting. This makes it easier to add or remove packages per architecture in the future without string-manipulation errors.
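The array pattern praised above can be sketched in plain bash; the package names here are illustrative stand-ins, not the Dockerfile's actual lists:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Accumulate packages in a bash array; each += appends whole elements,
# so per-architecture additions stay isolated and easy to audit.
PACKAGES=(gcc make)
if [ "$(uname -m)" = "ppc64le" ]; then
    PACKAGES+=(gcc-gfortran)   # illustrative arch-specific extra
fi
PACKAGES+=(cmake)

# "${PACKAGES[@]}" expands to exactly one word per element, with no
# accidental word splitting — which is what you want when passing the
# list to something like `dnf install -y "${PACKAGES[@]}"`.
printf '%s\n' "${PACKAGES[@]}"
```

The quoting is the point: a space-joined string re-split by the shell breaks as soon as any element needs quoting, while the quoted `[@]` expansion never does.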
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- codeserver/ubi9-python-3.12/Dockerfile.cpu (2 hunks)
- jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu (1 hunks)
- jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu (1 hunks)
- jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu (2 hunks)
- jupyter/trustyai/ubi9-python-3.12/devel_env_setup.sh (3 hunks)
- runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu (3 hunks)
- runtimes/minimal/ubi9-python-3.12/Dockerfile.cpu (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (27)
- GitHub Check: build (rocm-jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (runtime-cuda-tensorflow-ubi9-python-3.12, 3.12, linux/arm64, false) / build
- GitHub Check: build (runtime-datascience-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/arm64, false) / build
- GitHub Check: build (cuda-jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (jupyter-datascience-ubi9-python-3.12, 3.12, linux/ppc64le, false) / build
- GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (rocm-runtime-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (cuda-jupyter-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (runtime-datascience-ubi9-python-3.12, 3.12, linux/s390x, false) / build
- GitHub Check: build (rocm-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (runtime-cuda-pytorch-llmcompressor-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (codeserver-ubi9-python-3.12, 3.12, linux/arm64, false) / build
- GitHub Check: build (runtime-minimal-ubi9-python-3.12, 3.12, linux/s390x, false) / build
- GitHub Check: build (runtime-cuda-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (cuda-jupyter-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (runtime-cuda-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (jupyter-datascience-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/s390x, false) / build
- GitHub Check: build (jupyter-minimal-ubi9-python-3.12, 3.12, linux/ppc64le, false) / build
- GitHub Check: build (jupyter-trustyai-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (rocm-runtime-tensorflow-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (runtime-minimal-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (cuda-jupyter-minimal-ubi9-python-3.12, 3.12, linux/arm64, false) / build
- GitHub Check: build (rocm-jupyter-pytorch-ubi9-python-3.12, 3.12, linux/amd64, false) / build
- GitHub Check: build (cuda-jupyter-pytorch-llmcompressor-ubi9-python-3.12, 3.12, linux/amd64, false) / build
🔇 Additional comments (14)

jupyter/trustyai/ubi9-python-3.12/devel_env_setup.sh (4)

21-23: UV cache controls added correctly.

The UV_NO_CACHE=false and UV_LINK_MODE=copy environment variables are appropriate for managing uv's build-time artifact passing between stages.

36-43: CRB repository enablement logic is sound.

The conditional handling for the codeready-builder/crb repository (subscription-manager with a dnf fallback) is correct and necessary for both RHEL-managed and standalone UBI9 environments.

45-69: Architecture-specific package arrays refactored appropriately.

The expanded and reorganized package lists for ppc64le and s390x are logically sound. The separation handles build requirements, Pillow dependencies, and compilation tools correctly per architecture. The added comments (lines 48–49, 63–64) clarify the rationale.
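The per-architecture branching described above can be sketched as a small function; the package lists below are illustrative placeholders, not the script's actual lists:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Map an architecture (as reported by `uname -m`) to the extra build
# packages it needs; amd64/arm64 get none because prebuilt wheels exist
# for them on PyPI.
extra_build_packages() {
    case "$1" in
        ppc64le)
            echo "gcc gcc-c++ make cmake"   # illustrative list
            ;;
        s390x)
            echo "gcc gcc-c++ make"         # illustrative list
            ;;
        *)
            echo ""
            ;;
    esac
}

pkgs=$(extra_build_packages "$(uname -m)")
if [ -n "$pkgs" ]; then
    echo "would run: dnf install -y $pkgs"
else
    echo "no extra toolchain needed"
fi
```

Centralizing the branch in one `case` keeps each architecture's list in a single place instead of scattering `if [ "$(uname -m)" = ... ]` checks through the script.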
147-147: Minor cleanup: trailing space removed from git clone.

Trailing whitespace removal improves consistency with standard shell formatting practices.
runtimes/minimal/ubi9-python-3.12/Dockerfile.cpu (2)

11-13: LGTM – environment variables for external PyPI index.

The PIP_EXTRA_INDEX_URL and UV_EXTRA_INDEX_URL additions align with the AIPCC migration (RHAIENG-2189) and enable access to external package repositories needed for build dependencies.

42-49: Good – architecture-specific build tools provisioning.

Conditional installation of gcc, g++, make, and related build utilities for s390x/ppc64le addresses the missing-compiler issue mentioned in the PR title ("command 'g++' failed: No such file or directory"). This ensures non-amd64 architectures have the required build dependencies.

jupyter/minimal/ubi9-python-3.12/Dockerfile.cpu (1)
14-16: LGTM – environment variables for external PyPI index.

Consistent with other images in this PR (RHAIENG-2189 migration phase 1.5). Enables pip and uv to access external package repositories for improved build dependency resolution.

codeserver/ubi9-python-3.12/Dockerfile.cpu (2)

75-77: LGTM – environment variables for external PyPI index.

Consistent with other images in this PR (RHAIENG-2189 migration phase 1.5).

304-304: Good – typo fix.

Corrected "availabe" → "available" in the debugpy note.
jupyter/datascience/ubi9-python-3.12/Dockerfile.cpu (1)

377-377: LGTM – s390x parity with ppc64le for build dependencies.

Extending build-time package installation (zlib-devel, libjpeg-turbo-devel, gcc-gfortran) to s390x alongside ppc64le is appropriate, since both require special handling for packages like pillow, maturin, and scikit-learn that may need compilation.
jupyter/trustyai/ubi9-python-3.12/Dockerfile.cpu (2)

45-46: LGTM – RHAIENG-2111 rationale clear.

The comment correctly documents that removing --offline is necessary because the pandas build requires cython from PyPI. This explains the core change enabling online access.

227-227: Good – online installation with strict verification.

Removal of --offline enables downloading build dependencies (cython for pandas). Expanded verification flags (--strict, --verify-hashes, --compile-bytecode) improve build reproducibility despite online access.
runtimes/datascience/ubi9-python-3.12/Dockerfile.cpu (2)

13-15: LGTM – environment variables for external PyPI index.

Consistent with other images in this PR (RHAIENG-2189 migration phase 1.5).

309-310: Good – CMAKE_PREFIX_PATH construction fix.

Refactored from inline string concatenation to an explicit variable assignment, avoiding potential leading-colon issues when CMAKE_PREFIX_PATH is unset. The ternary expansion ${CMAKE_PREFIX_PATH:+:$CMAKE_PREFIX_PATH} now works correctly: if CMAKE_PREFIX_PATH is set, a colon is prepended; if unset, it expands to the empty string.
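The fix rests on standard shell parameter expansion; a minimal sketch (the paths are illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail   # `set -u` is why a bare "$CMAKE_PREFIX_PATH" aborted the build

unset CMAKE_PREFIX_PATH

# ${VAR:+word} is exempt from `set -u`: it expands to `word` only when
# VAR is set and non-empty, and to nothing otherwise. That yields a
# conditional ":" separator with no unbound-variable error.
CMAKE_PREFIX_PATH="/root/arrow/dist${CMAKE_PREFIX_PATH:+:$CMAKE_PREFIX_PATH}"
echo "$CMAKE_PREFIX_PATH"   # /root/arrow/dist  (no stray leading colon)

CMAKE_PREFIX_PATH="/usr/local"
CMAKE_PREFIX_PATH="/root/arrow/dist${CMAKE_PREFIX_PATH:+:$CMAKE_PREFIX_PATH}"
echo "$CMAKE_PREFIX_PATH"   # /root/arrow/dist:/usr/local
```

The right-hand side is fully expanded before the assignment takes effect, so the old value can safely appear in its own replacement.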
@coderabbitai regarding 25-27: Fix incorrect package name: use gcc-c++ instead of gcc-g++. they are the same package, there is an alias
Tip: For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🌐 Web query:

💡 Result: Short answer: On RHEL / UBI the C++ compiler package is named gcc-c++ (which provides the g++ binary). There is no separate distro-standard package called gcc-g++; use gcc-c++ (and gcc) in UBI images (e.g. ubi9 AppStream shows gcc-c++). [1][2]

References
🌐 Web query:

💡 Result: Use the gcc-c++ package. Example:

sudo dnf install gcc-c++

You can verify what provides g++ with:

dnf repoquery --whatprovides g++

Sources: Fedora docs (install name), Red Hat / community references. [1][2][3]

References

🏁 Script executed:

Length of output: 523

🏁 Script executed:

Length of output: 770

🏁 Script executed:

Length of output: 747

@jiridanek Thank you for the clarification! However, I noticed a couple of things:
While
Would you consider using
[APPROVALNOTIFIER] This PR is NOT APPROVED This pull-request has been approved by: ide-developer The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/kfbuild all
@jiridanek: The following tests failed, say
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
https://issues.redhat.com/browse/RHAIENG-2101
https://issues.redhat.com/browse/RHAIENG-2111
Description
How Has This Been Tested?
Self checklist (all need to be checked):
- Run make test (gmake on macOS) before asking for review
- Changes to Dockerfile.konflux files should be done in odh/notebooks and automatically synced to rhds/notebooks. For Konflux-specific changes, modify Dockerfile.konflux files directly in rhds/notebooks as these require special attention in the downstream repository and flow to the upcoming RHOAI release.

Merge criteria:
Summary by CodeRabbit