
Conversation

@phuongfi91
Contributor

@phuongfi91 phuongfi91 commented Dec 12, 2025

User description

Also removed Pulumi; it should be installed only in projects that need it


PR Type

Enhancement


Description

  • Switch base image from Ubuntu 25.10 to Debian 13 for stability

  • Introduce build arguments for tool version management

  • Use multi-stage builds to copy pre-built binaries from official images

  • Remove Pulumi installation, making it project-specific

  • Upgrade Java from OpenJDK 17 to 21

  • Improve package management with testing repo pinning


Diagram Walkthrough

flowchart LR
  A["Ubuntu 25.10"] -->|"Replace with"| B["Debian 13"]
  C["Direct tool downloads"] -->|"Replace with"| D["Multi-stage builds"]
  E["Fixed tool versions"] -->|"Replace with"| F["Configurable ARG versions"]
  G["Pulumi included"] -->|"Remove"| H["Project-specific install"]
  I["OpenJDK 17"] -->|"Upgrade to"| J["OpenJDK 21"]

File Walkthrough

Relevant files
Enhancement
Dockerfile
Debian base image and multi-stage tool installation           

Dockerfile

  • Replace Ubuntu 25.10 base image with Debian 13 for improved stability
  • Add build arguments for version control of actionlint, hadolint,
    shellcheck, shfmt, uv, reviewdog, and snyk
  • Implement multi-stage builds to copy pre-built binaries from official
    Docker images
  • Add Debian testing repository with pinning configuration for
    unavailable packages
  • Upgrade OpenJDK from version 17 to 21
  • Refactor tool installation logic with improved version handling and
    checksum verification
  • Remove Pulumi installation entirely
  • Separate installation steps into distinct RUN commands for better
    caching
+102/-38
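
The ARG + multi-stage pattern described in the walkthrough can be condensed into a short sketch (image names, binary paths, and version defaults follow the code quoted elsewhere in this PR; the stage list is trimmed for illustration):

```dockerfile
# Version ARGs are declared before the first FROM so they can
# parameterize the stage image tags.
ARG HADOLINT_VERSION=latest
ARG SHELLCHECK_VERSION=latest

# Stages whose only purpose is to provide a pre-built binary.
FROM hadolint/hadolint:${HADOLINT_VERSION} AS hadolint
FROM koalaman/shellcheck:${SHELLCHECK_VERSION} AS shellcheck

FROM debian:13 AS base
# Copy the binaries instead of downloading and verifying them by hand.
COPY --from=hadolint /bin/hadolint /usr/local/bin/hadolint
COPY --from=shellcheck /bin/shellcheck /usr/local/bin/shellcheck
```

Because the ARGs appear before the first FROM, they are in scope for every stage's image tag, which is what makes the versions overridable at build time (e.g. `docker build --build-arg HADOLINT_VERSION=v2.14.0 .`).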
Tests
specs.yaml
Remove Pulumi installation test                                                   

tests/specs.yaml

  • Remove Pulumi version test command
+0/-4     

Copilot AI review requested due to automatic review settings December 12, 2025 13:42
@gemini-code-assist

Summary of Changes

Hello @phuongfi91, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly overhauls the Dockerfile by transitioning its base from Ubuntu to Debian 13, aiming for a more stable and well-supported environment. It modernizes the installation of several development tools through multi-stage builds and introduces version arguments for greater flexibility. Key updates include upgrading OpenJDK and refining the Snyk CLI installation with enhanced security measures, while also removing the default installation of Pulumi to encourage project-specific dependency management.

Highlights

  • Base Image Migration: The Dockerfile's base image has been switched from ubuntu:25.10 to debian:13, aiming for a more stable and well-supported environment.
  • Multi-Stage Tool Installation: The installation of actionlint, hadolint, shellcheck, and shfmt has been refactored to leverage multi-stage builds, copying pre-built binaries from dedicated images for efficiency.
  • Version Parameterization: New ARG variables have been introduced for managing versions of various tools (e.g., actionlint, hadolint, uv, snyk), allowing for easier updates and customization.
  • Pulumi Removal: The global installation of pulumi has been removed from the Dockerfile and its corresponding test, aligning with the principle of installing it only when explicitly needed by a project.
  • Debian Repository Management: A mechanism has been implemented to prioritize stable Debian packages while temporarily enabling the testing repository to install specific, potentially newer, dependencies like watchman.
  • Snyk Installation Enhancement: The Snyk CLI installation has been improved to dynamically fetch the correct binary URL and verify its integrity using SHA256 checksums, enhancing security and robustness.
  • OpenJDK Update: The installed Java Development Kit has been upgraded from OpenJDK 17 to OpenJDK 21.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature | Command | Description
--- | --- | ---
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help | /gemini help | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@qodo-code-review

PR Compliance Guide 🔍

Below is a summary of compliance checks for this PR:

Security Compliance
🔴
Unsafe remote script

Description: Installing uv via a remote install script piped to sh without verifying publisher
signature or checksum permits command injection or supply-chain compromise if the script
is tampered with; fetch a pinned release and verify checksum/signature.
Dockerfile [117-121]

Referred Code
RUN UV_VER="${UV_VERSION#v}" \
    && UV_INSTALL_URL=$([ "${UV_VER}" = "latest" ] \
    && echo "https://astral.sh/uv/install.sh" || \
    echo "https://astral.sh/uv/${UV_VER}/install.sh") \
    && curl -LsSf "${UV_INSTALL_URL}" | env UV_INSTALL_DIR="/usr/local/bin" sh
Unsafe remote script

Description: reviewdog install script is fetched from a hard-coded commit URL and piped to sh without
signature verification, exposing the build to command injection if transport or hosting is
compromised; use a pinned binary/release asset and verify checksum/signature.
Dockerfile [124-126]

Referred Code
RUN curl -sfL "https://raw.githubusercontent.com/reviewdog/reviewdog/fd59714416d6d9a1c0692d872e38e7f8448df4fc/install.sh" \
    | sh -s -- -b /usr/local/bin \
    "$([ "${REVIEWDOG_VERSION}" != "latest" ] && echo "${REVIEWDOG_VERSION}" || echo "")"
Unstable repo usage

Description: Temporarily enabling Debian 'testing' repository to install packages (e.g., watchman)
increases supply-chain/compatibility risk due to less vetted packages and future upgrades
pulling in testing deps if pinning is bypassed; ensure strict pinning per-package and
audit necessity.
Dockerfile [57-69]

Referred Code
# Temporarily enable 'testing' repo for outdated/unavailable packages in 'stable' repo,
# especially those that are difficult to build/install elsewhere
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    echo "deb http://deb.debian.org/debian testing main" > /etc/apt/sources.list.d/testing.list \
    && apt-get update \
    && apt-get install -y --no-install-recommends --no-install-suggests -t testing \
    # Required for pyre vscode extension
    watchman \
    # Disable 'testing' repo afterwards to prevents potential issues
    # where only stable packages are expected (e.g. playwright install-deps)
    && sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/testing.list
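
The "strict pinning per-package" the check recommends could be expressed with an APT preferences file instead of commenting the source out with sed — a sketch; the priority values are illustrative, not taken from the PR:

```
# /etc/apt/preferences.d/testing.pref
# Keep everything on stable by default: priority < 500 means testing
# packages are never chosen unless explicitly allowed below.
Package: *
Pin: release a=testing
Pin-Priority: 100

# Allow only watchman to come from testing.
Package: watchman
Pin: release a=testing
Pin-Priority: 600
```

With this in place the testing source list can stay enabled, and `apt-get upgrade` cannot silently pull unrelated testing packages.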
Metadata trust boundary

Description: Although Snyk binary checksum is verified, the release metadata (URL and SHA256) is
obtained over HTTPS and trusted implicitly; if the metadata endpoint is compromised it
could direct to a malicious binary with matching checksum—consider cross-validating via
signed releases or GPG verification when available.
Dockerfile [129-135]

Referred Code
RUN RELEASE_JSON=$(curl -s "https://downloads.snyk.io/cli/${SNYK_VERSION}/release.json") \
    && BINARY_NAME="snyk-linux$([ "${TARGETARCH}" = "arm64" ] && echo "-arm64" || echo "")" \
    && SNYK_URL=$(echo "${RELEASE_JSON}" | jq -r ".assets.\"${BINARY_NAME}\".url") \
    && SNYK_SHA256=$(echo "${RELEASE_JSON}" | jq -r ".assets.\"${BINARY_NAME}\".sha256" | awk '{print $1}') \
    && curl -o /usr/local/bin/snyk -L "${SNYK_URL}" \
    && echo "${SNYK_SHA256}  /usr/local/bin/snyk" | sha256sum -c - \
    && chmod +x /usr/local/bin/snyk
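
The verification step at the end of that pipeline can be exercised in isolation — a minimal sketch with a local file standing in for the downloaded binary:

```shell
# Create a stand-in "downloaded binary" and compute its checksum,
# then verify it the same way the Dockerfile does for Snyk.
printf 'fake-binary-contents\n' > /tmp/snyk-demo
EXPECTED_SHA256=$(sha256sum /tmp/snyk-demo | awk '{print $1}')

# Two spaces between hash and filename is the format sha256sum -c expects;
# a mismatch makes sha256sum exit non-zero, failing the RUN step.
if echo "${EXPECTED_SHA256}  /tmp/snyk-demo" | sha256sum -c -; then
    echo "checksum ok"
else
    echo "checksum MISMATCH" >&2
    exit 1
fi
rm /tmp/snyk-demo
```

This is why the compliance note is about the *metadata* trust boundary rather than the mechanism: the verification itself is sound, but the expected hash comes from the same endpoint as the URL.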
Unpinned external images

Description: Binaries (hadolint, actionlint, shellcheck, shfmt) are copied from external images without
digest pinning, so tag drift could introduce unintended versions or compromised images;
pin source images by immutable digests and verify checksums of copied binaries where
possible.
Dockerfile [138-148]

Referred Code
COPY --from=hadolint /bin/hadolint /usr/local/bin/hadolint

# Install actionlint
COPY --from=actionlint /usr/local/bin/actionlint /usr/local/bin/actionlint

# Install shellcheck
# Required for shellcheck vscode extension and actionlint
COPY --from=shellcheck /bin/shellcheck /usr/local/bin/shellcheck

# Install shfmt (Shell formatter)
COPY --from=shfmt /bin/shfmt /usr/local/bin/shfmt
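
Digest pinning, as the check suggests, would look like this in the stage declarations (the `<digest>` values are placeholders, not real digests — they must be looked up for the chosen tag):

```dockerfile
# Pin source images by immutable digest instead of a mutable tag.
# Replace <digest> with the real value, e.g. from
# `docker buildx imagetools inspect hadolint/hadolint:v2.14.0`.
FROM hadolint/hadolint@sha256:<digest> AS hadolint
FROM rhysd/actionlint@sha256:<digest> AS actionlint
```

The trade-off is that digest pinning is incompatible with the `latest` defaults of the version ARGs, so it would require pinning every tool to a specific release.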
Ticket Compliance
🎫 No ticket provided
  • Create ticket/issue
Codebase Duplication Compliance
Codebase context is not defined

Follow the guide to enable codebase context checks.

Custom Compliance
🟢
Generic: Meaningful Naming and Self-Documenting Code

Objective: Ensure all identifiers clearly express their purpose and intent, making code
self-documenting

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Secure Error Handling

Objective: To prevent the leakage of sensitive system information through error messages while
providing sufficient detail for internal debugging.

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Secure Logging Practices

Objective: To ensure logs are useful for debugging and auditing without exposing sensitive
information like PII, PHI, or cardholder data.

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Comprehensive Audit Trails

Objective: To create a detailed and reliable record of critical system actions for security analysis
and compliance.

Status:
No auditing: The new Dockerfile changes add tooling and OS changes but introduce no audit logging for
critical actions, which may be acceptable for a base image but cannot be verified from the
diff.

Referred Code
# syntax=docker/dockerfile:1

# Version of actionlint to install: latest, or specific version number WITHOUT 'v' prefix e.g. 1.7.5
ARG ACTIONLINT_VERSION=latest
# Version of hadolint to install: latest, or specific version number e.g. v2.14.0
ARG HADOLINT_VERSION=latest
# Version of shellcheck to install: latest, or specific version number e.g. v0.11.0
ARG SHELLCHECK_VERSION=latest
# Version of shfmt to install: latest, or specific version number e.g. v3.12.0
ARG SHFMT_VERSION=latest
# Version of uv to install: latest, or specific version number e.g. v0.9.17
ARG UV_VERSION=latest
# Version of reviewdog to install: latest, or specific version number e.g. v0.21.0
ARG REVIEWDOG_VERSION=latest
# Version of Snyk to install: stable, latest, or specific version number e.g. v1.1301.1
ARG SNYK_VERSION=stable

# Images which we can directly copy the binaries from
FROM rhysd/actionlint:${ACTIONLINT_VERSION} AS actionlint
FROM hadolint/hadolint:${HADOLINT_VERSION} AS hadolint
FROM koalaman/shellcheck:${SHELLCHECK_VERSION} AS shellcheck


 ... (clipped 128 lines)

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Robust Error Handling and Edge Case Management

Objective: Ensure comprehensive error handling that provides meaningful context and graceful
degradation

Status:
Curl checksums: Some downloads use checksums (Snyk) but others (uv install script, reviewdog script)
execute remote scripts without checksum verification or fallback handling.

Referred Code
# Install uv
RUN UV_VER="${UV_VERSION#v}" \
    && UV_INSTALL_URL=$([ "${UV_VER}" = "latest" ] \
    && echo "https://astral.sh/uv/install.sh" || \
    echo "https://astral.sh/uv/${UV_VER}/install.sh") \
    && curl -LsSf "${UV_INSTALL_URL}" | env UV_INSTALL_DIR="/usr/local/bin" sh

# Install reviewdog
RUN curl -sfL "https://raw.githubusercontent.com/reviewdog/reviewdog/fd59714416d6d9a1c0692d872e38e7f8448df4fc/install.sh" \
    | sh -s -- -b /usr/local/bin \
    "$([ "${REVIEWDOG_VERSION}" != "latest" ] && echo "${REVIEWDOG_VERSION}" || echo "")"

# Install snyk
RUN RELEASE_JSON=$(curl -s "https://downloads.snyk.io/cli/${SNYK_VERSION}/release.json") \
    && BINARY_NAME="snyk-linux$([ "${TARGETARCH}" = "arm64" ] && echo "-arm64" || echo "")" \
    && SNYK_URL=$(echo "${RELEASE_JSON}" | jq -r ".assets.\"${BINARY_NAME}\".url") \
    && SNYK_SHA256=$(echo "${RELEASE_JSON}" | jq -r ".assets.\"${BINARY_NAME}\".sha256" | awk '{print $1}') \
    && curl -o /usr/local/bin/snyk -L "${SNYK_URL}" \
    && echo "${SNYK_SHA256}  /usr/local/bin/snyk" | sha256sum -c - \
    && chmod +x /usr/local/bin/snyk
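
The failure mode implied here — a missing JSON field silently yielding an empty or literal "null" value — can be guarded with a small check. A sketch using the same jq access pattern as the referred code:

```shell
# Simulate a release.json where the expected asset is missing:
# jq -r prints the literal string "null" for an absent key,
# so an explicit guard is needed before downloading anything.
RELEASE_JSON='{"assets":{}}'
SNYK_URL=$(echo "${RELEASE_JSON}" | jq -r '.assets."snyk-linux".url')
if [ -z "${SNYK_URL}" ] || [ "${SNYK_URL}" = "null" ]; then
    echo "refusing to download: asset URL missing" >&2
else
    echo "would download ${SNYK_URL}"
fi
```

In the Dockerfile the `else` branch would hold the curl/sha256sum steps, and the guard branch would `exit 1` so the build fails loudly instead of verifying against an empty hash.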

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Security-First Input Validation and Data Handling

Objective: Ensure all data inputs are validated, sanitized, and handled securely to prevent
vulnerabilities

Status:
Unpinned scripts: Remote install scripts (uv via install.sh URL variant and reviewdog commit-specific script
without checksum) are executed without signature verification, which may introduce
supply-chain risk.

Referred Code
# Install uv
RUN UV_VER="${UV_VERSION#v}" \
    && UV_INSTALL_URL=$([ "${UV_VER}" = "latest" ] \
    && echo "https://astral.sh/uv/install.sh" || \
    echo "https://astral.sh/uv/${UV_VER}/install.sh") \
    && curl -LsSf "${UV_INSTALL_URL}" | env UV_INSTALL_DIR="/usr/local/bin" sh

# Install reviewdog
RUN curl -sfL "https://raw.githubusercontent.com/reviewdog/reviewdog/fd59714416d6d9a1c0692d872e38e7f8448df4fc/install.sh" \
    | sh -s -- -b /usr/local/bin \
    "$([ "${REVIEWDOG_VERSION}" != "latest" ] && echo "${REVIEWDOG_VERSION}" || echo "")"

Learn more about managing compliance generic rules or creating your own custom rules

Compliance status legend:
🟢 - Fully Compliant
🟡 - Partially Compliant
🔴 - Not Compliant
⚪ - Requires Further Human Verification
🏷️ - Compliance label

@qodo-code-review

PR Code Suggestions ✨

Explore these optional code suggestions:

Category | Suggestion | Impact
High-level
Unify tool installation using multi-stage builds

Standardize the installation of all third-party tools like Snyk and Reviewdog by
using multi-stage builds with their official Docker images. This will create a
consistent, simpler, and more cacheable installation process.

Examples:

Dockerfile [124-135]
RUN curl -sfL "https://raw.githubusercontent.com/reviewdog/reviewdog/fd59714416d6d9a1c0692d872e38e7f8448df4fc/install.sh" \
    | sh -s -- -b /usr/local/bin \
    "$([ "${REVIEWDOG_VERSION}" != "latest" ] && echo "${REVIEWDOG_VERSION}" || echo "")"

# Install snyk
RUN RELEASE_JSON=$(curl -s "https://downloads.snyk.io/cli/${SNYK_VERSION}/release.json") \
    && BINARY_NAME="snyk-linux$([ "${TARGETARCH}" = "arm64" ] && echo "-arm64" || echo "")" \
    && SNYK_URL=$(echo "${RELEASE_JSON}" | jq -r ".assets.\"${BINARY_NAME}\".url") \
    && SNYK_SHA256=$(echo "${RELEASE_JSON}" | jq -r ".assets.\"${BINARY_NAME}\".sha256" | awk '{print $1}') \
    && curl -o /usr/local/bin/snyk -L "${SNYK_URL}" \

 ... (clipped 2 lines)

Solution Walkthrough:

Before:

# ... (multi-stage builds for actionlint, hadolint, etc.)

# Install reviewdog via script
RUN curl -sfL "https://.../install.sh" \
     | sh -s -- -b /usr/local/bin \
     "$([ "${REVIEWDOG_VERSION}" != "latest" ] && echo "${REVIEWDOG_VERSION}" || echo "")"

# Install snyk via custom script with json parsing
RUN RELEASE_JSON=$(curl -s "https://.../release.json") \
    && BINARY_NAME="snyk-linux$([ "${TARGETARCH}" = "arm64" ] && echo "-arm64" || echo "")" \
    && SNYK_URL=$(echo "${RELEASE_JSON}" | jq -r ".assets.\"${BINARY_NAME}\".url") \
    && SNYK_SHA256=$(echo "${RELEASE_JSON}" | jq -r ".assets.\"${BINARY_NAME}\".sha256" | awk '{print $1}') \
    && curl -o /usr/local/bin/snyk -L "${SNYK_URL}" \
    && echo "${SNYK_SHA256}  /usr/local/bin/snyk" | sha256sum -c - \
    && chmod +x /usr/local/bin/snyk

# Copy other tools from multi-stage builds
COPY --from=hadolint /bin/hadolint /usr/local/bin/hadolint
# ...

After:

# ... ARGs for versions ...
ARG SNYK_VERSION=stable
ARG REVIEWDOG_VERSION=latest

# Images which we can directly copy the binaries from
FROM rhysd/actionlint:${ACTIONLINT_VERSION} AS actionlint
FROM hadolint/hadolint:${HADOLINT_VERSION} AS hadolint
FROM koalaman/shellcheck:${SHELLCHECK_VERSION} AS shellcheck
FROM mvdan/shfmt:${SHFMT_VERSION} AS shfmt
FROM reviewdog/reviewdog:${REVIEWDOG_VERSION} AS reviewdog
FROM snyk/snyk:${SNYK_VERSION} AS snyk

# ... base image setup ...

# Install all tools via COPY from multi-stage builds
COPY --from=hadolint /bin/hadolint /usr/local/bin/hadolint
COPY --from=actionlint /usr/local/bin/actionlint /usr/local/bin/actionlint
COPY --from=shellcheck /bin/shellcheck /usr/local/bin/shellcheck
COPY --from=shfmt /bin/shfmt /usr/local/bin/shfmt
COPY --from=reviewdog /usr/local/bin/reviewdog /usr/local/bin/reviewdog
COPY --from=snyk /usr/local/bin/snyk /usr/local/bin/snyk
Suggestion importance[1-10]: 8


Why: The suggestion correctly identifies an inconsistent use of installation patterns and proposes a unified, simpler, and more robust approach using multi-stage builds, which significantly improves maintainability and caching.

Medium
Security
Verify checksum during tool installation

Enhance the security of the uv installation by downloading the binary directly
and verifying its checksum, instead of executing a remote script via curl | sh.

Dockerfile [116-121]

 # Install uv
-RUN UV_VER="${UV_VERSION#v}" \
-    && UV_INSTALL_URL=$([ "${UV_VER}" = "latest" ] \
-    && echo "https://astral.sh/uv/install.sh" || \
-    echo "https://astral.sh/uv/${UV_VER}/install.sh") \
-    && curl -LsSf "${UV_INSTALL_URL}" | env UV_INSTALL_DIR="/usr/local/bin" sh
+RUN if [ "${UV_VERSION}" = "latest" ]; then \
+        UV_VER_TAG=$(curl -s "https://api.github.com/repos/astral-sh/uv/releases/latest" | jq -r '.tag_name'); \
+    else \
+        UV_VER_TAG="v${UV_VERSION#v}"; \
+    fi \
+    && ARCH_SUFFIX=$([ "${TARGETARCH}" = "arm64" ] && echo "aarch64" || echo "x86_64") \
+    && UV_ARTIFACT="uv-${ARCH_SUFFIX}-unknown-linux-gnu.tar.gz" \
+    && UV_URL="https://github.com/astral-sh/uv/releases/download/${UV_VER_TAG}/${UV_ARTIFACT}" \
+    && curl -o "/tmp/${UV_ARTIFACT}" -L "${UV_URL}" \
+    && curl -o /tmp/SHASUMS256.txt -L "https://github.com/astral-sh/uv/releases/download/${UV_VER_TAG}/SHASUMS256.txt" \
+    && (cd /tmp && grep "${UV_ARTIFACT}" SHASUMS256.txt | sha256sum -c -) \
+    && tar -xzf "/tmp/${UV_ARTIFACT}" -C /usr/local/bin uv \
+    && rm "/tmp/${UV_ARTIFACT}" /tmp/SHASUMS256.txt
Suggestion importance[1-10]: 8


Why: This suggestion significantly improves security by replacing an insecure curl | sh command with a method that downloads a binary and verifies its checksum, preventing potential supply-chain attacks.

Medium
General
Use multi-stage build for installation

Refactor the reviewdog installation to use a multi-stage build, similar to
hadolint and actionlint, for improved consistency and reliability.

Dockerfile [123-126]

 # Install reviewdog
-RUN curl -sfL "https://raw.githubusercontent.com/reviewdog/reviewdog/fd59714416d6d9a1c0692d872e38e7f8448df4fc/install.sh" \
-     | sh -s -- -b /usr/local/bin \
-    "$([ "${REVIEWDOG_VERSION}" != "latest" ] && echo "${REVIEWDOG_VERSION}" || echo "")"
+COPY --from=reviewdog /usr/local/bin/reviewdog /usr/local/bin/reviewdog
Suggestion importance[1-10]: 7


Why: The suggestion correctly identifies an inconsistency in the installation methods and proposes using a multi-stage build for reviewdog, which aligns with the pattern used for other tools in the PR and is a best practice.

Medium


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request significantly improves the Docker image by switching to a stable Debian base, introducing multi-stage builds, and making tool versions configurable via build arguments. The refactoring of tool installations, especially for snyk with checksum verification and reviewdog with a pinned installer script, greatly enhances security and maintainability.

I've added a few suggestions to further improve the Dockerfile:

  • Enhance the security of the uv installation to be consistent with other tools.
  • Improve the robustness of symlink creation.
  • Clean up a redundant command in the snyk installation logic.

Overall, this is an excellent enhancement to the project's development environment.

Comment on lines +117 to +121
RUN UV_VER="${UV_VERSION#v}" \
&& UV_INSTALL_URL=$([ "${UV_VER}" = "latest" ] \
&& echo "https://astral.sh/uv/install.sh" || \
echo "https://astral.sh/uv/${UV_VER}/install.sh") \
&& curl -LsSf "${UV_INSTALL_URL}" | env UV_INSTALL_DIR="/usr/local/bin" sh


high

The current method of installing uv by piping curl to sh from a dynamic URL (https://astral.sh/uv/install.sh) poses a security risk from supply chain attacks. This is inconsistent with the more secure methods used for reviewdog (pinning to a commit hash) and snyk (checksum verification). To mitigate this, I recommend adopting a more secure installation pattern. A better approach would be to download the uv binary directly for the target architecture and verify its checksum against a trusted source, similar to how snyk is installed.

Comment on lines +110 to +114
RUN ln -s /usr/bin/eza /usr/local/bin/ls \
&& ln -s /usr/bin/batcat /usr/local/bin/bat \
&& ln -s /usr/bin/fdfind /usr/local/bin/fd \
# Install uv:
&& curl -LsSf https://astral.sh/uv/install.sh | env UV_INSTALL_DIR="/usr/local/bin" sh \
# Install Pulumi:
&& curl -fsSL https://get.pulumi.com | sh \
&& mv /root/.pulumi/bin/pulumi /usr/local/bin \
# Install reviewdog:
&& curl -sfL https://raw.githubusercontent.com/reviewdog/reviewdog/master/install.sh \
# Make sure java runtime is found for sonarqube
&& ln -s "$(dirname "$(dirname "$(readlink -f "$(which java)")")")" "${JAVA_HOME}"


medium

It's a good practice to use the -f (--force) flag with ln -s in a Dockerfile to make the command idempotent. If the symlinks already exist for any reason (e.g., from the base image or a previous layer), the build won't fail. This ensures the desired links are in place by overwriting any existing files or links at the destination.

RUN ln -sf /usr/bin/eza /usr/local/bin/ls \
    && ln -sf /usr/bin/batcat /usr/local/bin/bat \
    && ln -sf /usr/bin/fdfind /usr/local/bin/fd \
    # Make sure java runtime is found for sonarqube
    && ln -sf "$(dirname "$(dirname "$(readlink -f "$(which java)")")")" "${JAVA_HOME}"

RUN RELEASE_JSON=$(curl -s "https://downloads.snyk.io/cli/${SNYK_VERSION}/release.json") \
&& BINARY_NAME="snyk-linux$([ "${TARGETARCH}" = "arm64" ] && echo "-arm64" || echo "")" \
&& SNYK_URL=$(echo "${RELEASE_JSON}" | jq -r ".assets.\"${BINARY_NAME}\".url") \
&& SNYK_SHA256=$(echo "${RELEASE_JSON}" | jq -r ".assets.\"${BINARY_NAME}\".sha256" | awk '{print $1}') \


medium

The awk '{print $1}' here appears to be redundant. The jq -r ".assets.\"${BINARY_NAME}\".sha256" command should already extract just the SHA256 hash string from the release.json. Removing the unnecessary awk command will make the line cleaner and more direct.

    && SNYK_SHA256=$(echo "${RELEASE_JSON}" | jq -r ".assets.\"${BINARY_NAME}\".sha256") \

@phuongfi91 phuongfi91 merged commit 945865a into latest Dec 12, 2025
11 checks passed
@phuongfi91 phuongfi91 deleted the feat/phuongfi91/switch-to-debian branch December 12, 2025 13:47

Copilot AI left a comment


Pull request overview

This PR modernizes the development container by switching from Ubuntu 25.10 to Debian 13 as the base image and restructuring tool installations. The changes improve build reproducibility through parameterized version arguments and multi-stage builds, while removing Pulumi (to be installed per-project as needed).

Key changes:

  • Migrated base image from Ubuntu 25.10 to Debian 13 with APT preferences configured for stable/testing packages
  • Refactored tool installations to use multi-stage builds copying binaries from official upstream images (actionlint, hadolint, shellcheck, shfmt)
  • Introduced build arguments for version control of all tools (actionlint, hadolint, shellcheck, shfmt, uv, reviewdog, snyk)
  • Updated OpenJDK from version 17 to 21

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 13 comments.

File | Description
--- | ---
Dockerfile | Complete restructure: adds multi-stage builds, version parameterization via ARGs, switches to Debian base, updates OpenJDK to v21, removes Pulumi installation, adds APT testing repository configuration for watchman
tests/specs.yaml | Removes Pulumi version test to align with tool no longer being pre-installed

💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.

Comment on lines +59 to +68
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
echo "deb http://deb.debian.org/debian testing main" > /etc/apt/sources.list.d/testing.list \
&& apt-get update \
&& apt-get install -y --no-install-recommends --no-install-suggests -t testing \
# Required for pyre vscode extension
watchman \
# Disable 'testing' repo afterwards to prevents potential issues
# where only stable packages are expected (e.g. playwright install-deps)
&& sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/testing.list

Copilot AI Dec 12, 2025


The testing repository is temporarily enabled and then disabled using sed. However, the comment on line 58 says packages from testing are "especially those that are difficult to build/install elsewhere" but only watchman is explicitly installed from testing (line 65). Consider documenting which specific packages require the testing repository, as this complex setup may confuse future maintainers who might not understand why this pattern is necessary for a single package.

RUN RELEASE_JSON=$(curl -s "https://downloads.snyk.io/cli/${SNYK_VERSION}/release.json") \
&& BINARY_NAME="snyk-linux$([ "${TARGETARCH}" = "arm64" ] && echo "-arm64" || echo "")" \
&& SNYK_URL=$(echo "${RELEASE_JSON}" | jq -r ".assets.\"${BINARY_NAME}\".url") \
&& SNYK_SHA256=$(echo "${RELEASE_JSON}" | jq -r ".assets.\"${BINARY_NAME}\".sha256" | awk '{print $1}') \

Copilot AI Dec 12, 2025


The jq command extracts the sha256 value and pipes it through awk to get the first field, but this assumes a specific format in the JSON response. If the sha256 field is already just the hash string (without additional fields), the awk command is redundant. More importantly, if the JSON format is unexpected or the sha256 field is missing, the checksum verification on line 134 could silently pass with an empty string. Consider adding error checking to ensure SNYK_SHA256 is not empty before proceeding with the download and verification.

Suggested change
&& SNYK_SHA256=$(echo "${RELEASE_JSON}" | jq -r ".assets.\"${BINARY_NAME}\".sha256" | awk '{print $1}') \
&& SNYK_SHA256=$(echo "${RELEASE_JSON}" | jq -r ".assets.\"${BINARY_NAME}\".sha256") \
&& if [ -z "${SNYK_SHA256}" ] || [ "${SNYK_SHA256}" = "null" ]; then echo "Error: SNYK_SHA256 is empty or null! Aborting." >&2; exit 1; fi \

RUN RELEASE_JSON=$(curl -s "https://downloads.snyk.io/cli/${SNYK_VERSION}/release.json") \
&& BINARY_NAME="snyk-linux$([ "${TARGETARCH}" = "arm64" ] && echo "-arm64" || echo "")" \
&& SNYK_URL=$(echo "${RELEASE_JSON}" | jq -r ".assets.\"${BINARY_NAME}\".url") \
&& SNYK_SHA256=$(echo "${RELEASE_JSON}" | jq -r ".assets.\"${BINARY_NAME}\".sha256" | awk '{print $1}') \
Copy link

Copilot AI Dec 12, 2025


The Snyk installation retrieves the download URL from a JSON response but doesn't validate that the URL was successfully extracted. If jq fails to find the expected path or returns null, SNYK_URL could be empty or "null", causing the curl command to fail silently or download from an invalid location. Add validation to ensure SNYK_URL is not empty before attempting the download.

Suggested change
&& SNYK_SHA256=$(echo "${RELEASE_JSON}" | jq -r ".assets.\"${BINARY_NAME}\".sha256" | awk '{print $1}') \
&& SNYK_SHA256=$(echo "${RELEASE_JSON}" | jq -r ".assets.\"${BINARY_NAME}\".sha256" | awk '{print $1}') \
&& if [ -z "${SNYK_URL}" ] || [ "${SNYK_URL}" = "null" ]; then echo "Error: Failed to extract Snyk download URL for ${BINARY_NAME}"; exit 1; fi \

ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ENV JAVA_HOME=/usr/lib/jvm/java-openjdk
ENV LANG=en_US.UTF-8
Copy link

Copilot AI Dec 12, 2025


The LANG environment variable is set to en_US.UTF-8, but the required locales package or locale generation is not visible in the Dockerfile. Debian's minimal images may not have this locale pre-configured. Consider either installing and generating the locale explicitly, or using C.UTF-8 which is typically available by default in Debian containers.

Suggested change
ENV LANG=en_US.UTF-8
ENV LANG=C.UTF-8
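
Alternatively, if en_US.UTF-8 is genuinely needed, the locale can be generated explicitly — a sketch using the standard Debian locales package (this step is not in the PR):

```dockerfile
RUN apt-get update \
    && apt-get install -y --no-install-recommends locales \
    # Uncomment the desired locale in /etc/locale.gen, then generate it.
    && sed -i 's/^# *en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen \
    && locale-gen
ENV LANG=en_US.UTF-8
```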

echo "deb http://deb.debian.org/debian testing main" > /etc/apt/sources.list.d/testing.list \
&& apt-get update \
&& apt-get install -y --no-install-recommends --no-install-suggests -t testing \
# Required for pyre vscode extension
Copy link

Copilot AI Dec 12, 2025


The comment states watchman is "Required for pyre vscode extension" but pyre is not mentioned elsewhere in the Dockerfile. Consider expanding this comment to clarify if pyre is expected to be installed separately by users, or if there are other dependencies on watchman that should be documented.

Suggested change
# Required for pyre vscode extension
# watchman is required by the Pyre VSCode extension for file watching functionality.
# Note: Pyre itself is not installed by this Dockerfile; users who need Pyre should install it separately.
# Other tools or extensions that rely on watchman may also benefit from its presence.



# Using debian as base since it's generally stable, compatible and well supported
FROM debian:13 AS base
Copilot AI Dec 12, 2025

The base image uses debian:13, but Debian 13 (Trixie) is currently in testing and not yet released as stable. For production use, consider using the current stable release debian:12 (Bookworm) instead, or explicitly use debian:testing if the testing version is intentional. Using an unreleased version number may cause issues when Debian 13 is officially released with potentially different package versions.

Suggested change
FROM debian:13 AS base
FROM debian:12 AS base

&& curl -LsSf "${UV_INSTALL_URL}" | env UV_INSTALL_DIR="/usr/local/bin" sh

# Install reviewdog
RUN curl -sfL "https://raw.githubusercontent.com/reviewdog/reviewdog/fd59714416d6d9a1c0692d872e38e7f8448df4fc/install.sh" \
Copilot AI Dec 12, 2025

The reviewdog installation uses a hardcoded commit hash (fd59714416d6d9a1c0692d872e38e7f8448df4fc) in the URL, which bypasses the REVIEWDOG_VERSION argument declared on line 14. This means users cannot control the reviewdog version through build arguments. Consider using a version-based approach similar to other tools, or if a specific commit is required for stability, add a comment explaining why the version argument is not being used.

Suggested change
RUN curl -sfL "https://raw.githubusercontent.com/reviewdog/reviewdog/fd59714416d6d9a1c0692d872e38e7f8448df4fc/install.sh" \
RUN REVIEWDOG_INSTALL_REF=$([ "${REVIEWDOG_VERSION}" = "latest" ] && echo "master" || echo "${REVIEWDOG_VERSION}") \
&& curl -sfL "https://raw.githubusercontent.com/reviewdog/reviewdog/${REVIEWDOG_INSTALL_REF}/install.sh" \
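The version-to-ref mapping from the suggestion can be pulled out as a small testable function; `v0.20.3` below is an illustrative version string, not one pinned by this Dockerfile:

```shell
# Maps a REVIEWDOG_VERSION build argument to the git ref used in the
# raw.githubusercontent.com install.sh URL: "latest" falls back to master,
# anything else is treated as a tag.
reviewdog_ref() {
  if [ "$1" = "latest" ]; then echo "master"; else echo "$1"; fi
}

reviewdog_ref "latest"    # prints master
reviewdog_ref "v0.20.3"   # prints v0.20.3
```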

&& apt-get install -y --no-install-recommends --no-install-suggests \
# Required for pyre vscode extension
watchman \
# Required for sonarqube vscode extension
Copilot AI Dec 12, 2025

The OpenJDK version was upgraded from 17 to 21. While this is a positive change to use a newer LTS version, consider adding a comment explaining this upgrade, especially if it's driven by specific requirements or compatibility needs, to help future maintainers understand the decision.

Suggested change
# Required for sonarqube vscode extension
# Use OpenJDK 21 (latest LTS) instead of 17 to ensure compatibility with modern Java tools and libraries.

&& UV_INSTALL_URL=$([ "${UV_VER}" = "latest" ] \
&& echo "https://astral.sh/uv/install.sh" || \
echo "https://astral.sh/uv/${UV_VER}/install.sh") \
&& curl -LsSf "${UV_INSTALL_URL}" | env UV_INSTALL_DIR="/usr/local/bin" sh
Copilot AI Dec 12, 2025

Installing uv by piping curl output from astral.sh directly into sh executes remote code as root during the image build without any integrity verification. If the download endpoint, its DNS, or TLS is compromised, an attacker could run arbitrary commands in the build environment and backdoor the resulting uv binary used in CI. Instead, download the installer or binary to disk and verify it against a pinned checksum or signature (or use a package manager/pinned image) before executing it.
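One hedged alternative to piping the installer is the download-then-verify pattern; in this sketch the installer is a local stand-in for the curl download, and `EXPECTED_SHA256` stands for a checksum that would be pinned next to `UV_VER` in the Dockerfile rather than computed at build time:

```shell
set -eu

# Stand-in for the downloaded install.sh; in the Dockerfile this would be
# `curl -LsSf -o "${installer}" "${UV_INSTALL_URL}"`.
installer=$(mktemp)
printf 'echo installer ran\n' > "${installer}"

# In real use this hash is a pinned constant, not derived from the file.
EXPECTED_SHA256=$(sha256sum "${installer}" | awk '{print $1}')

# Verify before executing anything from the network.
actual=$(sha256sum "${installer}" | awk '{print $1}')
if [ "${actual}" != "${EXPECTED_SHA256}" ]; then
  echo "checksum mismatch for installer" >&2
  exit 1
fi
sh "${installer}"
```

The same shape applies to the reviewdog install.sh below: fetch to disk, compare against a pinned digest, then execute.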

Comment on lines +124 to 125
RUN curl -sfL "https://raw.githubusercontent.com/reviewdog/reviewdog/fd59714416d6d9a1c0692d872e38e7f8448df4fc/install.sh" \
| sh -s -- -b /usr/local/bin \
Copilot AI Dec 12, 2025

reviewdog is installed by piping a remote install.sh script from GitHub directly into sh, which again runs unverified third-party code as root during the image build. A compromise of the GitHub repository, raw content delivery, or its dependencies could inject malicious commands or ship a trojanized reviewdog binary that later runs in CI with access to repository data and tokens. Prefer downloading a specific, pinned release asset and verifying its checksum (or vendoring the installer script in this repo) instead of executing it directly from the network.
