Changes from all commits (20 commits):
- def66c7: use node 24 for ctst linting step (SylvainSenechal, Mar 11, 2026)
- 96964e8: use new platformatic kafka lib for ctst (SylvainSenechal, Mar 11, 2026)
- ff910e6: replace kafkajs library with platformatic kafka (SylvainSenechal, Mar 11, 2026)
- 48cd059: remove node gyp following removal of node rd kafka (SylvainSenechal, Mar 11, 2026)
- 1c28503: remove useless tools previously used for node rd kafka in ctst docker… (SylvainSenechal, Mar 11, 2026)
- 3181b29: bump cucumber to 12.7 (SylvainSenechal, Mar 11, 2026)
- 3a3ee3b: bump node image to 24 in ctst dockerfile following cucumber bump to 1… (SylvainSenechal, Mar 11, 2026)
- fb58dee: bump kubernetes client to 1.4.0 (SylvainSenechal, Mar 11, 2026)
- f53c60d: bump cli-testing to 1.3.0 (SylvainSenechal, Mar 11, 2026)
- f4a47d5: seed keycloak from cli testing before all following 1.3.0 bump (SylvainSenechal, Mar 11, 2026)
- c87cbd8: update cucumber formatter (SylvainSenechal, Mar 11, 2026)
- fdf1042: fix infinite loop on verify object location (SylvainSenechal, Mar 11, 2026)
- a886db7: fix 403 auth check (SylvainSenechal, Mar 11, 2026)
- a6027a0: add missing cold storage tag to test (SylvainSenechal, Mar 11, 2026)
- fb03315: pin azure core client following issues with mismatched azure client l… (SylvainSenechal, Mar 11, 2026)
- 15e45c8: increase sorbetctl limit to 10000 from default 100, as it was found t… (SylvainSenechal, Mar 11, 2026)
- 0d79acb: increase notification test reliability by checking for kafka connecto… (SylvainSenechal, Mar 11, 2026)
- 75924d8: remove useless azure archive test (SylvainSenechal, Mar 11, 2026)
- 3a83924: improve uniqueness of object names in tests to avoid collision and im… (SylvainSenechal, Mar 11, 2026)
- 4925585: use limit 300 for sorbetctl (SylvainSenechal, Mar 19, 2026)
16 changes: 9 additions & 7 deletions .github/scripts/end2end/run-e2e-ctst.sh

```diff
@@ -156,13 +156,6 @@ E2E_IMAGE=$E2E_CTST_IMAGE_NAME:$E2E_IMAGE_TAG
 POD_NAME="${ZENKO_NAME}-ctst-tests"
 CTST_VERSION=$(sed 's/.*"cli-testing": ".*#\(.*\)".*/\1/;t;d' ../../../tests/ctst/package.json)
 
-# Configure keycloak
-docker run \
-  --rm \
-  --network=host \
-  "${E2E_IMAGE}" /bin/bash \
-  -c "SUBDOMAIN=${SUBDOMAIN} CONTROL_PLANE_INGRESS_ENDPOINT=${OIDC_ENDPOINT} ACCOUNT=${ZENKO_ACCOUNT_NAME} KEYCLOAK_REALM=${KEYCLOAK_TEST_REALM_NAME} STORAGE_MANAGER=${STORAGE_MANAGER_USER_NAME} STORAGE_ACCOUNT_OWNER=${STORAGE_ACCOUNT_OWNER_USER_NAME} DATA_CONSUMER=${DATA_CONSUMER_USER_NAME} DATA_ACCESSOR=${DATA_ACCESSOR_USER_NAME} /ctst/node_modules/cli-testing/bin/seedKeycloak.sh"; [[ $? -eq 1 ]] && exit 1 || echo 'Keycloak Configured!'
-
 # Grant access to Kube API (insecure, only for testing)
 kubectl create clusterrolebinding serviceaccounts-cluster-admin \
   --clusterrole=cluster-admin \
```

> **SylvainSenechal** (Contributor, Author): Seeding Keycloak is now done in a BeforeAll from cli-testing.

```diff
@@ -178,6 +171,14 @@ kubectl run $POD_NAME \
   --attach=True \
   --image-pull-policy=IfNotPresent \
   --env=TARGET_VERSION=$VERSION \
+  --env=ACCOUNT=${ZENKO_ACCOUNT_NAME} \
+  --env=STORAGE_MANAGER=${STORAGE_MANAGER_USER_NAME} \
+  --env=STORAGE_ACCOUNT_OWNER=${STORAGE_ACCOUNT_OWNER_USER_NAME} \
+  --env=DATA_CONSUMER=${DATA_CONSUMER_USER_NAME} \
+  --env=DATA_ACCESSOR=${DATA_ACCESSOR_USER_NAME} \
+  --env=SEED_KEYCLOAK_DEFAULT_ROLES=true \
+  --env=KEYCLOAK_HOST=${KEYCLOAK_TEST_HOST} \
+  --env=KEYCLOAK_REALM=${KEYCLOAK_TEST_REALM_NAME} \
   --env=AZURE_BLOB_URL=$AZURE_BACKEND_ENDPOINT \
   --env=AZURE_QUEUE_URL=$AZURE_BACKEND_QUEUE_ENDPOINT \
   --env=VERBOSE=1 \
@@ -226,5 +227,6 @@ kubectl run $POD_NAME \
   --parallel $PARALLEL_RUNS \
   --retry 3 \
   --retry-tag-filter @Flaky \
+  --format pretty \
   --format junit:/reports/ctst-junit.xml \
   --format html:/reports/report.html
```
2 changes: 1 addition & 1 deletion .github/workflows/end2end.yaml

```diff
@@ -404,7 +404,7 @@ jobs:
           GIT_ACCESS_TOKEN: ${{ steps.app-token.outputs.token }}
       - uses: actions/setup-node@v6
         with:
-          node-version: '20'
+          node-version: '24'
           cache: yarn
           cache-dependency-path: tests/ctst/yarn.lock
       - name: Install ctst test dependencies
```
9 changes: 1 addition & 8 deletions tests/ctst/Dockerfile

```diff
@@ -6,7 +6,7 @@ ARG DRCTL_TAG=latest
 FROM $SORBET_IMAGE:$SORBET_TAG AS sorbet
 FROM $DRCTL_IMAGE:$DRCTL_TAG AS drctl
 
-FROM node:22.19.0-bookworm-slim
+FROM node:24.14.0-bookworm-slim
 
 ARG AWSCLI_VERSION=2.17.39
 
@@ -20,19 +20,12 @@ WORKDIR /ctst
 RUN apt-get update \
     && apt-get install -y --no-install-recommends \
     apt-utils \
-    python3 \
-    build-essential \
     ssh \
     git \
     curl \
     unzip \
     jq \
     ca-certificates \
-    librdkafka-dev \
-    zlib1g-dev \
-    libssl-dev \
-    libffi-dev \
-    libzstd-dev \
     && rm -rf /var/lib/apt/lists/* \
     && apt-get clean
```
32 changes: 27 additions & 5 deletions tests/ctst/common/common.ts

```diff
@@ -4,7 +4,7 @@ import { CacheHelper, Constants, Identity, IdentityEnum, S3, Utils } from 'cli-t
 import Zenko from 'world/Zenko';
 import { parseGoDuration, safeJsonParse } from './utils';
 import assert from 'assert';
-import { Admin, Kafka } from 'kafkajs';
+import { Admin } from '@platformatic/kafka';
 import {
     createBucketWithConfiguration,
     putMpuObject,
```

> **SylvainSenechal** (Contributor, Author): Switched the Kafka library. Before, we were using kafkajs and node-rdkafka. These internally relied on librdkafka, a C library. Because of that, we needed quite a bit of extra setup to make them work: node-gyp (which threw odd Python errors when running `yarn install` in the Codespace), plus a number of packages installed in the Dockerfile (the librd-XXX ones you'll see later in the PR). The new library is pure JS, which removes those dependencies, and it would also be compatible with Bun if we want to use it later.

> **Contributor**: Notes:
> - Such comments would be better in the PR description, so reviewers know before even reading the code.
> - It would be good to keep such a change in a separate PR (even if it means stacking multiple PRs). Not saying to change it now (too late), but as a goal for future changes.

> **Contributor**: Thanks for the Kafka change!
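To make the new call pattern concrete, here is a small pure sketch of the request shape passed to the new client's `listOffsets` (field names are taken from the diff in this file; the topic name and the helper function are hypothetical). Kafka's ListOffsets API uses sentinel timestamps: `-2` means earliest offset, `-1` means latest.

```typescript
// Hypothetical helper: builds the listOffsets request payload used in the
// updated getTopicsOffsets. Sentinels: -2n = earliest, -1n = latest.
function buildListOffsetsRequest(topic: string, partitionCount: number, timestamp: bigint) {
    return {
        topics: [{
            name: topic,
            partitions: Array.from({ length: partitionCount }, (_, i) => ({
                partitionIndex: i,
                timestamp,
            })),
        }],
    };
}

// One entry per partition, all sharing the same sentinel timestamp.
const req = buildListOffsetsRequest('example-topic', 3, BigInt(-2));
console.log(req.topics[0].partitions.length); // 3
```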
```diff
@@ -99,8 +99,24 @@ async function addUserMetadataToObject(this: Zenko, objectName: string | undefined
 async function getTopicsOffsets(topics: string[], kafkaAdmin: Admin) {
     const offsets = [];
     for (const topic of topics) {
-        const partitions: ({ high: string; low: string; })[] =
-            await kafkaAdmin.fetchTopicOffsets(topic);
+        const metadata = await kafkaAdmin.metadata({ topics: [topic] });
+        const partitionCount = metadata.topics.get(topic)?.partitionsCount ?? 0;
+        const partitionIndexes = Array.from({ length: partitionCount }, (_, i) => ({
+            partitionIndex: i,
+            timestamp: BigInt(-2),
+        }));
+        const earliestResult = await kafkaAdmin.listOffsets({
+            topics: [{ name: topic, partitions: partitionIndexes }],
+        });
+        const latestResult = await kafkaAdmin.listOffsets({
+            topics: [{ name: topic, partitions: partitionIndexes.map(p => ({ ...p, timestamp: BigInt(-1) })) }],
+        });
+        const partitions = [];
+        for (let i = 0; i < partitionCount; i++) {
+            const low = earliestResult[0]?.partitions.find(p => p.partitionIndex === i)?.offset ?? BigInt(0);
+            const high = latestResult[0]?.partitions.find(p => p.partitionIndex === i)?.offset ?? BigInt(0);
+            partitions.push({ low: String(low), high: String(high) });
+        }
         offsets.push({ topic, partitions });
     }
     return offsets;
```

Comment on lines +116 to +117:

> **francoisferrand** (Contributor, Mar 13, 2026): Partitions are lists, right? So searching is not efficient...
> - Are partitions "ordered" predictably?
> - Do earliest and latest have the same order?
> - ...could we not just do a "zip" merge, something like:
> ```js
> for (let i = 0; i < partitionCount; i++) {
>     partitions.push({ low: earliestResult[0]?.partitions[i], high: latestResult[0]?.partitions[i] });
> }
> ```

> **SylvainSenechal** (Contributor, Author): I think we don't have a guarantee about the order, but the list should be small enough that in practice this `find` is not too impactful.
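The find-based merge debated in the thread above can be isolated as a pure function so its behavior on out-of-order responses can be checked without a broker (a sketch with illustrative names, not the PR's code):

```typescript
// Mirrors the merge step of getTopicsOffsets: pair each partition's earliest
// (low) and latest (high) offsets by partitionIndex, tolerating any ordering
// of the broker's response lists.
type PartitionOffset = { partitionIndex: number; offset: bigint };

function mergeOffsets(
    partitionCount: number,
    earliest: PartitionOffset[],
    latest: PartitionOffset[],
): { low: string; high: string }[] {
    const partitions: { low: string; high: string }[] = [];
    for (let i = 0; i < partitionCount; i++) {
        // find() stays correct even if the lists are not index-ordered,
        // at O(n^2) cost -- acceptable for small partition counts.
        const low = earliest.find(p => p.partitionIndex === i)?.offset ?? BigInt(0);
        const high = latest.find(p => p.partitionIndex === i)?.offset ?? BigInt(0);
        partitions.push({ low: String(low), high: String(high) });
    }
    return partitions;
}

// Out-of-order input still maps each partition to its own offsets.
const merged = mergeOffsets(
    2,
    [{ partitionIndex: 1, offset: 5n }, { partitionIndex: 0, offset: 0n }],
    [{ partitionIndex: 0, offset: 10n }, { partitionIndex: 1, offset: 42n }],
);
console.log(merged);
// [{ low: '0', high: '10' }, { low: '5', high: '42' }]
```

A plain zip merge (as suggested in the review) would be O(n) but silently mismatches offsets if the two response lists ever differ in order.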
```diff
@@ -307,10 +323,15 @@ Then('kafka consumed messages should not take too much place on disk', { timeout
         assert.fail('Kafka cleaner did not clean the topics within the expected time');
     }, checkInterval * 10); // Timeout after 10 Kafka cleaner intervals
 
+    const kafkaAdmin = new Admin({
+        clientId: 'ctst-kafka-cleaner-check',
+        bootstrapBrokers: [this.parameters.KafkaHosts],
+    });
+
     try {
         const ignoredTopics = ['dead-letter'];
-        const kafkaAdmin = new Kafka({ brokers: [this.parameters.KafkaHosts] }).admin();
-        const topics: string[] = (await kafkaAdmin.listTopics())
+        const allTopics = await kafkaAdmin.listTopics();
+        const topics: string[] = allTopics
            .filter(t => (t.includes(this.parameters.InstanceID) &&
                !ignoredTopics.some(e => t.includes(e))));
@@ -370,6 +391,7 @@ Then('kafka consumed messages should not take too much place on disk', { timeout
         assert(topics.length === 0, `Topics ${topics.join(', ')} still have not been cleaned`);
     } finally {
         clearTimeout(timeoutID);
+        await kafkaAdmin.close();
     }
 });
```
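The step above follows a poll-until-deadline pattern: recheck the topic list at an interval until it is empty or a timeout fires `assert.fail`. A minimal generic sketch of that pattern (names and signature are assumptions, not CTST code):

```typescript
// Poll an async condition until it holds or the deadline expires.
// Returns true on success, false on timeout; the caller decides how to fail.
async function waitUntil(
    condition: () => Promise<boolean>,
    { intervalMs, timeoutMs }: { intervalMs: number; timeoutMs: number },
): Promise<boolean> {
    const deadline = Date.now() + timeoutMs;
    while (Date.now() < deadline) {
        if (await condition()) return true;
        await new Promise(resolve => setTimeout(resolve, intervalMs));
    }
    return false;
}

// Example: a condition that becomes true on the third poll.
let polls = 0;
waitUntil(async () => ++polls >= 3, { intervalMs: 10, timeoutMs: 1000 })
    .then(ok => console.log(ok, polls)); // true 3
```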
4 changes: 4 additions & 0 deletions tests/ctst/common/hooks.ts

```diff
@@ -17,6 +17,10 @@ import {
     cleanupAccount,
 } from './utils';
 
+import 'cli-testing/hooks/KeycloakSetup';
+import 'cli-testing/hooks/Logger';
+import 'cli-testing/hooks/versionTags';
+
 // HTTPS should not cause any error for CTST
 process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0';
```

> **SylvainSenechal** (Contributor, Author): We have BeforeAll hooks in cli-testing; with no import, they are not run. IMO they should be in this repo, but that's a story for another day.
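The comment above hinges on side-effect imports: Cucumber hooks register themselves when their module is evaluated, so a hooks module that is never imported contributes no BeforeAll at all. A self-contained illustration of that mechanism (all names hypothetical, not cli-testing's code):

```typescript
// Registry standing in for Cucumber's internal hook list.
const registered: string[] = [];

// Stand-in for cucumber's BeforeAll: registration happens at call time.
function BeforeAll(name: string, _fn: () => Promise<void> | void): void {
    registered.push(name);
}

// Stand-in for what `import 'cli-testing/hooks/KeycloakSetup'` triggers:
// the module's top-level code runs once and performs the registration.
function loadKeycloakSetupModule(): void {
    BeforeAll('seedKeycloak', async () => { /* seed realm, roles, users */ });
}

console.log(registered.length); // 0: module not imported, hook not registered
loadKeycloakSetupModule();
console.log(registered.length); // 1: the side-effect import ran the registration
```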
3 changes: 1 addition & 2 deletions tests/ctst/cucumber.config.cjs

```diff
@@ -5,8 +5,7 @@ module.exports = {
     require: ['steps/**/*.ts', 'common/**/*.ts', 'world/**/*.ts'],
     paths: ['features/**/*.feature'],
     format: [
-        'progress-bar',
-        '@cucumber/pretty-formatter',
+        'pretty',
         'json:reports/cucumber-report.json',
         'html:reports/report.html',
     ],
```

> **Contributor**: Why are we removing @cucumber/pretty-formatter? It seems a good option to auto-format .feature files?

> **SylvainSenechal** (Contributor, Author): The pretty formatter is archived (https://github.com/cucumber/cucumber-js-pretty-formatter?tab=readme-ov-file). It's integrated directly into cucumber now (that's why I removed the pretty formatter in cli-testing), and the pretty formatter is now just "pretty": https://github.com/cucumber/cucumber-js/blob/main/docs/formatters.md