Description
Testcontainers version
0.39.0
Using the latest Testcontainers version?
Yes
Host OS
Linux
Host arch
x64
Go version
1.24.6
Docker version
Client: Docker Engine - Community
 Version:           28.1.1
 API version:       1.41 (downgraded from 1.49)
 Go version:        go1.23.8
 Git commit:        4eba377
 Built:             Fri Apr 18 09:52:29 2025
 OS/Arch:           linux/amd64
 Context:           default

Server: linux/amd64/debian-unknown
 Podman Engine:
  Version:          5.4.2
  APIVersion:       5.4.2
  Arch:             amd64
  BuildTime:        2025-07-08T21:45:18Z
  Experimental:     false
  GitCommit:
  GoVersion:        go1.24.4
  KernelVersion:    6.12.35-1rodete1-amd64
  MinAPIVersion:    4.0.0
  Os:               linux
 Conmon:
  Version:          conmon version 2.1.12, commit: unknown
  Package:          conmon_2.1.12-4_amd64
 OCI Runtime (runc):
  Version:          runc version 1.2.5
                    commit: v1.2.5-0-g59923ef
                    spec: 1.2.0
                    go: go1.23.7
                    libseccomp: 2.6.0
  Package:          containerd.io_1.7.27-1_amd64
 Engine:
  Version:          5.4.2
  API version:      1.41 (minimum version 1.24)
  Go version:       go1.24.4
  Git commit:
  Built:            Tue Jul 8 21:45:18 2025
  OS/Arch:          linux/amd64
  Experimental:     false
Docker info
Client: Docker Engine - Community
 Version:    28.1.1
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.23.0
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.30.1
    Path:     /[...]/home/polarczyk/.docker/cli-plugins/docker-compose

Server:
 Containers: 81
  Running: 1
  Paused: 0
  Stopped: 80
 Images: 2
 Server Version: 5.4.2
 Storage Driver: overlay
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  Using metacopy: false
  Supports shifting: false
  Supports volatile: true
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge macvlan ipvlan
  Log: k8s-file none passthrough journald
 Swarm: inactive
 Runtimes: crun-vm crun-wasm kata runc runsc youki crun krun ocijail runj
 Default Runtime: runc
 Init Binary:
 containerd version:
 runc version:
 init version:
 Security Options:
  apparmor
  seccomp
   Profile: default
  rootless
 Kernel Version: 6.12.35-1rodete1-amd64
 Operating System: debian
 OSType: linux
 Architecture: amd64
 CPUs: 24
 Total Memory: 94.29GiB
 Name: polarczyk.c.googlers.com
 ID: a107fe65-de51-4ad3-bcad-dedc3ff9df4a
 Docker Root Dir: /[...]/home/polarczyk/.local/share/containers/storage
 Debug Mode: false
 Experimental: true
 Live Restore Enabled: false
 Product License: Apache-2.0
What happened?
When using Podman (even with reusable containers), the startup time for a test is long: around 12 s on average. Tested with Docker, the same startup takes milliseconds.
I've done some investigation, and I assume communication with Podman is slow in itself, but I also found a single log statement that takes over 5 s, accounting for 50% of the total time. Removing this log significantly improves the developer experience when running a single test.
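For reference, below is a minimal sketch of the kind of test used to take these measurements. The image matches the logs; the container name, password, and wait strategy are illustrative assumptions (Reuse requires a fixed Name):

package example

import (
	"context"
	"testing"
	"time"

	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

func TestStartupTime(t *testing.T) {
	ctx := context.Background()
	start := time.Now()

	_, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: testcontainers.ContainerRequest{
			Image:        "docker.io/library/postgres:16-alpine",
			ExposedPorts: []string{"5432/tcp"},
			Env:          map[string]string{"POSTGRES_PASSWORD": "postgres"},
			Name:         "reuse-postgres", // Reuse requires a fixed container name
			WaitingFor:   wait.ForListeningPort("5432/tcp"),
		},
		Started: true,
		Reuse:   true, // takes the ReuseOrCreateContainer path seen in the logs
	})
	if err != nil {
		t.Fatal(err)
	}
	// The container is deliberately not terminated, so the next run reuses it.

	t.Logf("container ready in %s", time.Since(start))
}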
Here are the logs from before the change, time-annotated:
2025-10-13T14:42:06.119468638Z: Running Test
2025-10-13T14:42:06.119544362Z: provider.go:123 - GetProvider - Getting Podman provider
2025-10-13T14:42:06.119550348Z: provider.go:134 - NewDockerProvider - Entering NewDockerProvider()
2025-10-13T14:42:08.688498747Z: docker_client.go:43 - Info - Entering Info()
2025/10/13 14:42:16 github.com/testcontainers/testcontainers-go - Connected to docker:
Server Version: 5.4.2
API Version: 1.41
Operating System: debian
Total Memory: 96555 MB
Testcontainers for Go Version: v0.39.0
Resolved Docker Host: unix:///run/user/1360621/podman/podman.sock
Resolved Docker Socket Path: /run/user/1360621/podman/podman.sock
Test SessionID: 43b345006bb22e17803242bbbad62aa1be3a28abeab83400070bfa4983b50ef1
Test ProcessID: b1ce92a5-42fd-4f27-a6ea-07371b1f906a
2025-10-13T14:42:16.319430256Z: docker_client.go:67 - Info - Exiting Info()
2025-10-13T14:42:16.319475087Z: docker.go:1327 - ReuseOrCreateContainer - Entering ReuseOrCreateContainer
2025/10/13 14:42:16 ✅ Container started: a51ead6f9f3c
2025/10/13 14:42:16 ⏳ Waiting for container id a51ead6f9f3c image: docker.io/library/postgres:16-alpine. Waiting for: &{timeout:<nil> deadline:0xc000b2e540 Strategies:[0xc0000ff200 0xc0010b3980]}
2025/10/13 14:42:16 🔔 Container is ready: a51ead6f9f3c
And here are the logs after the change:
2025-10-13T14:42:55.768227221Z: provider.go:123 - GetProvider - Getting Podman provider
2025-10-13T14:42:55.768232798Z: provider.go:134 - NewDockerProvider - Entering NewDockerProvider()
2025-10-13T14:42:58.296600686Z: docker_client.go:43 - Info - Entering Info()
2025-10-13T14:43:01.210343414Z: docker_client.go:67 - Info - Exiting Info()
2025-10-13T14:43:01.210594013Z: docker.go:1327 - ReuseOrCreateContainer - Entering ReuseOrCreateContainer
2025/10/13 14:43:01 ✅ Container started: a51ead6f9f3c
2025/10/13 14:43:01 ⏳ Waiting for container id a51ead6f9f3c image: docker.io/library/postgres:16-alpine. Waiting for: &{timeout:<nil> deadline:0xc000a7db60 Strategies:[0xc001397440 0xc000e37c50]}
2025/10/13 14:43:01 🔔 Container is ready: a51ead6f9f3c
I assume much more optimization (such as caching) could be done, but removing this log for Podman runs, or gating it on the log level, would be sufficient for us for now, improving the total initialization time from roughly 11 s to 6 s.
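To illustrate the proposal, here is a hedged sketch of what such a gate could look like. Every name in it (provider, verbose, logServerInfo) is an illustrative assumption, not testcontainers-go's actual internals:

package example

import "log"

// provider stands in for testcontainers-go's Docker provider; the struct,
// the verbose field, and logServerInfo are illustrative assumptions only.
type provider struct {
	verbose bool
	logger  *log.Logger
}

type serverInfo struct {
	ServerVersion   string
	APIVersion      string
	OperatingSystem string
	MemTotalMB      int64
}

// logServerInfo emits the multi-line "Connected to docker" banner only when
// verbose logging is requested; in the measurements above this banner cost
// roughly 5 s of Podman startup time.
func (p *provider) logServerInfo(info serverInfo) {
	if !p.verbose {
		return
	}
	p.logger.Printf("Connected to docker:\n  Server Version: %s\n  API Version: %s\n  Operating System: %s\n  Total Memory: %d MB",
		info.ServerVersion, info.APIVersion, info.OperatingSystem, info.MemTotalMB)
}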
Relevant log output
Additional information
No response