Performance comparison between Spring Boot and Quarkus
This project contains the following modules:
- `springboot3` - A Spring Boot 3.x version of the application
- `quarkus3` - A Quarkus 3.x version of the application
- `quarkus3-spring-compatibility` - A Quarkus 3.x version of the application using the Spring compatibility layer. You can also recreate this application from the Spring application using a few manual steps (see the sketch below).
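For reference, those manual steps boil down to adding Quarkus' Spring compatibility extensions to a Quarkus project while keeping the Spring annotations in place. A rough sketch using the Quarkus Maven plugin follows; the exact extension list depends on which Spring APIs the application uses, so treat it as an assumption rather than the definitive recipe:

```bash
# Add Quarkus' Spring compatibility extensions to an existing Quarkus project.
# The extensions named below exist, but which ones you actually need depends
# on the Spring features the application relies on.
./mvnw quarkus:add-extension \
  -Dextensions="quarkus-spring-web,quarkus-spring-data-jpa,quarkus-spring-di"
```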
Each module can be built using

```bash
./mvnw clean verify
```

You can also run `./mvnw clean verify` at the project root to build all modules.
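Once built, each application can also be started directly with `java -jar` if you want to poke at it outside the scripts. The jar paths below match the ones used by the stress commands later in this README, and each application expects the PostgreSQL database described below to be running:

```bash
# Start one application at a time; each expects PostgreSQL on localhost:5432.
java -jar springboot3/target/springboot3.jar
# or
java -jar quarkus3/target/quarkus-app/quarkus-run.jar
# or
java -jar quarkus3-spring-compatibility/target/quarkus-app/quarkus-run.jar
```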
Prerequisites:
- (macOS) You need to have a `timeout`-compatible command:
  - Via `coreutils` (installed via Homebrew): `brew install coreutils`. Note that this installs lots of GNU utils that duplicate native commands and are prefixed with `g` (e.g. `gdate`).
  - Via this alternative implementation from Homebrew: `brew install aisk/homebrew-tap/timeout`
  - More options at https://stackoverflow.com/questions/3504945/timeout-command-on-mac-os-x
- Base JVM Version: 21
The application expects a PostgreSQL database to be running on localhost:5432. You can use Docker or Podman to start a PostgreSQL container:
```bash
cd scripts
./infra.sh -s
```

This will start the database, create the required tables, and populate them with some data.
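If you prefer to manage the container yourself, something along these lines works; the image tag, credentials, and database name here are placeholders rather than what the scripts and applications actually expect, so check `scripts/infra.sh` for the real values:

```bash
# Hypothetical manual equivalent of `./infra.sh -s` (all values are placeholders).
docker run --name benchmark-postgres -d \
  -p 5432:5432 \
  -e POSTGRES_USER=app \
  -e POSTGRES_PASSWORD=app \
  -e POSTGRES_DB=app \
  postgres:16
```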
To stop the database:
```bash
cd scripts
./infra.sh -d
```

There are some scripts available to help you run the application:
- `1strequest.sh` - Runs an application X times and computes the time to first request and RSS for each iteration, as well as an average over the X iterations.
- `run-requests.sh` - Runs a set of requests against a running application.
- `infra.sh` - Starts/stops the required infrastructure.
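To make the numbers concrete, here is a rough illustration of what `1strequest.sh` measures (time to first successful request plus RSS). This is not the script itself; the port, path, and use of GNU `date` for nanosecond timestamps are assumptions (on macOS that means the `coreutils` `gdate` mentioned above):

```bash
# Illustration only: measure time from JVM start to first successful HTTP
# response, then sample the resident set size (RSS). Port and path are assumed.
start=$(date +%s%N)
java -jar quarkus3/target/quarkus-app/quarkus-run.jar &
app_pid=$!
until curl -sf http://localhost:8080/ > /dev/null; do
  sleep 0.01
done
end=$(date +%s%N)
echo "Time to first request: $(( (end - start) / 1000000 )) ms"
echo "RSS: $(ps -o rss= -p "$app_pid") kB"
kill "$app_pid"
```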
Of course you want to start generating some numbers and doing some comparisons; that's why you're here! There are lots of wrong ways to run benchmarks, and running them reliably requires a controlled environment, strong automation, and multiple machines. Realistically, that kind of setup isn't always possible.
Here's a range of options, from easiest to best practice. Remember that the easy setup will not be particularly accurate, but it does sidestep some of the worst pitfalls of casual benchmarking.
Before we go any further, know that this kind of test is not going to be reliable. Laptops usually have a number of other processes running on them, and modern laptop CPUs are subject to power management which can wildly skew results. Often, some cores are 'fast' and some are 'slow', and without extra care, you don't know which core your test is running on. Thermal management also means 'fast' jobs get throttled, while 'slow' jobs might run at their normal speed.
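On Linux, one way to take core placement out of the equation is to pin the process to a fixed set of cores. A minimal sketch follows; the core numbers are arbitrary, and this does nothing about thermal throttling:

```bash
# Pin the JVM to cores 0-3 so the scheduler cannot migrate it across
# asymmetric (fast/slow) cores mid-benchmark. Linux only; core list is arbitrary.
taskset -c 0-3 java -jar quarkus3/target/quarkus-app/quarkus-run.jar
```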
Load shouldn't be generated on the same machine as the one running the workload, because the work of load generation can interfere with what's being measured.
But if you accept all that, and know these results should be treated with caution, here's our recommendation for the least-worst way of running a quick and dirty test. We use Hyperfoil instead of wrk, to avoid coordinated omission issues. For simplicity, we use the wrk2 Hyperfoil bindings.
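The fixed arrival rate is the key difference from plain wrk: requests are issued on schedule regardless of how slowly earlier ones complete, so slow responses are counted rather than silently skipped. For orientation, a classic wrk2-style invocation looks roughly like the following; `stress.sh` drives the Hyperfoil binding for you, and the thread, connection, and rate numbers and the endpoint here are made up for illustration:

```bash
# wrk2-style run: 2 threads, 100 connections, 30 seconds, at a constant
# 2000 requests/second arrival rate. Numbers and URL are illustrative only.
wrk2 -t2 -c100 -d30s -R2000 http://localhost:8080/
```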
You can run these in any order.
```bash
scripts/stress.sh quarkus3/target/quarkus-app/quarkus-run.jar
scripts/stress.sh quarkus3-spring-compatibility/target/quarkus-app/quarkus-run.jar
scripts/stress.sh springboot3/target/springboot3.jar
```

For each test, you should see output like
```
  Thread Stats      Avg      Stdev       Max   +/- Stdev
    Latency       9.58ms     6.03ms   94.90ms     85.57%
    Req/Sec      9936.90    2222.61  10593.00      95.24
```

These scripts are being developed.
These tests are run on a regular schedule in Red Hat/IBM performance labs. The results are available in an internal Horreum instance. We are working on publishing these externally.
- Why Quarkus is Fast: https://quarkus.io/performance/
- How the Quarkus team measure performance (and some anti-patterns to be aware of): https://quarkus.io/guides/performance-measure