What is the reasoning behind the minimum requirement of 4GiB of memory? #14935
Community Support Policy

RabbitMQ version used: 4.1.2
Erlang version used: 27.3.x
Operating system (distribution) used: AlmaLinux
How is RabbitMQ deployed: Community Docker image
Steps to deploy RabbitMQ cluster: Not relevant
Steps to reproduce the behavior in question: Not relevant
rabbitmq-diagnostics status output, logs, rabbitmq.conf, advanced.config, application code and Kubernetes deployment file: not provided
What problem are you trying to solve?

According to the docs, RabbitMQ should not be run on less than 4 GiB of memory/RAM in production environments. However, with our expected load (5000 msg/s and a few tens of connections) it seems to run fine on far fewer resources; in our case a virtual environment on Kubernetes with, for example, 0.5 GiB of memory and 0.5 CPU. The documentation also suggests that the Erlang garbage collector may be the reason, but the notes there do not seem to warrant such a high memory requirement unless we hit the unlikely scenario where all queues are GC'ed at the same time and all queues actually hold their messages in memory. So this had us wondering: why is the recommendation 4 GiB? What are we overlooking? What kind of issues would we run into if we ran on less-than-minimal hardware?
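For illustration, here is a minimal sketch of what the two sizings might look like as Kubernetes container resources. The pod name, image tag and CPU figures are assumptions made up for the example; only the 0.5 GiB / 0.5 CPU figures and the 4 GiB minimum come from the question and the docs:

```yaml
# Illustrative sketch only, not the actual deployment manifest.
apiVersion: v1
kind: Pod
metadata:
  name: rabbitmq                      # hypothetical name
spec:
  containers:
    - name: rabbitmq
      image: rabbitmq:4.1-management  # community image; tag is an example
      resources:
        requests:
          memory: "4Gi"               # the documented production minimum
          cpu: "2"                    # assumption, not a documented figure
        limits:
          memory: "4Gi"
          cpu: "2"
        # The setup described above instead ran with roughly:
        #   memory: "512Mi"
        #   cpu: "500m"
```

Setting requests equal to limits at least gives the node a stable, predictable memory ceiling to work against.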
Replies: 2 comments
@gjsstigter we see people going into production with 1 core and 1 GiB of RAM way too often. You seem to be one of them if you consider 0.5 of a CPU core to be a reasonable amount for infrastructure software.

The most tragic aspect of this is that workloads and the number of apps change over time, but the hardware given to the cluster during its first week, without any metrics data, somehow stays the same until things really take a dive operationally (and then RabbitMQ gets the blame, not the capacity planners).
@gjsstigter when you capacity plan for a piece of critical infrastructure such as RabbitMQ, you need to model the absolute worst-case scenario (e.g. consumers or external dependencies down for hours, client apps in crash loops, and so on) and allocate resources accordingly. Yes, this typically feels wasteful to operators concerned with efficient resource use and costs, but for a messaging broker that typically sits right in the middle of an architecture, it is the right thing to do.
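To make that concrete with a rough back-of-the-envelope sketch (the message size and outage duration here are assumptions; only the 5000 msg/s figure comes from the question above): at 5000 msg/s with, say, 1 KiB messages, a four-hour consumer outage accumulates roughly 5000 × 1 KiB × 3600 × 4 ≈ 69 GiB of backlog. Most of that will end up on disk rather than in memory, but the connection churn, in-flight messages and the work of draining such a backlog once consumers return are exactly the kind of load that a node sized for the steady state alone will struggle with.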