There is little documentation on Java/OrientDB memory configuration parameters (Google searches and ChatGPT research both turn up a lot of conflicting information), so I'm hoping to get some feedback from real-world experience. The relevant settings I've "derived" for a 32G and a 64G server are shown below. I welcome any comments, questions, or feedback, including suggestions of other settings that should be considered.

(Settings table with columns: 32G Server | 64G Server | Notes)
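For concreteness, settings of this kind are usually handed to the server JVM via the `ORIENTDB_OPTS_MEMORY` variable that the stock `server.sh` honors. The option names in the sketch below are standard JVM/OrientDB settings, but the sizes are hypothetical placeholders, not the derived values from the table and not a recommendation:

```
# Hypothetical 32G-server sizing -- option names are real JVM/OrientDB
# settings, but every value here is a placeholder only.
export ORIENTDB_OPTS_MEMORY="-Xms4g -Xmx4g \
  -XX:MaxDirectMemorySize=16g \
  -Dstorage.diskCache.bufferSize=12288"
# Note: storage.diskCache.bufferSize is specified in MB.
```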
I'm running OrientDB in a Docker container (orientdb:3.2.38) on a VM with 4 CPUs and 32G of memory. Are there any suggestions for this configuration?
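For reference, the stock `orientdb` images read their JVM memory flags from the `ORIENTDB_OPTS_MEMORY` environment variable, so sizing can be experimented with at `docker run` time. The heap/direct-memory split below is purely illustrative for a 32G host, not a vetted recommendation:

```
# Illustrative only: the sizes are placeholders for a 32G VM.
docker run -d --name orientdb \
  -p 2424:2424 -p 2480:2480 \
  -e ORIENTDB_ROOT_PASSWORD=changeme \
  -e ORIENTDB_OPTS_MEMORY="-Xms8g -Xmx8g -XX:MaxDirectMemorySize=16g" \
  orientdb:3.2.38
```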
We're running 32GB production instances with the following config:

We pre-size the Java heap (`-Xmx` and `-Xms`), and we pre-touch (`-XX:+AlwaysPreTouch`) and pre-allocate (`-Dmemory.directMemory.preallocate=true`) direct memory to avoid surprises, but we don't limit direct memory allocation at the JVM level. This sizing is intentionally quite conservative; we're now getting around to instrumenting disk cache and direct memory allocation so we can track them in our metrics and push them a bit harder.

This has been pretty reliable with respect to OOM-killer issues and overall performance under significant transaction loads.
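Put together, those options look roughly like this (the `8g` heap figure is a stand-in, not our actual production value; the flag names are the ones described above):

```
# Heap is pre-sized and pre-touched; OrientDB's direct memory is
# pre-allocated.  The 8g figure is a stand-in, not a recommendation.
JVM_OPTS="-Xms8g -Xmx8g \
  -XX:+AlwaysPreTouch \
  -Dmemory.directMemory.preallocate=true"
# Deliberately no -XX:MaxDirectMemorySize -- direct memory allocation
# is left uncapped at the JVM level.
```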
Aside from the usual OOM caveats (e.g. the exact JVM and Linux overheads are somewhat unknowable, so we have swap configured at low priority in case Linux gets over-eager with the OOM killer, and we protect the OrientDB processes from being OOM-killer victims), there are some OrientDB-specific issues to be careful of. Specifically, the write cache is per-database, but it is sized as a proportion of the (server-scoped) disk cache configuration: e.g. a 1000MB disk cache at 25% write cache gives 750MB of read cache for the server but 250MB of write cache per DB, so if you put 100 DBs on the server, the aggregate write cache can be more than you intended or can handle.
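A sketch of that arithmetic, assuming `storage.diskCache.bufferSize` (MB) and `storage.diskCache.writeCachePart` (%) are the two knobs involved; the numbers are the ones from the example above:

```
# Server-scoped disk cache, in MB, and the write-cache proportion.
-Dstorage.diskCache.bufferSize=1000
-Dstorage.diskCache.writeCachePart=25

# Read cache (shared, server-wide):  1000MB * 75% = 750MB
# Write cache (PER DATABASE):        1000MB * 25% = 250MB each
# With 100 DBs: up to 100 * 250MB = 25,000MB of write cache --
# far beyond the 1000MB disk-cache budget.
```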