docker-compose.yml: n8n-import deletes the credentials in localhost n8n #75
Replies: 2 comments
-

That command should only import the credentials. Did you change the original credentials, or did you add new ones? It might work better if you rename the credentials when you change them.
-

Open `/opt/n8n/data/config` (e.g. `sudo nano /opt/n8n/data/config`) and check the ENCRYPTION_KEY entry, then open `~/n8n/docker-compose.yml` (`nano ~/n8n/docker-compose.yml`) and check the ENCRYPTION_KEY row there. The two values must be exactly the same on both sides. Also check the quoting: whether each side uses single quotes (') or double quotes ("), they must match.
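As an illustration of the comparison described above, the sketch below extracts the key from both files and compares them. The file contents, paths, and field names here are demo stand-ins and assumptions about a typical setup (a JSON `encryptionKey` field in the config file, an `N8N_ENCRYPTION_KEY=` line in the compose file); point the `sed` commands at your real files instead.

```shell
# Demo files standing in for /opt/n8n/data/config and ~/n8n/docker-compose.yml.
mkdir -p /tmp/n8n-demo
printf '{"encryptionKey": "abc123"}\n' > /tmp/n8n-demo/config
printf '      - N8N_ENCRYPTION_KEY=abc123\n' > /tmp/n8n-demo/docker-compose.yml

# Extract the key from the JSON config file.
key_config=$(sed -n 's/.*"encryptionKey": *"\([^"]*\)".*/\1/p' /tmp/n8n-demo/config)

# Extract the key from the compose file and strip any surrounding quotes.
key_compose=$(sed -n 's/.*N8N_ENCRYPTION_KEY=\(.*\)/\1/p' /tmp/n8n-demo/docker-compose.yml | tr -d '"'\''')

if [ "$key_config" = "$key_compose" ]; then
  echo "keys match"
else
  echo "MISMATCH: '$key_config' vs '$key_compose'"
fi
```

With identical keys on both sides this prints `keys match`; any difference, including a quoting difference, shows up as a mismatch.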
-

I'm new to this and it took me a while to figure out, but docker-compose.yml does something I don't find desirable: the command under the n8n-import service deletes the credentials you created in your local n8n instance on every start-up:

```yaml
n8n-import:
  <<: *service-n8n
  hostname: n8n-import
  container_name: n8n-import
  entrypoint: /bin/sh
  command:
    - "-c"
    - "n8n import:credentials --separate --input=/demo-data/credentials && n8n import:workflow --separate --input=/demo-data/workflows"
  volumes:
    - ./n8n/demo-data:/demo-data
  depends_on:
    postgres:
      condition: service_healthy
```

So I commented out the command lines in question and added a new command line:

```yaml
n8n-import:
  <<: *service-n8n
  hostname: n8n-import
  container_name: n8n-import
  entrypoint: /bin/sh
  # command:
  #   - "-c"
  #   - "n8n import:credentials --separate --input=/demo-data/credentials && n8n import:workflow --separate --input=/demo-data/workflows"
  command: ["sleep", "infinity"]  # container keeps running but imports nothing
  volumes:
    - ./n8n/demo-data:/demo-data
  depends_on:
    postgres:
      condition: service_healthy
```

Also, the x-init-ollama command pulled llama3.2 on every start-up, which in my case was bad: I don't have much VRAM and am trying to run smaller models, so I changed that one, too:

```yaml
x-init-ollama: &init-ollama
  # ...
  command: ["sleep", "infinity"]
  # - "-c"
  # - "sleep 3; ollama pull llama3.2"
```

Hope this helps somebody else!
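If you want to keep the demo import available without it running on every start-up, one alternative to overriding the command with `sleep` is to gate the import service behind a Compose profile. This is a sketch, not what the starter kit ships, and it assumes a Docker Compose version with profiles support:

```yaml
n8n-import:
  <<: *service-n8n
  hostname: n8n-import
  container_name: n8n-import
  entrypoint: /bin/sh
  command:
    - "-c"
    - "n8n import:credentials --separate --input=/demo-data/credentials && n8n import:workflow --separate --input=/demo-data/workflows"
  profiles: ["import"]  # skipped unless this profile is explicitly activated
  volumes:
    - ./n8n/demo-data:/demo-data
  depends_on:
    postgres:
      condition: service_healthy
```

With this, a plain `docker compose up` skips the service entirely, and `docker compose --profile import up n8n-import` runs the one-shot import only when you ask for it.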