# Shuffle on Google Cloud Platform

This module deploys Shuffle, an open-source Security Orchestration, Automation and Response (SOAR) platform, on Google Cloud Platform using Terraform.
Shuffle helps security teams automate repetitive tasks and connect different security tools through a visual workflow editor. This deployment creates a highly available Docker Swarm cluster with automatic NFS configuration, OpenSearch for data storage, and load-balanced frontend/backend services.
## Features

- Multi-node Docker Swarm cluster (1-10 nodes, all as managers)
- Automatic NFS server configuration for shared storage
- OpenSearch 3.0.0 for data persistence and search
- Load-balanced services with Nginx
- Auto-scaling based on node count
- Distributed deployment across zones within a region
## Prerequisites

- Google Cloud project with billing enabled
- Required APIs enabled:
  - Compute Engine API
  - Cloud Logging API (optional)
  - Cloud Monitoring API (optional)
- Sufficient IAM permissions:
  - `roles/compute.instanceAdmin.v1`
  - `roles/compute.networkAdmin`
  - `roles/compute.securityAdmin`
  - `roles/iam.serviceAccountUser`
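If the required APIs are not yet enabled, they can also be turned on from Terraform itself. This is an illustrative sketch using the standard `google_project_service` resource; the project ID is a placeholder and this block is not part of the module:

```hcl
# Enable the APIs this module depends on (illustrative, adjust as needed).
resource "google_project_service" "required" {
  for_each = toset([
    "compute.googleapis.com",    # Compute Engine API (required)
    "logging.googleapis.com",    # Cloud Logging API (optional)
    "monitoring.googleapis.com", # Cloud Monitoring API (optional)
  ])

  project = "your-project-id"
  service = each.value

  # Leave the API enabled if this resource is ever destroyed.
  disable_on_destroy = false
}
```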
## Usage

### Basic deployment (single node)

```hcl
module "shuffle" {
  source = "./terraform"

  project_id               = "your-project-id"
  goog_cm_deployment_name  = "shuffle-deployment"
  region                   = "us-central1"
  node_count               = 1
  machine_type             = "e2-standard-2"
  shuffle_default_username = "[email protected]"
}
```

### High-availability deployment (three nodes)

```hcl
module "shuffle" {
  source = "./terraform"

  project_id               = "your-project-id"
  goog_cm_deployment_name  = "shuffle-ha-deployment"
  region                   = "us-central1"
  node_count               = 3
  machine_type             = "e2-standard-4"
  boot_disk_size           = 250
  boot_disk_type           = "pd-ssd"
  shuffle_default_username = "[email protected]"
  environment              = "production"
  enable_cloud_logging     = true
  enable_cloud_monitoring  = true
}
```

### Production deployment (five nodes)

```hcl
module "shuffle" {
  source = "./terraform"

  project_id              = "your-project-id"
  goog_cm_deployment_name = "shuffle-prod"
  region                  = "us-east1"
  node_count              = 5
  machine_type            = "e2-standard-4"
  boot_disk_size          = 500
  boot_disk_type          = "pd-balanced"

  # Network configuration
  subnet_cidr           = "10.100.0.0/16"
  external_access_cidrs = "203.0.113.0/24,198.51.100.0/24"
  ssh_source_ranges     = "203.0.113.0/24"

  # Admin configuration
  shuffle_default_username = "[email protected]"

  # Monitoring
  environment             = "production"
  enable_cloud_logging    = true
  enable_cloud_monitoring = true
}
```

## Accessing Shuffle

After deployment completes (approximately 10-15 minutes), access Shuffle:
1. Get the Frontend URL from the outputs:

   ```bash
   terraform output shuffle_frontend_url
   ```

2. Retrieve the admin password:

   ```bash
   terraform output admin_password
   ```

3. Open the web interface at the displayed URL (port 3001).

4. Log in with:
   - Username: your configured email address
   - Password: from the output above
## Verifying the Cluster

```bash
# SSH to the primary manager
gcloud compute ssh shuffle-vm-manager-1 --zone=<zone>

# List swarm nodes
docker node ls

# List Shuffle services
docker stack services shuffle

# View service logs
docker service logs shuffle_frontend
docker service logs shuffle_backend
docker service logs shuffle_orborus

# Check OpenSearch cluster health
curl http://localhost:9200/_cluster/health?pretty
```

## Security

- External Access: Only port 3001 (Shuffle Frontend) is exposed externally
- HTTPS: Port 3443 is configured internally but not exposed for security
- OpenSearch: Accessible only within the VPC (port 9200)
- NFS: Internal network communication only
- SSH Access: Configurable via `ssh_source_ranges`
- Firewall: Restricted by `external_access_cidrs`
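For context, the UI restriction above corresponds to a firewall rule along these lines. This is purely illustrative of the pattern; the resource and network names here are placeholders, not the module's actual names:

```hcl
# Illustrative only: restrict the Shuffle UI (port 3001) to the CIDRs
# passed via external_access_cidrs. Names below are hypothetical.
resource "google_compute_firewall" "shuffle_ui" {
  name    = "shuffle-allow-ui"
  network = "shuffle-network"

  allow {
    protocol = "tcp"
    ports    = ["3001"]
  }

  source_ranges = ["203.0.113.0/24", "198.51.100.0/24"]
}
```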
## Inputs

| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| project_id | The Google Cloud project ID | string | n/a | yes |
| goog_cm_deployment_name | Deployment name from Google Cloud Marketplace | string | n/a | yes |
| shuffle_default_username | Default admin username for Shuffle (email format recommended) | string | n/a | yes |
| region | The Google Cloud region for deployment (nodes will be distributed across zones within this region) | string | "us-central1" | no |
| node_count | Total number of nodes in the Shuffle cluster (min 1, max 10). Single node for testing, 3+ nodes for production HA. | number | 1 | no |
| machine_type | GCP machine type for Shuffle nodes. e2-standard-2 (2 vCPUs, 8GB RAM) recommended for single node, e2-standard-4 for multi-node. | string | "e2-standard-2" | no |
| boot_disk_size | Boot disk size in GB | number | 120 | no |
| boot_disk_type | Boot disk type | string | "pd-standard" | no |
| source_image | Source image for VMs. If empty, uses Ubuntu 22.04 LTS | string | "" | no |
| subnet_cidr | CIDR range for the Shuffle subnet | string | "10.224.0.0/16" | no |
| external_access_cidrs | Comma-separated CIDR ranges allowed to access Shuffle UI (port 3001) | string | "0.0.0.0/0" | no |
| enable_ssh | Enable SSH access to nodes | bool | true | no |
| ssh_source_ranges | Comma-separated CIDR ranges allowed for SSH access | string | "0.0.0.0/0" | no |
| environment | Environment label (dev, staging, production) | string | "production" | no |
| enable_cloud_logging | Enable Google Cloud Logging | bool | true | no |
| enable_cloud_monitoring | Enable Google Cloud Monitoring | bool | true | no |
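The node_count bounds (min 1, max 10) could be enforced inside the module with a Terraform variable validation. The following is a sketch of that pattern, not the module's actual source:

```hcl
# Sketch of the node_count constraint as a variable validation
# (illustrative; the real module may implement this differently).
variable "node_count" {
  description = "Total number of nodes in the Shuffle cluster"
  type        = number
  default     = 1

  validation {
    condition     = var.node_count >= 1 && var.node_count <= 10
    error_message = "node_count must be between 1 and 10."
  }
}
```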
## Outputs

| Name | Description |
|---|---|
| deployment_name | Name of the deployment |
| shuffle_frontend_url | URL to access Shuffle Frontend (HTTP on port 3001) |
| opensearch_internal_url | Internal URL to access OpenSearch (not exposed externally) |
| manager_instances | List of manager instance details (name, IPs, zone) |
| total_nodes | Total number of nodes in the cluster |
| manager_nodes | Number of manager nodes (same as total_nodes) |
| network_name | Name of the VPC network |
| subnet_name | Name of the subnet |
| nfs_server_ip | IP address of the NFS server (primary manager) |
| swarm_join_command_manager | Command to join swarm as manager (retrieve from primary manager) |
| swarm_join_command_worker | Command to join swarm as worker (retrieve from primary manager) |
| admin_username | Default admin username for Shuffle |
| admin_password | Default admin password for Shuffle (auto-generated, sensitive) |
| post_deployment_instructions | Instructions after deployment |
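To surface these values from a calling configuration, selected outputs can be re-exported. A minimal sketch, assuming the module block is named `shuffle` as in the usage examples:

```hcl
# Re-export module outputs so they appear in `terraform output`.
output "shuffle_frontend_url" {
  value = module.shuffle.shuffle_frontend_url
}

# admin_password is sensitive, so the wrapping output must be too.
output "shuffle_admin_password" {
  value     = module.shuffle.admin_password
  sensitive = true
}
```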
## Troubleshooting

If services don't start automatically:

```bash
# SSH to primary manager
gcloud compute ssh shuffle-vm-manager-1 --zone=<zone>

# Check startup logs
cat /var/log/shuffle-startup.log

# Manually trigger deployment
cd /opt/shuffle
sudo ./deploy.sh
```

Check service status:

```bash
# Check all running services
docker service ls

# Check specific service health
docker service ps shuffle_frontend --no-trunc

# View recent logs
docker service logs --tail 100 shuffle_backend
```

Test component health:

```bash
# Test OpenSearch
curl http://localhost:9200/_cluster/health

# Test Frontend
curl http://localhost:3001/api/v1/health

# Check NFS mounts
showmount -e localhost
```

## Scaling

To scale the cluster:

1. Update `node_count` in your Terraform configuration
2. Run `terraform apply`
3. New nodes will automatically join the swarm

Note: Scaling down requires manual node removal:

```bash
docker node rm <node-name>
```

## Data Storage

- Database: OpenSearch data stored on NFS (`/srv/nfs/shuffle-database`)
- Applications: App data on NFS (`/srv/nfs/shuffle-apps`)
- Files: User files on NFS (`/srv/nfs/shuffle-files`)
## Backup

```bash
# Create snapshot of boot disks
gcloud compute disks snapshot <disk-name> --zone=<zone>

# Backup NFS data
tar -czf shuffle-backup-$(date +%F).tar.gz /srv/nfs/
```

## Upgrading

To upgrade Shuffle:

```bash
# SSH to primary manager
gcloud compute ssh shuffle-vm-manager-1 --zone=<zone>

# Pull latest images
docker service update --image ghcr.io/shuffle/shuffle-frontend:latest shuffle_frontend
docker service update --image ghcr.io/shuffle/shuffle-backend:latest shuffle_backend
docker service update --image ghcr.io/shuffle/shuffle-orborus:latest shuffle_orborus
```

## Resource Requirements

Single node (testing):

- CPU: 2 vCPUs
- RAM: 8 GB
- Disk: 120 GB
- Machine Type: e2-standard-2

Multi-node (production, per node):

- CPU: 4 vCPUs per node
- RAM: 16 GB per node
- Disk: 250+ GB per node
- Machine Type: e2-standard-4 or higher
## Support

- Shuffle Documentation: https://shuffler.io/docs
- GitHub Issues: https://github.com/Shuffle/Shuffle/issues
- Community Discord: https://discord.gg/B2CBzUm
- Support Email: [email protected]
## Requirements

| Name | Version |
|---|---|
| terraform | >= 1.0 |
| google | ~> 5.0 |
| random | ~> 3.1 |
| null | ~> 3.0 |

## Providers

| Name | Version |
|---|---|
| google | ~> 5.0 |
| random | ~> 3.1 |
| null | ~> 3.0 |