diff --git a/docs/deployment/assets/integration-manager/composer-required.png b/docs/deployment/assets/integration-manager/composer-required.png
new file mode 100644
index 0000000..283e101
Binary files /dev/null and b/docs/deployment/assets/integration-manager/composer-required.png differ
diff --git a/docs/deployment/assets/integration-manager/ee-required.png b/docs/deployment/assets/integration-manager/ee-required.png
new file mode 100644
index 0000000..64aea76
Binary files /dev/null and b/docs/deployment/assets/integration-manager/ee-required.png differ
diff --git a/docs/deployment/ecosystem/collectors.md b/docs/deployment/ecosystem/collectors.md
index fa58171..6eed870 100644
--- a/docs/deployment/ecosystem/collectors.md
+++ b/docs/deployment/ecosystem/collectors.md
@@ -4,46 +4,33 @@
 If you want to learn more about the concept and features of collectors, you can
 have more info [here](../../usage/collectors.md).

-## Installation
+!!! question "Collectors list"

-### External (Python) collectors
+    Are you looking for the available collectors? The list is in the [OpenAEV Ecosystem](https://filigran.notion.site/OpenAEV-Ecosystem-30d8eb73d7d04611843e758ddef8941b).

-#### Configuration
-All external collectors have to be able to access the OpenAEV API. To allow this connection, they have 2 mandatory configuration parameters, the `OPENAEV_URL` and the `OPENAEV_TOKEN`. In addition to these 2 parameters, collectors have other mandatory parameters that need to be set in order to get them work.
+## Installing a collector

-!!! info "Collector tokens"
+There are multiple ways to deploy a collector from OpenAEV:

-    You can use your administrator token or create another administrator service account to put in your collectors. It is not necessary to have one dedicated user for each collector.
+- Integration Manager (Recommended) +- Docker deployment +- Manual deployment -Here is an example of a collector `docker-compose.yml` file: -```yaml -- OPENAEV_URL=http://localhost -- OPENAEV_TOKEN=ChangeMe -- COLLECTOR_ID=ChangeMe # Specify a valid UUIDv4 of your choice -- "COLLECTOR_NAME=MITRE ATT&CK" -- COLLECTOR_LOG_LEVEL=error -``` +!!! info -Here is an example in a collector `config.yml` file: + All collectors require access to the OpenAEV API. See [Configuration](#configuration) for required parameters. -```yaml -openaev: - url: 'http://localhost:3001' - token: 'ChangeMe' +### Integration Manager (Recommended) +The easiest way to deploy collectors is through the Integration Manager, which allows automatic deployment directly from the OpenAEV interface. -collector: - id: 'ChangeMe' - name: 'MITRE ATT&CK' - log_level: 'info' -``` - -## Docker activation +πŸ‘‰ See the [Integration Manager documentation](integration-manager/overview.md) for detailed instructions. -You can either directly run the Docker image of collectors or add them to your current `docker-compose.yml` file. -### Add a collector to your deployment +### Docker Deployment +Several options are available for Docker deployment: +#### Add a collector to your existing deployment For instance, to enable the MITRE ATT&CK collector, you can add a new service to your `docker-compose.yml` file: ```docker @@ -57,10 +44,10 @@ For instance, to enable the MITRE ATT&CK collector, you can add a new service to - COLLECTOR_LOG_LEVEL=error restart: always ``` +Note: Collector images and available versions can be found on Docker Hub. -### Launch a standalone collector - -To launch standalone collector, you can use the `docker-compose.yml` file of the collector itself. 
Just download the latest [release](https://github.com/OpenAEV-Platform/collectors/releases) and start the collector:
+#### Launch a standalone collector
+To launch a standalone collector, you can use the `docker-compose.yml` file of the collector itself. Just download the latest [release](https://github.com/OpenAEV-Platform/collectors/releases) and start the collector:

 ```
 $ wget https://github.com/OpenAEV-Platform/collectors/archive/{RELEASE_VERSION}.zip
@@ -74,9 +61,8 @@ Change the configuration in the `docker-compose.yml` according to the parameters
 $ docker compose up
 ```

-## Manual activation
-
-If you want to manually launch collector, you just have to install Python 3 and pip3 for dependencies:
+### Manual deployment
+If you want to manually launch a collector without Docker, you just have to install Python 3 and pip3 for dependencies:

 ```
 $ apt install python3 python3-pip
@@ -97,12 +83,59 @@ $ pip3 install -r requirements.txt
 $ cp config.yml.sample config.yml
 ```

-Change the `config.yml` content according to the parameters of the platform and of the targeted service and launch the collector:
+Change the `config.yml` content according to the parameters of the platform and of the targeted service.
+For example:
+
+```yaml
+openaev:
+  url: 'http://localhost:3001'
+  token: 'ChangeMe'
+
+collector:
+  id: 'ChangeMe'
+  name: 'MITRE ATT&CK'
+  log_level: 'info'
+```
+
+Finally, launch the collector:

 ```
 $ python3 openaev_mitre.py
 ```

+### Configuration
+
+All external collectors have to be able to access the OpenAEV API. To allow this connection, they have two mandatory configuration parameters: `OPENAEV_URL` and `OPENAEV_TOKEN`. In addition to these two parameters, collectors have other mandatory parameters that need to be set to make them work.
+
+!!! info "Collector tokens"
+
+    You can use your administrator token or create another administrator service account to put in your collectors. It is not necessary to have one dedicated user for each collector.
+
+Here is an example of a collector `docker-compose.yml` file:
+```yaml
+- OPENAEV_URL=http://localhost
+- OPENAEV_TOKEN=ChangeMe
+- COLLECTOR_ID=ChangeMe # Specify a valid UUIDv4 of your choice
+- "COLLECTOR_NAME=MITRE ATT&CK"
+- COLLECTOR_LOG_LEVEL=error
+```
+
+Here is an example of a collector `config.yml` file:
+
+```yaml
+openaev:
+  url: 'http://localhost:3001'
+  token: 'ChangeMe'
+
+collector:
+  id: 'ChangeMe'
+  name: 'MITRE ATT&CK'
+  log_level: 'info'
+```
+
 ## Collectors status

 The collector status can be displayed in the dedicated section of the platform available in Integration > collectors. You will be able to see the statistics of the RabbitMQ queue of the collector:
diff --git a/docs/deployment/ecosystem/injectors.md b/docs/deployment/ecosystem/injectors.md
index a224de3..9f7878c 100644
--- a/docs/deployment/ecosystem/injectors.md
+++ b/docs/deployment/ecosystem/injectors.md
@@ -17,53 +17,26 @@
 just add the proper configuration parameters in your platform configuration.

 ### External (Python) injectors

-#### Configuration
+There are multiple ways to deploy external injectors from OpenAEV:

-All external injectors have to be able to access the OpenAEV API. To allow this connection, they have 2 mandatory configuration parameters, the `OPENAEV_URL` and the `OPENAEV_TOKEN`. In addition to these 2 parameters, injectors have other mandatory parameters that need to be set in order to get them work.
+- Integration Manager (Recommended)
+- Docker deployment
+- Manual deployment

-!!! info "Injector tokens"
+!!! info

-    You can use your administrator token or create another administrator service account to put in your injectors. It is not necessary to have one dedicated user for each injector.
+    ⚠️ All external injectors must be able to access the OpenAEV API. They require two mandatory configuration parameters: OPENAEV_URL and OPENAEV_TOKEN. In addition, each injector has specific mandatory parameters that need to be configured.
-Here is an example of a injector `docker-compose.yml` file: -```yaml -- OPENAEV_URL=http://localhost -- OPENAEV_TOKEN=ChangeMe -- INJECTOR_ID=ChangeMe # Specify a valid UUIDv4 of your choice -- "INJECTOR_NAME=HTTP query" -- INJECTOR_LOG_LEVEL=error -``` +#### Integration Manager (Recommended) +The easiest way to deploy injectors is through the Integration Manager, which allows automatic deployment directly from the OpenAEV interface. -Here is an example in a injector `config.yml` file: +πŸ‘‰ See the [Integration Manager documentation](integration-manager/overview.md) for detailed instructions. -```yaml -openaev: - url: 'http://localhost:3001' - token: 'ChangeMe' -injector: - id: 'ChangeMe' - name: 'HTTP query' - log_level: 'info' -``` - -#### Networking - -Be aware that all injectors are reaching RabbitMQ based the RabbitMQ configuration provided by the OpenAEV platform. The injector must be able to reach RabbitMQ on the specified hostname and port. If you have a specific Docker network configuration, please be sure to adapt your `docker-compose.yml` file in such way that the injector container gets attached to the OpenAEV Network, e.g.: - -```yaml -networks: - default: - external: true - name: openaev-docker_default -``` - -## Docker activation - -You can either directly run the Docker image of injectors or add them to your current `docker-compose.yml` file. - -### Add an injector to your deployment +#### Docker Deployment +Several options are available for Docker deployment: +##### Add an injector to your existing deployment For instance, to enable the HTTP query injector, you can add a new service to your `docker-compose.yml` file: ```docker @@ -77,9 +50,9 @@ For instance, to enable the HTTP query injector, you can add a new service to yo - INJECTOR_LOG_LEVEL=error restart: always ``` +Note: Injector images and available versions can be found on Docker Hub. 
-### Launch a standalone injector
-
+##### Launch a standalone injector
 To launch a standalone injector, you can use the `docker-compose.yml` file of the injector itself. Just download the latest [release](https://github.com/OpenAEV-Platform/injectors/releases) and start the injector:

 ```
 $ wget https://github.com/OpenAEV-Platform/injectors/archive/{RELEASE_VERSION}.zip
@@ -94,7 +67,7 @@ Change the configuration in the `docker-compose.yml` according to the parameters
 $ docker compose up
 ```

-## Manual activation
+#### Manual deployment

 If you want to manually launch an injector, you just have to install Python 3 and pip3 for dependencies:

 ```
@@ -117,12 +90,66 @@ $ pip3 install -r requirements.txt
 $ cp config.yml.sample config.yml
 ```

-Change the `config.yml` content according to the parameters of the platform and of the targeted service and launch the injector:
+Change the `config.yml` content according to the parameters of the platform and of the targeted service.
+For example:
+
+```yaml
+openaev:
+  url: 'http://localhost:3001'
+  token: 'ChangeMe'
+
+injector:
+  id: 'ChangeMe'
+  name: 'HTTP query'
+  log_level: 'info'
+```
+
+Finally, launch the injector:

 ```
 $ python3 openaev_http.py
 ```

+#### Configuration
+
+All external injectors have to be able to access the OpenAEV API. To allow this connection, they have two mandatory configuration parameters: `OPENAEV_URL` and `OPENAEV_TOKEN`. In addition to these two parameters, injectors have other mandatory parameters that need to be set to make them work.
+
+!!! info "Injector tokens"
+
+    You can use your administrator token or create another administrator service account to put in your injectors. It is not necessary to have one dedicated user for each injector.
+
+Here is an example of an injector `docker-compose.yml` file:
+```yaml
+- OPENAEV_URL=http://localhost
+- OPENAEV_TOKEN=ChangeMe
+- INJECTOR_ID=ChangeMe # Specify a valid UUIDv4 of your choice
+- "INJECTOR_NAME=HTTP query"
+- INJECTOR_LOG_LEVEL=error
+```
+
+Here is an example of an injector `config.yml` file:
+
+```yaml
+openaev:
+  url: 'http://localhost:3001'
+  token: 'ChangeMe'
+
+injector:
+  id: 'ChangeMe'
+  name: 'HTTP query'
+  log_level: 'info'
+```
+
+#### Networking
+
+Be aware that all injectors reach RabbitMQ based on the RabbitMQ configuration provided by the OpenAEV platform. The injector must be able to reach RabbitMQ on the specified hostname and port. If you have a specific Docker network configuration, please be sure to adapt your `docker-compose.yml` file in such a way that the injector container gets attached to the OpenAEV network, e.g.:
+
+```yaml
+networks:
+  default:
+    external: true
+    name: openaev-docker_default
+```
+
 ## Injectors status

 The injector status can be displayed in the dedicated section of the platform available in Integration > injectors. You will be able to see the statistics of the RabbitMQ queue of the injector:
diff --git a/docs/deployment/ecosystem/integration-manager/configuration.md b/docs/deployment/ecosystem/integration-manager/configuration.md
new file mode 100644
index 0000000..1910924
--- /dev/null
+++ b/docs/deployment/ecosystem/integration-manager/configuration.md
@@ -0,0 +1,175 @@
+# Configuration reference
+
+XTM Composer uses a layered configuration system with support for YAML files and environment variables. Environment variables override file-based configuration.
+
+## Configuration priority
+
+1. Environment variables (highest priority)
+2. Environment-specific config file (e.g., `production.yaml`)
+3. Default config file (`default.yaml`)
+
+## Environment variable format
+
+All environment variables use double underscores (`__`) to separate nested configuration levels.
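The naming rule can be sketched as a small shell helper (illustrative only; Composer performs this mapping internally):

```shell
# Convert a nested configuration path to its environment-variable form:
# dots become double underscores and letters are uppercased.
to_env_var() {
  printf '%s\n' "$1" | sed 's/\./__/g' | tr '[:lower:]' '[:upper:]'
}

to_env_var manager.logger.level   # prints MANAGER__LOGGER__LEVEL
```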
+ +Example: `manager.logger.level` becomes `MANAGER__LOGGER__LEVEL` + +## Platform + +### Manager + +#### Basic parameters + +| Parameter | Environment variable | Default value | Description | +|:-----------------------------------|:----------------------------------------|:------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------| +| manager:id | MANAGER__ID | default-manager-id | Unique identifier for this manager instance | +| manager:name | MANAGER__NAME | Filigran integration manager | Human-readable name for the manager | +| manager:execute_schedule | MANAGER__EXECUTE_SCHEDULE | 10 | Interval in seconds between execution cycles | +| manager:ping_alive_schedule | MANAGER__PING_ALIVE_SCHEDULE | 60 | Interval in seconds between alive ping messages | +| manager:credentials_key | MANAGER__CREDENTIALS_KEY | | RSA private key content (4096-bit recommended). Use for direct key embedding. One of `credentials_key` or `credentials_key_filepath` is required | +| manager:credentials_key_filepath | MANAGER__CREDENTIALS_KEY_FILEPATH | | Path to RSA private key file. Takes priority over `credentials_key` if both are set. 
One of `credentials_key` or `credentials_key_filepath` is required | + +#### Logging + +| Parameter | Environment variable | Default value | Description | +|:----------------------------|:--------------------------------|:--------------|:--------------------------------------------------------------------| +| manager:logger:level | MANAGER__LOGGER__LEVEL | info | Logging verbosity level (`trace`, `debug`, `info`, `warn`, `error`) | +| manager:logger:format | MANAGER__LOGGER__FORMAT | json | Log output format (`json`, `pretty`) | +| manager:logger:directory | MANAGER__LOGGER__DIRECTORY | `true` | Enable logging to directory/file | +| manager:logger:console | MANAGER__LOGGER__CONSOLE | `true` | Enable logging to console/stdout | + +#### Debug + +| Parameter | Environment variable | Default value | Description | +|:---------------------------------------|:-----------------------------------------|:--------------|:------------------------------------------------------------------------| +| manager:debug:show_env_vars | MANAGER__DEBUG__SHOW_ENV_VARS | `false` | Display environment variables at startup (excludes sensitive data) | +| manager:debug:show_sensitive_env_vars | MANAGER__DEBUG__SHOW_SENSITIVE_ENV_VARS | `false` | Display sensitive environment variables at startup (tokens, keys, etc.) 
| + +### Dependencies + +#### OpenAEV + +| Parameter | Environment variable | Default value | Description | +|:---------------------------------|:----------------------------------|:------------------------------------|:-------------------------------------------------| +| openaev:enable | OPENAEV__ENABLE | `false` | Enable OpenAEV integration (Coming Soon) | +| openaev:url | OPENAEV__URL | http://host.docker.internal:4000 | OpenAEV platform URL (Coming Soon) | +| openaev:token | OPENAEV__TOKEN | ChangeMe | OpenAEV API authentication token (Coming Soon) | +| openaev:unsecured_certificate | OPENAEV__UNSECURED_CERTIFICATE | `false` | Allow self-signed SSL certificates (Coming Soon) | +| openaev:with_proxy | OPENAEV__WITH_PROXY | `false` | Use system proxy settings (Coming Soon) | +| openaev:logs_schedule | OPENAEV__LOGS_SCHEDULE | 10 | Log report interval in seconds (Coming Soon) | + +#### Proxy configuration + +| Parameter | Environment variable | Default value | Description | +|:----------------------------------|:---------------------------------|:--------------|:--------------------------------------------------------------------------------------------------| +| http_proxy | HTTP_PROXY | | Proxy URL for HTTP requests (e.g., `http://proxy:8080`) | +| https_proxy | HTTPS_PROXY | | Proxy URL for HTTPS requests (e.g., `http://proxy:8080`) | +| no_proxy | NO_PROXY | | Comma-separated list of hosts excluded from proxy (e.g., `localhost,127.0.0.1,internal.domain`) | +| https_proxy_ca | HTTPS_PROXY_CA | | CA certificates used to validate HTTPS proxy connections | +| https_proxy_reject_unauthorized | HTTPS_PROXY_REJECT_UNAUTHORIZED | `false` | If not false, validates the proxy certificate against the provided CA list | + +!!! note "Proxy certificate separation" + + Proxy TLS certificates are **independent** from OpenAEV HTTPS server certificates. 
+ + - For proxy connections β†’ use `https_proxy_ca` and `https_proxy_reject_unauthorized` + - For OpenAEV platform HTTPS β†’ use `app:https_cert:*` variables in the main OpenAEV configuration + +### Registry authentication + +| Parameter | Environment variable | Default value | Description | +|:------------------------------|:-----------------------------|:--------------|:----------------------------------------------------------------------------| +| registry:enable | REGISTRY__ENABLE | `false` | Enable authentication to a container registry | +| registry:url | REGISTRY__URL | | Registry endpoint (e.g., `https://registry.hub.docker.com`) | +| registry:username | REGISTRY__USERNAME | | Username for registry authentication | +| registry:password | REGISTRY__PASSWORD | | Password or token for registry authentication | +| registry:cache_ttl | REGISTRY__CACHE_TTL | 3600 | Time (in seconds) for caching registry authorization tokens | + +!!! note "Authentication cache" + + Composer caches registry authentication tokens to reduce the number of login requests. + Tokens are refreshed automatically when expired. 
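As an illustrative sketch (all values below are placeholders, not defaults), the registry parameters above could be set in a YAML configuration file like this:

```yaml
registry:
  enable: true
  url: https://registry.hub.docker.com
  username: my-registry-user       # placeholder
  password: ${REGISTRY_PASSWORD}   # reference an env variable rather than a literal
  cache_ttl: 3600
```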
+ +### Orchestration + +#### General settings + +| Parameter | Environment variable | Default value | Description | +|:----------------------------------|:----------------------------------------|:--------------|:-----------------------------------------------------------------------| +| openaev:daemon:selector | OPENAEV__DAEMON__SELECTOR | kubernetes | Container orchestration platform (`kubernetes`, `docker`, `portainer`) | + +#### Kubernetes + +| Parameter | Environment variable | Default value | Description | +|:------------------------------------------------|:--------------------------------------------------------------|:--------------|:----------------------------------------------------------| +| openaev:daemon:kubernetes:image_pull_policy | OPENAEV__DAEMON__KUBERNETES__IMAGE_PULL_POLICY | IfNotPresent | Image pull policy (`Always`, `IfNotPresent`, `Never`) | +| openaev:daemon:kubernetes:base_deployment | Not supported for complex objects | | Base Kubernetes Deployment manifest template | +| openaev:daemon:kubernetes:base_deployment_json | OPENAEV__DAEMON__KUBERNETES__BASE_DEPLOYMENT_JSON | | Base Deployment manifest as JSON string | + +#### Docker + +| Parameter | Environment variable | Default value | Description | +|:-----------------------------------|:---------------------------------------|:--------------|:---------------------------------------------------------------| +| openaev:daemon:docker:extra_hosts | OPENAEV__DAEMON__DOCKER__EXTRA_HOSTS | | Additional hosts entries for containers (array) | +| openaev:daemon:docker:network_mode | OPENAEV__DAEMON__DOCKER__NETWORK_MODE | bridge | Docker network mode (`bridge`, `host`, `none`, or custom) | +| openaev:daemon:docker:dns | OPENAEV__DAEMON__DOCKER__DNS | | Custom DNS servers for containers (array) | +| openaev:daemon:docker:privileged | OPENAEV__DAEMON__DOCKER__PRIVILEGED | `false` | Run containers in privileged mode | +| openaev:daemon:docker:cap_add | OPENAEV__DAEMON__DOCKER__CAP_ADD | | Linux 
capabilities to add (array) | +| openaev:daemon:docker:cap_drop | OPENAEV__DAEMON__DOCKER__CAP_DROP | | Linux capabilities to drop (array) | +| openaev:daemon:docker:shm_size | OPENAEV__DAEMON__DOCKER__SHM_SIZE | | Shared memory size in bytes | + +#### Portainer + +| Parameter | Environment variable | Default value | Description | +|:--------------------------------------|:-----------------------------------------|:-----------------------------------|:-------------------------------------------------------| +| openaev:daemon:portainer:api | OPENAEV__DAEMON__PORTAINER__API | https://host.docker.internal:9443 | Portainer API endpoint URL | +| openaev:daemon:portainer:api_key | OPENAEV__DAEMON__PORTAINER__API_KEY | ChangeMe | Portainer API authentication key | +| openaev:daemon:portainer:env_id | OPENAEV__DAEMON__PORTAINER__ENV_ID | 3 | Portainer environment ID | +| openaev:daemon:portainer:env_type | OPENAEV__DAEMON__PORTAINER__ENV_TYPE | docker | Portainer environment type (`docker`, `kubernetes`) | +| openaev:daemon:portainer:api_version | OPENAEV__DAEMON__PORTAINER__API_VERSION | v1.41 | Docker API version for Portainer | +| openaev:daemon:portainer:stack | OPENAEV__DAEMON__PORTAINER__STACK | | Portainer stack name for deployment | +| openaev:daemon:portainer:network_mode | OPENAEV__DAEMON__PORTAINER__NETWORK_MODE | | Network mode for Portainer-managed containers | + +## Environment configuration + +| Parameter | Environment variable | Default value | Description | +|:----------|:---------------------|:--------------|:-------------------------------------------------------------------------------| +| - | COMPOSER_ENV | production | Specifies which configuration file to load (e.g., `development`, `production`) | + +## Complete configuration example + +```yaml +# config/production.yaml +manager: + id: prod-manager-001 + name: Production XTM Manager + execute_schedule: 10 + ping_alive_schedule: 60 + credentials_key_filepath: /keys/private_key_4096.pem + logger: + 
level: info + format: json + directory: true + console: false + +openaev: + enable: true + url: https://openaev.example.com + token: ${OPENAEV_TOKEN} # Reference env variable + unsecured_certificate: false + with_proxy: false + logs_schedule: 10 + daemon: + selector: kubernetes + kubernetes: + image_pull_policy: IfNotPresent +``` + +## Security best practices + +1. **Never commit credentials**: Use environment variables or secure secret management +2. **Use file-based keys**: Prefer `credentials_key_filepath` over embedding keys +3. **Restrict file permissions**: Set key files to `600` permissions +4. **Rotate tokens regularly**: Update API tokens periodically +5. **Use TLS/SSL**: Always use HTTPS in production +6. **Limit debug output**: Disable `show_sensitive_env_vars` in production diff --git a/docs/deployment/ecosystem/integration-manager/installation.md b/docs/deployment/ecosystem/integration-manager/installation.md new file mode 100644 index 0000000..64a038c --- /dev/null +++ b/docs/deployment/ecosystem/integration-manager/installation.md @@ -0,0 +1,313 @@ +# Installation guide + +## System requirements + +### Runtime requirements + +#### Production environment +- **Kubernetes**: v1.24 or higher +- **Namespace**: Dedicated namespace for XTM Composer +- **RBAC**: Role-based access control for pod management + +#### Development environment +- **Docker**: v20.10 or higher +- **Portainer**: v2.0 or higher (recommended for container management) +- **Docker Compose**: v2.0 or higher (optional) + +### Security requirements + +- **RSA Private Key**: 4096-bit RSA private key for authentication +- **Network Access**: + - Connectivity to OpenCTI/OpenAEV instances + - Access to container orchestration API +- **Permissions**: + - Production: Kubernetes service account with appropriate RBAC + - Development: Docker socket access or Portainer API access + +## Installation methods + +Create a configuration file based on your environment or add extra environment variables in 
the following steps. +See [Configuration Reference](configuration.md) for more information on required configuration. + +## Production environment (Kubernetes) + +Note: The Kubernetes installation method described here assumes that OpenAEV is already deployed on a Kubernetes cluster. + +1. Create namespace: +```bash +kubectl create namespace xtm-composer +``` + +2. Create secret for RSA key: +```bash +# Generate key +openssl genrsa -out private_key_4096.pem 4096 + +# Create secret +kubectl create secret generic xtm-composer-keys \ + --from-file=private_key.pem=private_key_4096.pem \ + -n xtm-composer +``` + +3. Create ConfigMap for configuration: +```bash +kubectl create configmap xtm-composer-config \ + --from-file=default.yaml=config/default.yaml \ + -n xtm-composer +``` + +4. Create service account: + +XTM Composer uses a service account to have authorization to start new pods and deployments on the cluster. + +```bash +cat < Catalog** +- Use the search bar to find collectors, injectors and executors by name or description. You can also apply filters (e.g., by collector, executor or injector type). +- If a collector, injector or executor has already been deployed, a **badge** will appear on its **Deploy** button. + +## Deploying a collector, injector or executor +1. Click the **Deploy** button on an external collector, injector or executor card or from the detail view. A form will appear with required configuration fields. +2. Fill in the required options (you can also expand **Advanced options** to configure additional settings) + +!!! warning "Configuration information" + + - **Instance names**: Two instances can't share the same name + - **Validation error**: If a duplicate name is detected, a blocking error will prevent deployment until a unique name is provided + - **Confidence level**: set the desired confidence level for the service account. + - **API key** (encrypted and securely stored). 
    - **Additional options**: collector, injector or executor specific configuration.

![instance form](../../assets/integration-manager/instance-form-sample.png)

3. Click **Create**. Once the instance is created, you will be redirected to the instance view.

!!! note "Instance created"

    Newly created external collectors, injectors or executors are not started automatically.
    You can still update their configuration via the **Update** action.

4. When ready, click **Start** to run it.
5. From the instance view, you can also check the **Logs** tab. The displayed logs depend on the logging level configured.

## Managing the instances

- Different injector, collector or executor types are identified in the catalog:
    - External: injector, collector or executor managed by the Integration Manager
    - Built-in: injector, collector or executor natively integrated into the core platform; no additional deployment required
- Instance statuses:
    - Managed instances: *Started* or *Stopped*.
- Only **managed instances** can be started/stopped from the UI. They are also the only ones that provide logs in the interface.
- Updating the configuration of a managed instance is possible. Changes will take effect after a short delay.
- For security reasons, API keys/tokens are encrypted when saved and never displayed in the UI afterward.
diff --git a/docs/deployment/ecosystem/integration-manager/proxy-configuration.md b/docs/deployment/ecosystem/integration-manager/proxy-configuration.md
new file mode 100644
index 0000000..e051b21
--- /dev/null
+++ b/docs/deployment/ecosystem/integration-manager/proxy-configuration.md
@@ -0,0 +1,107 @@
+# Proxy Support
+
+## Overview
+
+XTM Composer can use system proxy settings for outgoing network calls.
+ +### YAML configuration + +```yaml +openaev: + daemon: + with_proxy: true +``` + +### Environment variable configuration + +```bash +export OPENAEV__DAEMON__WITH_PROXY="true" +export HTTP_PROXY="http://proxy.example.com:8080" +export HTTPS_PROXY="http://proxy.example.com:8080" +export NO_PROXY="localhost,127.0.0.1,.example.com" +``` + +When enabled, the Integration Manager automatically applies the proxy settings to: + +- Docker API calls +- Kubernetes image pulls +- Portainer API requests + +## HTTPS Proxy Certificate Support (optional) + +Some environments use HTTPS proxies with TLS interception (for example, corporate proxies or debugging proxies like Burp). +In these cases, additional certificate settings may be required. + +### Environment variables + +```bash +export HTTPS_PROXY_CA='["/path/to/proxy-ca.pem"]' +export HTTPS_PROXY_REJECT_UNAUTHORIZED="false" +``` + + - HTTPS_PROXY_CA β€” List of CA certificates (file paths or PEM blocks) used to validate the proxy’s certificate. + - HTTPS_PROXY_REJECT_UNAUTHORIZED β€” If set to "false", certificate validation is disabled for proxy connections (default behavior). + +### Important: Certificate Scope Clarification + +Composer distinguishes two independent certificate configurations: + +| Purpose | Keys | Description | +|-----------------------------------|-------------------------------------------------------|------------------------------------------------------------------| +| OpenAEV HTTPS server certificates | app.https_cert.ca, app.https_cert.reject_unauthorized | TLS configuration for the OpenAEV web server | +| Proxy HTTPS certificates | https_proxy_ca, https_proxy_reject_unauthorized | Validation settings for HTTPS connections made through the proxy | + +These settings must not be mixed. 
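To make the `NO_PROXY` exclusion list shown above concrete, here is a simplified sketch of typical no-proxy matching (exact match, or suffix match for leading-dot entries); the precise semantics implemented by Composer may differ, e.g. for ports and IP ranges:

```shell
# Simplified NO_PROXY check: a host bypasses the proxy if it exactly matches
# an entry, or ends with an entry that starts with a dot (domain suffix).
no_proxy_match() {
  host="$1"
  for entry in $(printf '%s\n' "$2" | tr ',' ' '); do
    case "$entry" in
      .*) case "$host" in *"$entry") return 0 ;; esac ;;
      *)  if [ "$host" = "$entry" ]; then return 0; fi ;;
    esac
  done
  return 1
}

no_proxy_match "api.example.com" "localhost,127.0.0.1,.example.com" && echo "bypass proxy"
```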
+ +### Proxy Configuration in config.json + +Example of equivalent configuration in a JSON file: + +```json +{ + "http_proxy": "http://proxy.example.com:8080", + "https_proxy": "http://proxy.example.com:8080", + "no_proxy": "localhost,127.0.0.1,internal.domain", + "https_proxy_ca": ["/path/to/proxy-ca.pem"], + "https_proxy_reject_unauthorized": false +} +``` + + +## Certificate Separation + +⚠️ **Important**: Proxy certificates are separate from OpenAEV server certificates. + +| Purpose | Configuration Keys | Used For | +|---------------------------------|-------------------------------------------------------------|----------------------------------------------------| +| **Proxy certificates** | `https_proxy_ca`
`https_proxy_reject_unauthorized` | Validating HTTPS connections **through the proxy** | +| **OpenAEV server certificates** | `app:https_cert:ca`
`app:https_cert:reject_unauthorized` | TLS for the OpenAEV web server itself |

**Do not confuse these two configurations.**

---

## Troubleshooting: collector, executor or injector integration

### Automatic Injection

When proxy is enabled, XTM Composer automatically injects these environment variables into all managed collector, injector and executor containers:

- `HTTP_PROXY`
- `HTTPS_PROXY`
- `NO_PROXY`
- `HTTPS_CA_CERTIFICATES` (when `https_proxy_ca` is configured)

### Verification

To verify that proxy settings are correctly injected, call the following API endpoint:

```
GET /api/connector-instances/{instance-id}
```

Replace `{instance-id}` with your connector instance ID.

See also: [Private Registry Authentication](registry-authentification.md)
\ No newline at end of file
diff --git a/docs/deployment/ecosystem/integration-manager/quick-start.md b/docs/deployment/ecosystem/integration-manager/quick-start.md
new file mode 100644
index 0000000..dc39221
--- /dev/null
+++ b/docs/deployment/ecosystem/integration-manager/quick-start.md
@@ -0,0 +1,201 @@
+# Quick start guide
+
+This guide will help you get XTM Composer up and running quickly with OpenAEV.
+
+## Prerequisites
+
+Before starting, ensure you have:
+- XTM Composer installed (see [Installation Guide](installation.md))
+- Access to an OpenAEV instance
+- OpenAEV API token
+- RSA private key (4096-bit)
+
+## Step 1: Generate RSA private key
+
+Generate a 4096-bit RSA private key for authentication:
+
+```bash
+openssl genrsa -out private_key_4096.pem 4096
+```
+
+## Step 2: Basic configuration
+
+Create a configuration file based on your environment.
+ +### Option A: Using Configuration File + +Create `config/production.yaml`: + +```yaml +manager: + id: "my-manager-001" + name: "Production Manager" + credentials_key_filepath: "/path/to/private_key_4096.pem" + logger: + level: info + format: json + +openaev: + enable: true + url: "https://openaev.example.com" + token: "your-openaev-api-token" + daemon: + selector: kubernetes # or 'docker' or 'portainer' +``` + +### Option B: Using Environment Variables + +Set configuration through environment variables: + +```bash +export COMPOSER_ENV=production +export MANAGER__ID="my-manager-001" +export MANAGER__CREDENTIALS_KEY_FILEPATH="/path/to/private_key_4096.pem" +export OPENAEV__URL="https://openaev.example.com" +export OPENAEV__TOKEN="your-openaev-api-token" +export OPENAEV__DAEMON__SELECTOR="kubernetes" +``` + +## Step 3: Choose your orchestration platform + +### For Kubernetes + +```yaml +openaev: + daemon: + selector: kubernetes + kubernetes: + image_pull_policy: IfNotPresent +``` + +### For Docker + +```yaml +openaev: + daemon: + selector: docker + docker: + network_mode: bridge +``` + +**Note**: Docker mode requires socket access: +```bash +docker run -v /var/run/docker.sock:/var/run/docker.sock ... 
+``` + +### For Portainer + +```yaml +openaev: + daemon: + selector: portainer + portainer: + api: "https://portainer.example.com:9443" + api_key: "your-portainer-api-key" + env_id: "3" + env_type: "docker" +``` + +## Step 4: Run XTM Composer + +### Using Docker + +```bash +docker run -d \ + --name xtm-composer \ + -v $(pwd)/config:/config \ + -v $(pwd)/private_key_4096.pem:/keys/private_key.pem \ + -e COMPOSER_ENV=production \ + filigran/xtm-composer:latest +``` + +### Using Binary + +```bash +COMPOSER_ENV=production ./xtm-composer +``` + +## Step 5: Verify connection + +Check the logs to verify XTM Composer is connected to OpenAEV: + +```bash +# Docker +docker logs xtm-composer + +# Binary/Systemd +tail -f /var/log/xtm-composer/composer.log +``` + +You should see messages like: +``` +INFO Starting XTM Composer +INFO Connecting to OpenAEV at https://openaev.example.com +INFO Successfully connected to OpenAEV +INFO Manager registered with ID: my-manager-001 +``` + +## Step 6: Verify in OpenAEV + +1. Log into your OpenAEV instance +2. Navigate to **Integrations > Catalog** +3. You should not see any alert message indicating that the Integration Manager installation is required. 
+ +## Common configuration examples + +### Development environment + +```yaml +manager: + id: "dev-manager" + credentials_key_filepath: "./private_key_4096.pem" + logger: + level: debug + format: pretty + console: true + debug: + show_env_vars: true + +openaev: + enable: true + url: "http://localhost:4000" + token: "development-token" + daemon: + selector: docker +``` + +### Production with high availability + +```yaml +manager: + id: "prod-manager-ha" + execute_schedule: 5 # Check every 5 seconds + ping_alive_schedule: 30 # Ping every 30 seconds + logger: + level: warn + format: json + directory: true + console: false + +openaev: + enable: true + url: "https://openaev.prod.example.com" + token: "${OPENAEV_TOKEN}" # Use environment variable + logs_schedule: 5 + daemon: + selector: kubernetes + kubernetes: + image_pull_policy: Always +``` + +## Troubleshooting + +For common issues and their solutions, see the [Troubleshooting Guide](troubleshooting.md). + +## Next steps + +- Review the complete [Configuration Reference](configuration.md) +- Set up monitoring and alerting +- Configure collector, injector or executor specific settings +- Implement security best practices +- Join the OpenAEV community for support diff --git a/docs/deployment/ecosystem/integration-manager/registry-authentification.md b/docs/deployment/ecosystem/integration-manager/registry-authentification.md new file mode 100644 index 0000000..cc57a37 --- /dev/null +++ b/docs/deployment/ecosystem/integration-manager/registry-authentification.md @@ -0,0 +1,129 @@ +# Private Registry + +## Overview + +XTM Composer supports the deployment of containers from both public and private Docker registries. +Registry authentication is configured through the OpenAEV daemon settings and automatically applied by the Integration Manager during collector, injector or executor deployment. 
+ +This page explains how to configure: + +- Configuration for private Docker registries +- Kubernetes automatic secret creation +- Registry prefix resolution + +--- + +## Configuration + +The Integration Manager automatically uses the registry configuration defined under `openaev.daemon.registry`. +No additional configuration is required inside Composer. + +```yaml +openaev: + daemon: + registry: + server: "registry.example.com" # Default: docker.io + username: "myuser" # Required for Kubernetes auto-creation + password: "mypassword" # Required for Kubernetes auto-creation + email: "user@example.com" # Optional +``` + +### Environment Variables + +```bash +export OPENAEV__DAEMON__REGISTRY__SERVER="registry.example.com" +export OPENAEV__DAEMON__REGISTRY__USERNAME="myuser" +export OPENAEV__DAEMON__REGISTRY__PASSWORD="mypassword" +export OPENAEV__DAEMON__REGISTRY__EMAIL="user@example.com" # Optional +``` + +### Required Fields + +- **server**: Registry URL (defaults to `docker.io` if not specified) +- **username**: Registry username (required for Kubernetes secret creation) +- **password**: Registry password (required for Kubernetes secret creation) +- **email**: User email (optional) + +--- + +## Kubernetes Secret Auto-Creation + +When using the **Kubernetes orchestrator**, XTM Composer automatically creates an `imagePullSecret` at startup if credentials are configured. + +### Behavior + +**With credentials configured:** + +1. At startup, the orchestrator deletes any existing secret named `openaev-registry-auth` +2. Creates a new secret with your credentials +3. 
Automatically attaches this secret to deployed collector, injector or executor pods
+
+**Without credentials:**
+
+- No secret is created
+- You can manually create and configure your own secret if needed
+
+### Secret Details
+
+- **Name**: `openaev-registry-auth` (hardcoded)
+- **Type**: `kubernetes.io/dockerconfigjson`
+- **Lifecycle**: Recreated on each startup if credentials present
+
+### Expected Startup Logs
+
+```
+INFO orchestrator="kubernetes" secret="openaev-registry-auth" Deleting existing imagePullSecret if present
+INFO orchestrator="kubernetes" secret="openaev-registry-auth" server="registry.example.com" Creating imagePullSecret for private registry
+INFO orchestrator="kubernetes" secret="openaev-registry-auth" Successfully created imagePullSecret
+```
+
+### Required Kubernetes Permissions
+
+Your ServiceAccount must have these permissions:
+
+```yaml
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: xtm-composer-role
+rules:
+- apiGroups: [""]
+  resources: ["secrets"]
+  verbs: ["get", "list", "create", "delete"]
+```
+
+### Troubleshooting
+
+**Secret creation fails:**
+
+- Check that your ServiceAccount has the required RBAC permissions
+- Verify credentials are correct
+- Check startup logs for error messages
+
+**Pods can't pull images:**
+
+- Verify the secret exists: `kubectl get secret openaev-registry-auth`
+- Check that the secret is attached to the pod: `kubectl describe pod <pod-name>`
+- Ensure the registry server is accessible from the cluster
+
+---
+
+## Registry Prefix Resolution
+
+The Integration Manager automatically handles registry prefixes in image names:
+
+- If the image name already includes the registry, it will not prepend anything.
+- If no registry is included, the `server` from the registry configuration is automatically prefixed.
+- This prevents double-prefixing and ensures images are pulled from the correct registry.
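The resolution rule can be sketched as a small shell helper. This is a simplification for illustration only (`resolve_image` is a hypothetical name, and the real Integration Manager logic may detect registries more generally than a literal prefix match):

```shell
# resolve_image REGISTRY IMAGE - prepend REGISTRY unless IMAGE already carries it.
# Simplified sketch of the prefix-resolution rule; not the actual implementation.
resolve_image() {
  registry="$1"; image="$2"
  case "$image" in
    "$registry"/*) echo "$image" ;;            # already prefixed: left untouched
    *)             echo "$registry/$image" ;;  # no registry: prefix it
  esac
}
```

Feeding an already-resolved image back through the helper leaves it unchanged, which is the double-prefixing guarantee described above.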
+ +Example: + +```yaml +# Image without prefix +image: "openaev/collector-example:1.0.0" + +# After resolution +image: "registry.example.com/openaev/collector-example:1.0.0" +``` + +See also: [Proxy Support](proxy-configuration.md) \ No newline at end of file diff --git a/docs/deployment/ecosystem/integration-manager/troubleshooting.md b/docs/deployment/ecosystem/integration-manager/troubleshooting.md new file mode 100644 index 0000000..f29879e --- /dev/null +++ b/docs/deployment/ecosystem/integration-manager/troubleshooting.md @@ -0,0 +1,336 @@ +# Troubleshooting guide + +This guide provides solutions to common issues you may encounter while installing, configuring, and running XTM Composer. + +## Installation issues + +### Post-installation verification + +After installing XTM Composer, verify the installation is successful by checking these components based on your environment. + +#### Development environment + +##### Docker verification +```bash +# Check container status +docker ps | grep xtm-composer + +# View logs +docker logs xtm-composer + +# Test connectivity +docker exec xtm-composer curl -s http://localhost:8080/health +``` + +##### Portainer verification +1. Access Portainer dashboard +2. Navigate to Containers or Stacks +3. Check XTM Composer status (should show as "running") +4. Click on the container to view logs and statistics + +#### Production environment + +##### Kubernetes verification +```bash +# Check pod status +kubectl get pods -n xtm-composer + +# View deployment status +kubectl get deployment -n xtm-composer + +# Check logs +kubectl logs -n xtm-composer deployment/xtm-composer + +# Verify service account permissions +kubectl auth can-i --list --as=system:serviceaccount:xtm-composer:xtm-composer -n xtm-composer +``` + +#### Common verification steps + +Regardless of environment, verify: + +1. **RSA Key**: Ensure the private key is properly mounted and accessible +2. **Configuration**: Confirm configuration files are loaded correctly +3. 
**Network**: Test connectivity to OpenCTI/OpenAEV instances +4. **Logs**: Check for any error messages or warnings + +## Connection issues + +If XTM Composer cannot connect to OpenAEV: + +### 1. Verify URL and token + +Test the connection directly using curl: +```bash +curl -H "Authorization: Bearer YOUR_TOKEN" https://openaev.example.com/api/settings/version +``` + +If this fails, check: +- The URL is correct and accessible +- The token is valid +- No proxy or firewall is blocking the connection + +### 2. Check network connectivity + +Verify basic network connectivity: +```bash +# Test DNS resolution +ping openaev.example.com + +# Test port connectivity +nc -zv openaev.example.com 443 +``` + +If connectivity fails: +- Check DNS configuration +- Verify firewall rules +- Ensure the service is running on the expected port + +### 3. SSL certificate issues + +For self-signed certificates, you can temporarily set `unsecured_certificate: true` in your configuration: + +```yaml +openaev: + unsecured_certificate: true +``` + +**Warning**: This is not recommended for production environments. Instead: +- Add the certificate to your trusted store +- Use a valid certificate from a trusted CA + +## Authentication failures + +### 1. Verify RSA key + +Check that your RSA key is valid: +```bash +openssl rsa -in private_key_4096.pem -check +``` + +Expected output: +``` +RSA key ok +``` + +If the key is invalid: +- Regenerate the key: `openssl genrsa -out private_key_4096.pem 4096` +- Ensure it's in PKCS#8 PEM format +- Verify the key size is 4096 bits + +### 2. Check file permissions + +Ensure proper permissions on the private key file: +```bash +chmod 600 private_key_4096.pem +ls -la private_key_4096.pem +``` + +The file should be readable only by the owner. + +### 3. 
Verify key path + +Confirm the path in your configuration matches the actual key location: + +```yaml +manager: + credentials_key_filepath: "/path/to/private_key_4096.pem" +``` + +For Docker deployments, ensure the volume mount is correct: +```bash +docker run -v /local/path/key.pem:/keys/private_key.pem ... +``` + +## Orchestration issues + +### Kubernetes issues + +#### Verify cluster access +```bash +# Check cluster connectivity +kubectl cluster-info + +# Verify permissions +kubectl auth can-i create deployments +``` + +If access is denied: +- Check RBAC configuration +- Verify service account permissions +- Ensure the kubeconfig is properly configured + +#### Common Kubernetes errors + +**"pods is forbidden"**: The service account lacks necessary permissions +- Solution: Apply the correct RBAC configuration (see Installation Guide) + +**"no such host"**: Kubernetes API server cannot be reached +- Solution: Check the cluster endpoint configuration + +### Docker issues + +#### Check socket permissions +```bash +# Verify Docker is accessible +docker info + +# Check socket permissions +ls -la /var/run/docker.sock +``` + +If permission denied: +- Add user to docker group: `sudo usermod -aG docker $USER` +- For container access, mount the socket: `-v /var/run/docker.sock:/var/run/docker.sock` + +#### Common Docker errors + +**"Cannot connect to Docker daemon"**: Docker socket not accessible +- Solution: Ensure Docker is running and socket is properly mounted + +**"Network not found"**: Specified network doesn't exist +- Solution: Create the network or update configuration + +### Portainer issues + +#### Test API access +```bash +curl -H "X-API-Key: YOUR_KEY" https://portainer.example.com/api/endpoints +``` + +If this fails: +- Verify the API key is correct +- Check the Portainer URL and port +- Ensure the environment ID is correct + +## Runtime issues + +### Container health monitoring + +XTM Composer monitors container health and can detect various runtime issues: + 
+#### Reboot loop detection + +If a container restarts more than 3 times within 5 minutes, XTM Composer detects a reboot loop. Check: + +1. **Container Logs**: Review logs for startup errors + ```bash + # Docker + docker logs container_name + + # Kubernetes + kubectl logs pod_name -n namespace + ``` + +2. **Configuration Issues**: Verify all required environment variables are set +3. **Resource Limits**: Check if the container has sufficient resources +4. **Image Availability**: Ensure the Docker image exists and is accessible + +### Log collection issues + +XTM Composer collects logs every `logs_schedule` interval. If logs are missing: + +1. Verify the schedule configuration: + ```yaml + openaev: + logs_schedule: 10 # seconds + ``` + +2. Check container log availability: + ```bash + # Docker + docker logs --tail 50 container_name + + # Kubernetes + kubectl logs --tail 50 pod_name + ``` + +3. Ensure the orchestrator has permissions to read logs + +## Configuration issues + +### Environment variable problems + +If configuration via environment variables isn't working: + +1. **Check Variable Format**: Use double underscores for nested values + - Correct: `MANAGER__LOGGER__LEVEL=debug` + - Incorrect: `MANAGER.LOGGER.LEVEL=debug` + +2. **Verify Variable Loading**: Enable debug mode to see loaded variables + ```yaml + manager: + debug: + show_env_vars: true + ``` + +3. **Priority Issues**: Remember environment variables override file configuration + +### Configuration file not loading + +If your configuration file isn't being loaded: + +1. **Check COMPOSER_ENV**: Ensure it matches your file name + ```bash + export COMPOSER_ENV=production # Loads config/production.yaml + ``` + +2. **Verify File Location**: Configuration files should be in `/config` directory +3. 
**Check YAML Syntax**: Validate your YAML file for syntax errors
+
+## Logging and debugging
+
+### Enable debug logging
+
+For detailed troubleshooting, enable debug logging:
+
+```yaml
+manager:
+  logger:
+    level: debug
+    console: true
+    format: pretty
+```
+
+Or via environment variable:
+```bash
+export MANAGER__LOGGER__LEVEL=debug
+```
+
+### View logs
+
+Check logs to identify issues:
+
+```bash
+# Docker
+docker logs -f xtm-composer
+
+# Kubernetes
+kubectl logs -f deployment/xtm-composer -n xtm-composer
+
+# Binary/File-based
+tail -f /var/log/xtm-composer/composer.log
+```
+
+### Common log messages
+
+**"Successfully connected to OpenAEV"**: Connection established successfully
+
+**"Failed to connect to platform"**: Check connection settings and network
+
+**"Manager registered with ID"**: XTM Composer successfully registered
+
+**"Invalid authentication"**: Check API token and credentials
+
+**"Reboot loop detected"**: Container is continuously restarting
+
+## Getting help
+
+If you continue to experience issues:
+
+1. **Check the logs** with debug level enabled
+2. **Review the configuration** for any misconfigurations
+3. **Verify network connectivity** between all components
+4. **Consult the OpenAEV community** for additional support
+
+For bug reports and feature requests, visit the [GitHub repository](https://github.com/FiligranHQ/xtm-composer.git).
diff --git a/mkdocs.yml b/mkdocs.yml
index 900af51..16a2d8f 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -122,6 +122,15 @@ nav:
       - Executors: deployment/ecosystem/executors.md
       - Injectors: deployment/ecosystem/injectors.md
       - Collectors: deployment/ecosystem/collectors.md
+      - Integration Manager:
+        - Overview: deployment/ecosystem/integration-manager/overview.md
+        - Quick start: deployment/ecosystem/integration-manager/quick-start.md
+        - Installation guide: deployment/ecosystem/integration-manager/installation.md
+        - Configuration:
+          - Configuration reference: deployment/ecosystem/integration-manager/configuration.md
+          - Proxy: deployment/ecosystem/integration-manager/proxy-configuration.md
+          - Private Registry: deployment/ecosystem/integration-manager/registry-authentification.md
+        - Troubleshooting: deployment/ecosystem/integration-manager/troubleshooting.md
     - Advanced:
       - Platform managers: deployment/managers.md
 - Breaking changes: