This repository was archived by the owner on Oct 12, 2023. It is now read-only.
README.md
These [Helm](https://github.com/kubernetes/helm) charts bootstrap a production-ready [Elastic Stack](https://www.elastic.co/products) service on a Kubernetes cluster managed by [Azure Container Service (AKS)](https://docs.microsoft.com/en-us/azure/aks/intro-kubernetes) and other Azure services.
The following features are included:
* Deployment for [Elasticsearch](https://www.elastic.co/products/elasticsearch), [Kibana](https://www.elastic.co/products/kibana) and [Logstash](https://www.elastic.co/products/logstash) services
* Deployment script which retrieves the secrets and certificates from [Azure Key Vault](https://azure.microsoft.com/en-us/services/key-vault/) and injects them into the Helm charts
* TLS termination and load balancing for Kibana using [NGINX Ingress Controller](https://github.com/kubernetes/ingress-nginx)
* [Azure Active Directory](https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-authentication-scenarios) authentication for Kibana
* Integration with [Azure Redis Cache](https://azure.microsoft.com/en-us/services/cache/) acting as middleware for log events between the Log Appenders and Logstash
* TLS connection between Logstash and Redis Cache handled by [stunnel](https://www.stunnel.org/)
* Support for [Multiple Data Pipelines](https://www.elastic.co/blog/logstash-multiple-pipelines) in Logstash, allowing multiple Redis Caches as input (e.g. one Redis cluster per environment)
* Installation of a [Curator](https://github.com/elastic/curator) cron job that cleans up daily all indexes older than 30 days
* Installation of [Elasticsearch Index Templates](https://www.elastic.co/guide/en/elasticsearch/reference/5.6/indices-templates.html) as a pre-deployment step
* Installation of [Elasticsearch Watches](https://www.elastic.co/guide/en/elasticsearch/reference/5.6/watcher-api.html) as a post-deployment step. The watches can be used for alerts and notifications over a Microsoft Teams/Slack webhook or email
* Installation of the [Elasticsearch x-pack license](https://license.elastic.co/download) as a post-deployment step
<!-- TOC -->

- [Introduction](#introduction)
- [Architecture](#architecture)
- [Azure Resources](#azure-resources)
- [Azure Key Vault](#azure-key-vault)
- [Public Static IP and DNS Domain](#public-static-ip-and-dns-domain)
- [Redis Cache](#redis-cache)
- [Application for Azure Active Directory](#application-for-azure-active-directory)

A few Azure resources need to be provisioned before proceeding with the Helm charts installation.
### Azure Key Vault

All secrets and certificates used by the charts are stored in an Azure Key Vault. The deployment script fetches them and injects them into the charts.
You can create a new Key Vault with default permissions:

```console
az keyvault create --name <KEYVAULT_NAME> --resource-group <RESOURCE_GROUP>
```
It is recommended that you use two different principals to operate the Key Vault:
* A _Security Operator_ who has read/write access to secrets, keys and certificates. This principal should only be used to set up the Key Vault or to rotate the secrets.
* A _Deployment Operator_ who is only able to read secrets. This principal should be used to perform the deployment.

```console
az keyvault set-policy --upn <DEPLOYMENT_OPERATOR_USER_PRINCIPAL> --name <KEYVAULT_NAME> --resource-group <RESOURCE_GROUP> --secret-permissions get list
```
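
For the _Security Operator_, a broader policy is needed. A possible sketch, assuming the Azure CLI conventions used above (the exact permission lists are illustrative; adjust them to your security policy):

```console
az keyvault set-policy --upn <SECURITY_OPERATOR_USER_PRINCIPAL> --name <KEYVAULT_NAME> --resource-group <RESOURCE_GROUP> \
  --secret-permissions get list set delete \
  --certificate-permissions get list create import delete
```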
### Public Static IP and DNS Domain

You can allocate a public static IP in Azure. This IP will be used to expose Kibana to the world.
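
One way to allocate such an IP with the Azure CLI (a sketch; the IP name is illustrative, and the resource group must be the one holding the AKS cluster resources, typically the auto-generated `MC_*` group):

```console
az network public-ip create --name kibana-public-ip --resource-group <CLUSTER_RESOURCE_GROUP> --allocation-method Static
```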
The private key password must also be stored in a different secret:

```console
az keyvault secret set --name kibana-certificate-key-password --vault-name <KEYVAULT_NAME> --value <PASSWORD>
```
### Redis Cache

The Azure Redis Cache is used as middleware between the Log Appenders and the Logstash service. It scales well and also decouples the Log Appenders from the Elastic Stack service. You can use any Log Appender that is able to write log events into Redis.

```console
az redis create --name dev-logscache --location <LOCATION> --resource-group <RESOURCE_GROUP> --sku Standard --vm-size C1
```

You have to store one of the Redis keys in Key Vault:

```console
az keyvault secret set --vault-name <KEYVAULT_NAME> --name logstash-dev-redis-key --value=<REDIS_KEY>
```
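
If needed, the Redis access keys can first be retrieved with the Azure CLI (a sketch, reusing the cache name from above):

```console
az redis list-keys --name dev-logscache --resource-group <RESOURCE_GROUP>
```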

### Application for Azure Active Directory

An Azure Active Directory application of type _Web app/API_ is required in order to use AAD as an identity provider for Kibana. The authentication is provided by the [oauth2_proxy](https://github.com/bitly/oauth2_proxy) reverse proxy, which is deployed in the same pod as Kibana.
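
The proxy also needs a random cookie secret (stored as `kibana-oauth-cookie-secret`). One common way to generate such a value, assuming `openssl` is available:

```shell
# Generate a random 32-byte, base64-encoded value to use as the oauth2_proxy cookie secret.
openssl rand -base64 32
```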
You should also update the access list with the emails of the users from your organization who are allowed to access Kibana. The white list is in the [oauth2-proxy-config-secret.yaml](charts/kibana-logstash/templates/secrets/oauth2-proxy-config-secret.yaml) file.

### Microsoft Teams/Slack incoming Webhook

The [Elasticsearch Watcher](https://www.elastic.co/guide/en/elasticsearch/reference/master/watcher-api.html) can post notifications into a webhook. For example, you can use a Microsoft Teams webhook, which can be created following these [instructions](https://docs.microsoft.com/en-us/microsoftteams/platform/concepts/connectors).
If you want to use a [Slack Incoming Webhook](https://api.slack.com/incoming-webhooks) instead, you can adjust the configuration in the [post-install-watches-secret.yaml](charts/kibana-logstash/templates/post-install-watches-secret.yaml) file.

## Customize Logstash Configuration

### Multiple Data Pipelines

Multiple data pipelines can be defined in the [values.yaml](charts/kibana-logstash/environments/acs/values.yaml) file by creating multiple `stunnel` connections, one per Redis Cache input.
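
A sketch of what such a configuration can look like (the chart's exact `values.yaml` schema is not reproduced here, so the field names and hostnames below are illustrative; the idea is one `stunnel` connection per Redis Cache, each exposed on its own local port):

```yaml
stunnel:
  connections:
    - name: dev
      redis:
        host: dev-logscache.redis.cache.windows.net
        port: 6380            # Azure Redis TLS port
      localPort: 6378         # local port Logstash reads from
    - name: qa
      redis:
        host: qa-logscache.redis.cache.windows.net
        port: 6380
      localPort: 6379
```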

### Indexes Clean Up

The old indexes are cleaned up by the [Curator](https://github.com/elastic/curator) tool, which is executed daily by a cron job. Its configuration is available in the [curator-actions.yaml](charts/kibana-logstash/templates/config/curator-actions.yaml) file. You should adjust it according to your needs.
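
As a sketch of what such a configuration typically looks like (Curator action-file syntax; the index prefix and retention period are illustrative, matching the 30-day clean-up mentioned above):

```yaml
actions:
  1:
    action: delete_indices
    description: Delete indices older than 30 days
    options:
      ignore_empty_list: True
    filters:
      - filtertype: pattern
        kind: prefix
        value: logstash-
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d'
        unit: days
        unit_count: 30
```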

### Index Templates

The [Elasticsearch Index Templates](https://www.elastic.co/guide/en/elasticsearch/reference/master/indices-templates.html) are installed automatically by a pre-install job. They are defined in the [pre-install-templates-config.yaml](charts/kibana-logstash/templates/pre-install-templates-config.yaml) file.
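
A minimal index template for Elasticsearch 5.6 looks roughly like this (a sketch; the template name, index pattern and settings are illustrative):

```console
curl -XPUT 'http://<ELASTICSEARCH_HOST>:9200/_template/logstash' -H 'Content-Type: application/json' -d '
{
  "template": "logstash-*",
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 1
  }
}'
```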

### Index Watches

The [Elasticsearch Watches](https://www.elastic.co/guide/en/elasticsearch/reference/master/watcher-api.html) are also installed automatically by a post-install job. They can be used to trigger any alert or notification based on search queries. The watches configuration is available in [post-install-watches-secret.yaml](charts/kibana-logstash/templates/post-install-watches-secret.yaml) file. You should update this configuration according to your needs.
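
A watch that posts to a webhook has roughly this shape (a sketch against the Watcher API; the watch id, query and action values are illustrative):

```console
curl -XPUT 'http://<ELASTICSEARCH_HOST>:9200/_xpack/watcher/watch/errors_alert' -H 'Content-Type: application/json' -d '
{
  "trigger": { "schedule": { "interval": "10m" } },
  "input": {
    "search": {
      "request": {
        "indices": ["logstash-*"],
        "body": { "query": { "match": { "level": "ERROR" } } }
      }
    }
  },
  "condition": { "compare": { "ctx.payload.hits.total": { "gt": 0 } } },
  "actions": {
    "notify_teams": {
      "webhook": {
        "method": "POST",
        "url": "<WEBHOOK_URL>",
        "body": "Errors detected in the last 10 minutes"
      }
    }
  }
}'
```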
### Elasticsearch License
If you have an [Elasticsearch x-pack license](https://license.elastic.co/download), you can install it when the [elasticsearch chart](charts/elasticsearch/README.md) is deployed.
## Installation
### NGINX Ingress Controller

The `nginx-ingress` controller will act as a frontend load balancer and will provide TLS termination for the Kibana public endpoint. Get the latest version from [kubernetes/charts/stable/nginx-ingress](https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress). Before starting the installation, you need to update a few Helm values in the `values.yaml` file.

Enable Kubernetes RBAC by setting:

```console
rbac.create=true
```

And set the static public IP allocated in Azure as the load balancer frontend IP:

```console
controller.service.loadBalancerIP: "<YOUR PUBLIC IP>"
```

Now install the Helm package with the following commands:

```console
cd charts/stable/nginx-ingress
helm install -f values.yaml -n nginx-ingress .
```

After the installation is done, verify that the public IP is properly assigned to the controller:

```console
$> kubectl get svc nginx-ingress-nginx-ingress-controller
```

Kibana requires an Elasticsearch cluster which can be installed using the [elasticsearch chart](charts/elasticsearch/README.md). Create a deployment using the `deploy.sh` script available in the chart. Check the [README](charts/elasticsearch/README.md) file for more details:

```console
./deploy.sh -e acs -n elk
```
The command will install an Elasticsearch cluster in the `elk` namespace using the `acs` environment variables.
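
You can check that the cluster pods are up before proceeding (a sketch; the pod names you see depend on the chart's release name):

```console
kubectl get pods -n elk
```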
### Kibana and Logstash

You can now install the [kibana-logstash](charts/kibana-logstash) chart using the `deploy.sh` script available in the chart. Check the [README](charts/kibana-logstash/README.md) file for more details.

```console
./deploy.sh -n elk -d <DOMAIN> -v <KEYVAULT_NAME>
```

> Note: replace `DOMAIN` with your Kibana DNS domain and `KEYVAULT_NAME` with your Azure Key Vault name.

This command installs Kibana and Logstash in the `elk` namespace using the `acs` environment variables. If everything works well, you should see the following output:
You can upgrade the charts after the initial installation whenever you have a change, by simply re-running the deployment scripts with the same arguments. Helm will create a new release for you.
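
Since each upgrade creates a new Helm release revision, you can inspect the revision history afterwards (a sketch; the release name is illustrative and depends on your deployment):

```console
helm history elk-kibana-logstash
```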
## Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit [https://cla.microsoft.com](https://cla.microsoft.com).
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions

---

charts/elasticsearch/README.md

# Elasticsearch helm chart
## Introduction
This chart bootstraps an [Elasticsearch cluster](https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html) on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
It is based on [clockworksoul/helm-elasticsearch](https://github.com/clockworksoul/helm-elasticsearch) chart.
## Prerequisites

- Kubernetes 1.8+, e.g. deployed with [Azure Container Service (AKS)](https://docs.microsoft.com/en-us/azure/aks/intro-kubernetes)
## Configuration

The following table lists some of the configurable parameters of the `elasticsearch` chart.