# aranya

A Kubernetes operator for edge devices.

(This project also includes the source of `arhat`, the agent that runs on edge devices and communicates with `aranya`.)
## Goals

- Deploy and manage edge devices with ease.
- Remove the boundary between Edge and Cloud.
- Integrate every device with a container runtime into your Kubernetes cluster.
- Help everyone share Kubernetes clusters for edge devices (see docs/Multi-tenancy.md).
- Simplify Kubernetes.
EXPERIMENTAL, USE AT YOUR OWN RISK
## Features

- Pod-modeled container management on edge devices
  - Supports `Pod` creation with `Env` and `Volume`
    - `Env` sources: plain text, `Secret`, `ConfigMap`
    - `Volume` sources: plain text, `Secret`, `ConfigMap`
- Remote device management with `kubectl` (both container and host): `log`, `exec`, `attach`, `port-forward`
NOTE: For details of host management, please refer to Maintenance #Host Management.

NOTE: Kubernetes cluster networking does not work for edge devices yet; see Roadmap #Networking.
## Build

See docs/Build.md.
## Deploy

### Prerequisites

- A Kubernetes cluster with RBAC enabled
- Minimum cluster requirements: 1 master (must have) with 1 node (to deploy `aranya`)
- Deploy `aranya` to your Kubernetes cluster for evaluation with the following commands (see docs/Maintenance.md for more deployment tips):

  ```bash
  # set the namespace for edge devices, aranya will be deployed to this namespace
  $ export NS=edge

  # create the namespace
  $ kubectl create namespace ${NS}

  # create the custom resource definitions (CRDs) used by aranya
  $ kubectl apply -f https://raw.githubusercontent.com/arhat-dev/aranya/master/cicd/k8s/crds/aranya_v1alpha1_edgedevice_crd.yaml

  # create a service account for aranya (we will bind both a cluster role and a namespace role to it)
  $ kubectl -n ${NS} create serviceaccount aranya

  # create the cluster role and namespace role for aranya
  $ kubectl apply -f https://raw.githubusercontent.com/arhat-dev/aranya/master/cicd/k8s/aranya-roles.yaml

  # configure role bindings for aranya
  $ kubectl -n ${NS} create rolebinding aranya --role=aranya --serviceaccount=${NS}:aranya
  $ kubectl create clusterrolebinding aranya --clusterrole=aranya --serviceaccount=${NS}:aranya

  # deploy aranya to your cluster
  $ kubectl -n ${NS} apply -f https://raw.githubusercontent.com/arhat-dev/aranya/master/cicd/k8s/aranya-deploy.yaml
  ```
- Create an `EdgeDevice` resource object for each one of your edge devices (see sample-edge-devices.yaml for examples). `aranya` will create a node object with the same name for every `EdgeDevice` in your cluster.
- Configure the connectivity between `aranya` and your edge devices, depending on the connectivity method set in the spec (`spec.connectivity.method`):
  - `grpc`
    - A gRPC server will be created and served by `aranya` according to the `spec.connectivity.grpcConfig`; `aranya` also maintains a corresponding service object for that server.
    - If you want to access the newly created gRPC service for your edge device from outside the cluster, you first need to set up Kubernetes `Ingress` using applications such as `ingress-nginx` or `traefik`, then create an `Ingress` object (see sample-ingress-traefik.yaml for an example) for the gRPC service.
    - Configure your edge device's `arhat` to connect to the gRPC server according to your `Ingress`'s host.
  - `mqtt` (WIP, see Roadmap #Connectivity)
    - `aranya` will try to talk to your mqtt broker according to the `spec.connectivity.mqttConfig`.
    - You need to configure your edge device's `arhat` to talk to the same mqtt broker (or to a broker in the same mqtt broker cluster, depending on your use case). The config option `messageNamespace` must match for `arhat` to be able to communicate with `aranya`.
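For orientation, an `EdgeDevice` object using the `grpc` method might look roughly like the sketch below. Only `spec.connectivity.method`, `spec.connectivity.grpcConfig`, and `spec.connectivity.mqttConfig` are taken from the text above; the `apiVersion` group and all other field names are assumptions — consult the CRD (aranya_v1alpha1_edgedevice_crd.yaml) and sample-edge-devices.yaml for the authoritative schema.

```yaml
# Hypothetical sketch only -- field names besides spec.connectivity.method,
# grpcConfig and mqttConfig are assumptions; check the CRD for the real schema.
apiVersion: aranya.arhat.dev/v1alpha1   # assumed group/version (from the CRD file name)
kind: EdgeDevice
metadata:
  name: my-edge-device-01               # aranya creates a node object with this name
  namespace: edge
spec:
  connectivity:
    method: grpc                        # or: mqtt (WIP)
    grpcConfig: {}                      # gRPC server options; see sample-edge-devices.yaml
```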
- Deploy `arhat` with its configuration to your edge devices, start it, and wait for it to connect to `aranya`.
  - You can get `arhat` by downloading the latest releases or by building your own easily (see docs/Build.md).
  - For configuration references, please refer to config/arhat for configuration samples.
  - Run `/path/to/arhat -c /path/to/arhat-config.yaml`.
- `aranya` will create a virtual pod with the name of the `EdgeDevice` in the same namespace. `kubectl log/exec/attach/port-forward` to the virtual pod will act on the edge device host if allowed (see the design reasons at Maintenance #Host Management).
- Create workloads with tolerations (for the edge device taints) and use label selectors or node affinity to assign them to specific edge devices (see sample-workload.yaml for an example).
Common Node Taints:

| Taint Key | Value |
| --- | --- |
| `arhat.dev/namespace` | Name of the namespace the edge device is deployed to |

Common Node Labels:

| Label Name | Value |
| --- | --- |
| `arhat.dev/role` | `EdgeDevice` |
| `arhat.dev/name` | The edge device name |
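Putting the taint and labels above together, a workload pinned to one specific edge device could be sketched as follows. The toleration key and node labels come from the tables above; the taint `effect`, the device name, and the rest of the pod spec are assumptions for illustration — see sample-workload.yaml for a maintained example.

```yaml
# Sketch only -- see sample-workload.yaml for the project's real example.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-workload
  namespace: edge
spec:
  replicas: 1
  selector:
    matchLabels: {app: edge-workload}
  template:
    metadata:
      labels: {app: edge-workload}
    spec:
      # tolerate the taint aranya puts on edge device nodes
      tolerations:
        - key: arhat.dev/namespace
          operator: Equal
          value: edge              # the namespace the edge device is deployed to
          effect: NoSchedule       # assumed effect; check your node's actual taints
      # pin the pod to one specific edge device via the node labels
      nodeSelector:
        arhat.dev/role: EdgeDevice
        arhat.dev/name: my-edge-device-01   # hypothetical device name
      containers:
        - name: app
          image: busybox
          command: ["sleep", "infinity"]
```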
## Performance Test

Every `EdgeDevice` object needs a kubelet server set up to serve `kubectl` commands that execute into its pods, so node certificates must be provisioned for each `EdgeDevice`'s virtual node in the cluster, which can take a lot of time for large scale deployments. The performance test below was run on my own Kubernetes cluster (described in my homelab) after all the required node certificates had been provisioned.
### Test Workload

- 1000 `EdgeDevice`s using `gRPC` (generated with `./scripts/gen-deploy-script.sh 1000`)
  - each requires a `gRPC` and a `kubelet` server
  - each requires a `Node` and a `Service` object
### Results

- Deployment
  - Speed: ~5 devices/s
  - Memory usage: ~280 MB
  - CPU usage: ~3 GHz
- Deletion
  - Speed: ~6 devices/s

However, after 1000 devices and node objects were deployed and serving, my cluster shut me out because the kube-apiserver was unable to handle more requests. Still, that is a fairly good result for my 4-core virtual machine serving both etcd and the kube-apiserver.
## Roadmap

See ROADMAP.md.
## FAQ

- Why not `k3s`?
  - `k3s` is really awesome for some kinds of edge devices, but it still requires a lot of work to serve all kinds of edge devices right. One of the most significant problems is networks split by NAT or firewalls (such as a homelab), and we don't think problems like that should be resolved in the `k3s` project, as that would totally change the way `k3s` works.
- Why not use `virtual-kubelet`?
  - `virtual-kubelet` is designed for cloud providers such as `Azure`, `GCP`, and `AWS` to run containers at the network edge. However, most edge device users aren't able to, or don't want to, set up that kind of network infrastructure.
  - A `virtual-kubelet` is deployed as a pod on behalf of a container runtime. If this model were applied to edge devices, a large scale edge device cluster would claim a lot of pod resources, which would require a lot of nodes to serve; it's inefficient.
- Why `arhat` and `aranya` (why not `kubelet`)?
  - `kubelet` is heavily dependent on HTTP, which may not be a good fit for edge devices with poor network connectivity.
  - `aranya` is the watcher part of `kubelet`: much of `kubelet`'s work, such as fetching and updating cluster resources, is done by `aranya`, and `aranya` resolves everything in the cluster for `arhat` before any command is delivered to `arhat`.
  - `arhat` is the worker part of `kubelet`: it is an event-driven agent that only tends to command execution.
  - Due to the design decisions above, only the necessary messages need to be transferred between `aranya` and `arhat`, such as pod creation commands, container status updates, and node status updates, keeping the edge device's data usage for management as low as possible.
## Thanks

- `Kubernetes` - Really eased my life with my homelab.
- `virtual-kubelet` - This project was inspired by its idea of introducing a cloud agent to run containers at the network edge.
- `Buddhism` - The origin of the names `aranya` and `arhat`.
## Authors

- Jeffrey Stoke
  - I'm seeking career opportunities (associate to junior level) in Deutschland.
## License

Copyright 2019 The arhat.dev Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.