Armory Agent for Kubernetes Installation
Early Access
The information below is written for an Early Access feature. Contact us if you are interested in using this feature! Your feedback will help shape its development. Do not use Early Access features in a production instance of Armory Enterprise.
This installation guide is designed for installing the Agent in a test environment. It does not include mTLS configuration, so the Agent service and plugin do not communicate securely.
Before you begin
- You deployed Armory Enterprise using the Armory Operator and Kustomize patches.
- You have configured Clouddriver to use MySQL or PostgreSQL. See the Configure Clouddriver to use a SQL Database guide for instructions. The Agent plugin uses the SQL database to store cache data.
- You have a running Redis instance. The Agent plugin uses Redis to coordinate between Clouddriver replicas. Note: you need Redis even if you only have one Clouddriver instance.
- You have read the Armory Agent overview.
- You have an additional Kubernetes cluster to serve as your deployment target cluster.
Networking requirements
Communication from the Agent service to the Clouddriver plugin occurs over gRPC on port `9091`. Communication between the service and the plugin must use `http/2`; `http/1.1` is not compatible and causes communication issues between the Agent service and the Clouddriver plugin.
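As a quick sanity check, you can probe the port and the protocol before wiring anything up. This is a sketch: the address below is a placeholder for the Clouddriver endpoint you expose later in this guide.

```shell
# Placeholder address: replace with your Clouddriver plugin endpoint.
LB_ADDR=203.0.113.10

# Confirm something is listening on the gRPC port.
nc -zv "$LB_ADDR" 9091

# gRPC requires HTTP/2 without an HTTP/1.1 upgrade ("prior knowledge").
# A protocol-level response here (even an error status) suggests HTTP/2 is
# spoken end to end; a stalled HTTP/1.1 negotiation suggests an incompatible
# proxy or load balancer in the path.
curl -sv --http2-prior-knowledge "http://$LB_ADDR:9091/" -o /dev/null
```

This matters because many L7 load balancers silently downgrade traffic to HTTP/1.1, which breaks the Agent-to-plugin stream.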
Compatibility matrix
The Armory Agent is in early access. For more information about using this feature, contact us.
| Armory Enterprise (Spinnaker) Version | Armory Agent Plugin Version | Armory Agent Version |
|---|---|---|
| 2.24.x (1.24.x) | 0.7.23 | 0.5.24 |
| 2.25.x (1.25.x) | 0.8.22 | 0.5.24 |
| 2.26.x (1.26.x) | 0.9.14 | 0.5.24 |
Your Clouddriver service must use a MySQL- or PostgreSQL-compatible database. See the Configure Clouddriver to use a SQL Database guide for instructions.
Database compatibility:
| MySQL | PostgreSQL |
|---|---|
| 5.7; AWS Aurora | 10+ |
Installation overview
In this guide, you deploy the Agent service to your target cluster.
Installation steps:

1. Install the Clouddriver plugin. You do this in the cluster where you are running Armory Enterprise.
   - Create the plugin manifest as a Kustomize patch.
   - Create a LoadBalancer service Kustomize patch to expose the plugin on gRPC port `9091`.
   - Apply the manifests.
2. Install the Agent service in the deployment target cluster.
   - Create a namespace.
   - Create Kubernetes accounts.
   - Create a ConfigMap to configure the Agent service.
   - Deploy the Agent service.
Install the Clouddriver plugin
Create the plugin manifest
Create a new `armory-agent` directory in your Kustomize patches directory. Add the following `agent-config.yaml` manifest to your new `armory-agent` directory.

- Change the value for `name` if your Armory Enterprise service is called something other than "spinnaker".
- Update the `kubesvc-plugin` image version to the Armory Agent Plugin Version that is compatible with your Armory Enterprise version. See the compatibility matrix.
```yaml
apiVersion: spinnaker.armory.io/v1alpha2
kind: SpinnakerService
metadata:
  name: spinnaker
spec:
  spinnakerConfig:
    profiles:
      clouddriver:
        spinnaker:
          extensibility:
            pluginsRootPath: /opt/clouddriver/lib/plugins
            plugins:
              Armory.Kubesvc:
                enabled: true
        # Plugin config
        kubesvc:
          cluster: redis
          # eventsCleanupFrequencySeconds: 7200
          # localShortCircuit: false
          # runtime:
          #   defaults:
          #     onlySpinnakerManaged: true
          #   accounts:
          #     account1:
          #       customResources:
          #         - kubernetesKind: MyKind.mygroup.acme
          #           versioned: true
          #           deployPriority: "400"
  kustomize:
    clouddriver:
      deployment:
        patchesStrategicMerge:
          - |
            spec:
              template:
                spec:
                  initContainers:
                    - name: kubesvc-plugin
                      image: docker.io/armory/kubesvc-plugin:<version> # must be compatible with your Armory Enterprise version
                      volumeMounts:
                        - mountPath: /opt/plugin/target
                          name: kubesvc-plugin-vol
                  containers:
                    - name: clouddriver
                      volumeMounts:
                        - mountPath: /opt/clouddriver/lib/plugins
                          name: kubesvc-plugin-vol
                  volumes:
                    - name: kubesvc-plugin-vol
                      emptyDir: {}
```
Then include the file under the `patchesStrategicMerge` section of your `kustomization` file:

```yaml
bases:
  - agent-service
patchesStrategicMerge:
  - armory-agent/agent-config.yaml
```
Expose Clouddriver as a LoadBalancer
To expose Clouddriver as a Kubernetes LoadBalancer service, add the following manifest to your Kustomize directory. Then include the file in the `resources` section of your `kustomization` file.

Some cloud providers require additional annotations for LoadBalancer services. Consult your cloud provider's documentation.
```yaml
# This LoadBalancer service exposes the gRPC port on Clouddriver for the remote Agents to connect to.
# Look for the LoadBalancer service IP address that is exposed on 9091.
apiVersion: v1
kind: Service
metadata:
  labels:
    app: spin
    cluster: spin-clouddriver
  name: spin-agent-clouddriver
spec:
  ports:
    - name: grpc
      port: 9091
      protocol: TCP
      targetPort: 9091
  selector:
    app: spin
    cluster: spin-clouddriver
  type: LoadBalancer
```
Apply the manifests
After you have configured both manifests, apply the updates.
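One way to apply and verify the update, assuming your Armory Enterprise Kustomize tree is in the current directory and deployed to the `spinnaker` namespace (both assumptions; adjust to your layout):

```shell
# Apply the kustomization containing the SpinnakerService patch and the
# LoadBalancer service manifest.
kubectl apply -k . -n spinnaker

# Clouddriver redeploys with the kubesvc-plugin init container.
kubectl -n spinnaker rollout status deployment/spin-clouddriver

# Verify the init container was injected into the Clouddriver pod.
kubectl -n spinnaker get pods -l app=spin,cluster=spin-clouddriver \
  -o jsonpath='{.items[0].spec.initContainers[*].name}'
```

If you deploy the SpinnakerService through the Armory Operator's own pipeline rather than `kubectl apply -k`, apply the patches through that mechanism instead.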
Get the LoadBalancer IP address
Run `kubectl get svc spin-agent-clouddriver -n spinnaker` and make note of the LoadBalancer's external IP address. You need this address when you configure the Agent.
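If you prefer to script it, jsonpath can extract the address directly. The service name matches the LoadBalancer manifest above; note that some providers populate `hostname` instead of `ip`:

```shell
# IP-based providers (for example GKE, most on-prem load balancers):
LB_ADDR=$(kubectl get svc spin-agent-clouddriver -n spinnaker \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Hostname-based providers (for example AWS ELB) populate hostname instead:
[ -z "$LB_ADDR" ] && LB_ADDR=$(kubectl get svc spin-agent-clouddriver -n spinnaker \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

echo "$LB_ADDR"
```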
Confirm Clouddriver is listening
Use `netcat` to confirm that Clouddriver is listening on port 9091 by executing `nc -zv [LB address] 9091`. Perform this check from a node in your Armory Enterprise cluster and from a node in your target cluster.
Install the Agent service
Create a namespace
In the deployment target cluster, execute `kubectl create ns spin-agent` to create a namespace for the Agent service.
Configure permissions
Create a `ClusterRole`, `ServiceAccount`, and `ClusterRoleBinding` for the Agent by applying the following manifest in your `spin-agent` namespace:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: spin-cluster-role
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - pods/log
      - ingresses/status
      - endpoints
    verbs:
      - get
      - list
      - update
      - patch
      - delete
  - apiGroups:
      - ""
    resources:
      - services
      - services/finalizers
      - events
      - configmaps
      - secrets
      - namespaces
      - ingresses
      - jobs
    verbs:
      - create
      - get
      - list
      - update
      - watch
      - patch
      - delete
  - apiGroups:
      - batch
    resources:
      - jobs
    verbs:
      - create
      - get
      - list
      - update
      - watch
      - patch
  - apiGroups:
      - apps
      - extensions
    resources:
      - deployments
      - deployments/finalizers
      - deployments/scale
      - daemonsets
      - replicasets
      - replicasets/finalizers
      - replicasets/scale
      - statefulsets
      - statefulsets/finalizers
      - statefulsets/scale
    verbs:
      - create
      - get
      - list
      - update
      - watch
      - patch
      - delete
  - apiGroups:
      - monitoring.coreos.com
    resources:
      - servicemonitors
    verbs:
      - get
      - create
  - apiGroups:
      - spinnaker.armory.io
    resources:
      - '*'
      - spinnakerservices
    verbs:
      - create
      - get
      - list
      - update
      - watch
      - patch
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - '*'
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: spin-agent
  name: spin-sa
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: spin-cluster-role-binding
subjects:
  - kind: ServiceAccount
    name: spin-sa
    namespace: spin-agent
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: spin-cluster-role
```
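After applying the manifest, you can spot-check the binding with `kubectl auth can-i`, impersonating the new service account (a sketch; adjust the namespace if yours differs):

```shell
# Each command should print "yes" for verbs granted by spin-cluster-role.
kubectl auth can-i list pods \
  --as=system:serviceaccount:spin-agent:spin-sa --all-namespaces

kubectl auth can-i create deployments.apps \
  --as=system:serviceaccount:spin-agent:spin-sa --all-namespaces
```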
Configure the Agent service
Configure the Agent service using a ConfigMap. Define `kubesvc.yaml` in the `data` section:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubesvc-config
  namespace: spin-agent
data:
  kubesvc.yaml: |
    server:
      port: 8082
```
Clouddriver plugin LoadBalancer
Replace [LoadBalancer Exposed Address] with the IP address you obtained in the Get the LoadBalancer IP address section.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubesvc-config
  namespace: spin-agent
data:
  kubesvc.yaml: |
    clouddriver:
      grpc: [LoadBalancer Exposed Address]:9091
      insecure: true
```
Kubernetes account
Add your Kubernetes account configuration for your cluster:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubesvc-config
  namespace: spin-agent
data:
  kubesvc.yaml: |
    clouddriver:
      grpc: [LoadBalancer Exposed Address]:9091
      insecure: true
    kubernetes:
      accounts:
        - name:
          kubeconfigFile:
          insecure:
          context:
          oAuthScopes:
          serviceAccount: true
          serviceAccountName: spin-sa
          namespaces: []
          omitNamespaces: []
          onlyNamespacedResources:
          kinds: []
          omitKinds: []
          customResourceDefinitions: [{kind:}]
          metrics:
          permissions: []
          maxResumableResourceAgeMs:
          onlySpinnakerManaged:
          noProxy:
```
See the Agent options for field explanations.
Apply the manifest to your `spin-agent` namespace.
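One way to apply and verify the ConfigMap (the filename below is an assumption):

```shell
# Assumes you saved the manifest as kubesvc-configmap.yaml.
kubectl apply -f kubesvc-configmap.yaml -n spin-agent

# Print the rendered kubesvc.yaml back out to check for indentation mistakes;
# the dot in the data key must be escaped in jsonpath.
kubectl -n spin-agent get configmap kubesvc-config \
  -o jsonpath='{.data.kubesvc\.yaml}'
```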
Deploy the Agent service
Apply the following Agent deployment manifest in your `spin-agent` namespace:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: spin
    app.kubernetes.io/name: kubesvc
    app.kubernetes.io/part-of: spinnaker
    cluster: spin-kubesvc
  name: spin-kubesvc
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spin
      cluster: spin-kubesvc
  template:
    metadata:
      labels:
        app: spin
        app.kubernetes.io/name: kubesvc
        app.kubernetes.io/part-of: spinnaker
        cluster: spin-kubesvc
    spec:
      serviceAccount: spin-sa
      containers:
        - image: armory/kubesvc:<version> # must be compatible with your Armory Enterprise version
          imagePullPolicy: IfNotPresent
          name: kubesvc
          ports:
            - name: health
              containerPort: 8082
              protocol: TCP
            - name: metrics
              containerPort: 8008
              protocol: TCP
          readinessProbe:
            httpGet:
              port: health
              path: /health
            failureThreshold: 3
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /opt/spinnaker/config
              name: volume-kubesvc-config
            # - mountPath: /kubeconfigfiles
            #   name: volume-kubesvc-kubeconfigs
      restartPolicy: Always
      volumes:
        - name: volume-kubesvc-config
          configMap:
            name: kubesvc-config
        # - name: volume-kubesvc-kubeconfigs
        #   secret:
        #     defaultMode: 420
        #     secretName: kubeconfigs-secret
```
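Once applied, you can watch the rollout and hit the readiness endpoint through a port-forward. This is a sketch; the manifest filename is an assumption, and the `/health` path and port come from the `readinessProbe` in the Deployment above:

```shell
kubectl apply -f kubesvc-deployment.yaml -n spin-agent  # filename is an assumption
kubectl -n spin-agent rollout status deployment/spin-kubesvc

# Probe the health port the readinessProbe uses.
kubectl -n spin-agent port-forward deploy/spin-kubesvc 8082:8082 &
PF_PID=$!
sleep 2
curl -s http://localhost:8082/health
kill "$PF_PID"
```

If the pod never becomes ready, check its logs for gRPC connection errors against the Clouddriver LoadBalancer address you configured.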
Confirm success
Create a pipeline with a `Deploy manifest` stage. You should see your target cluster available in the `Accounts` list. Deploy a static manifest.
What’s next
- See the Troubleshoot the Armory Agent Service and Plugin page if you run into issues.
- Learn how to Monitor the Armory Agent with Prometheus. Agent CPU usage is low, but memory usage depends on the size of the cluster the Agent is monitoring. The gRPC buffer consumes about 4 MB of memory.
- Configure Mutual TLS Authentication
- Read about Kubernetes Permissions for the Armory Agent
Last modified August 4, 2021: (b7e6c74)