f3s: Kubernetes with FreeBSD - Part X: GitOps with ArgoCD
DRAFT - Not yet published
This is part X of the f3s series for my self-hosting demands in a home lab. f3s? The "f" stands for FreeBSD, and the "3s" stands for k3s, the Kubernetes distribution I use on FreeBSD-based physical machines.
2024-11-17 f3s: Kubernetes with FreeBSD - Part 1: Setting the stage
2024-12-03 f3s: Kubernetes with FreeBSD - Part 2: Hardware and base installation
2025-02-01 f3s: Kubernetes with FreeBSD - Part 3: Protecting from power cuts
2025-04-05 f3s: Kubernetes with FreeBSD - Part 4: Rocky Linux Bhyve VMs
2025-05-11 f3s: Kubernetes with FreeBSD - Part 5: WireGuard mesh network
2025-07-14 f3s: Kubernetes with FreeBSD - Part 6: Storage
2025-10-02 f3s: Kubernetes with FreeBSD - Part 7: k3s and first pod deployments
2025-12-07 f3s: Kubernetes with FreeBSD - Part 8: Observability

Table of Contents
Introduction
In the previous posts, I deployed applications to the k3s cluster using Helm charts and Justfiles—running just install or just upgrade to imperatively push changes to the cluster. While this approach works, it has several drawbacks:
- No single source of truth: The cluster state depends on which commands were run and when
- Manual synchronization: Every change requires manually running commands
- Drift detection is hard: No easy way to know if cluster state matches the desired configuration
- Rollback complexity: Rolling back changes means re-running old Helm commands
- No audit trail: Hard to track who changed what and when
This blog post covers the migration from imperative Helm deployments to declarative GitOps using ArgoCD. After this migration, the Git repository becomes the single source of truth, and ArgoCD automatically ensures the cluster matches what's defined in Git.
What is GitOps?
GitOps is an operational framework that applies DevOps best practices—like version control, collaboration, and CI/CD—to infrastructure automation. The core idea is simple: the entire desired state of your infrastructure is stored in Git, and automated processes ensure the actual state matches the desired state.
Key principles:
- Declarative: The system's desired state is described declaratively (YAML manifests, Helm values)
- Versioned and immutable: All changes are committed to Git, providing a complete history
- Pulled automatically: An agent in the cluster continuously pulls the desired state from Git
- Continuously reconciled: The agent ensures the actual state matches the desired state, automatically correcting drift
For Kubernetes, this means:
1. All manifests, Helm charts, and configuration live in a Git repository
2. A tool (ArgoCD in our case) watches the repository
3. When changes are pushed to Git, ArgoCD automatically applies them to the cluster
4. If someone manually changes resources in the cluster, ArgoCD detects the drift and can automatically revert it
What is ArgoCD?
ArgoCD is a declarative, GitOps continuous delivery tool for Kubernetes. It's implemented as a Kubernetes controller that continuously monitors running applications and compares the current, live state against the desired target state defined in Git.
ArgoCD Documentation
Key features:
- Automated deployment: Monitors Git repositories and automatically syncs changes to the cluster
- Application definitions: Defines applications as CRDs (Custom Resource Definitions)
- Health assessment: Understands Kubernetes resources and can determine if an application is healthy
- Web UI and CLI: Provides both a web interface and command-line tool for managing applications
- RBAC: Role-based access control for team collaboration
- SSO integration: Can integrate with existing authentication systems
- Multi-cluster support: Can manage applications across multiple Kubernetes clusters
- Sync waves and hooks: Control the order of resource deployment and run jobs at specific lifecycle points
Why ArgoCD for f3s?
For a home lab cluster, ArgoCD provides several benefits:
Disaster recovery: If the entire cluster is lost, I can rebuild it by:
1. Bootstrapping a new k3s cluster
2. Installing ArgoCD
3. Pointing ArgoCD at the Git repository
4. All applications automatically deploy to the desired state
Experimentation safety: I can test changes in a separate Git branch without affecting the running cluster. Once validated, merge to master and ArgoCD applies the changes.
Drift detection: If I manually change something in the cluster (for debugging), ArgoCD shows the difference and can automatically revert it.
Declarative configuration: The Git repository documents the entire cluster configuration. No need to remember which just commands to run or in which order.
Automatic sync: Push to Git, and changes deploy automatically. No need to SSH to a workstation and run Helm commands.
Deploying ArgoCD
ArgoCD itself runs as a set of Kubernetes resources in the cluster. The official installation method uses kubectl apply, which is fitting—ArgoCD manages everything else via GitOps, but ArgoCD itself needs a bootstrap.
Prerequisites
Create the cicd namespace where ArgoCD will run:
$ kubectl create namespace cicd
namespace/cicd created
Installing ArgoCD
The ArgoCD installation lives in the configuration repository:
codeberg.org/snonux/conf/f3s/argocd
I deployed ArgoCD using Helm instead of the raw manifests. This provides easier upgrades and customization. The installation is managed via a Justfile:
$ cd conf/f3s/argocd
$ just install
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm install argocd argo/argo-cd \
--namespace cicd \
--version 7.7.12 \
-f values.yaml
NAME: argocd
LAST DEPLOYED: ...
NAMESPACE: cicd
STATUS: deployed
The values.yaml file configures several important aspects:
Persistent storage for the repo-server: ArgoCD clones Git repositories to cache them locally. I configured a persistent volume so the cache survives pod restarts:
repoServer:
  volumes:
    - name: repo-cache
      persistentVolumeClaim:
        claimName: argocd-repo-cache-pvc
  volumeMounts:
    - name: repo-cache
      mountPath: /tmp
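The chart does not create that claim itself. A minimal sketch of a matching PVC (the size and the implicit default storage class are assumptions for illustration):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: argocd-repo-cache-pvc
  namespace: cicd
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi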
Admin password preservation: By default, the admin password is auto-generated and stored in a secret. To ensure it persists across Helm upgrades:
configs:
  secret:
    createSecret: false
I manually created the secret before installation:
$ ARGOCD_ADMIN_PASSWORD=$(pwgen -s 32 1)
$ BCRYPT_HASH=$(htpasswd -nbBC 10 "" "$ARGOCD_ADMIN_PASSWORD" | tr -d ':\n' | sed 's/$2y/$2a/')
$ kubectl create secret generic argocd-secret \
--from-literal=admin.password="$BCRYPT_HASH" \
-n cicd
$ echo "ArgoCD admin password: $ARGOCD_ADMIN_PASSWORD"
Server configuration: Enabled insecure mode since TLS is handled by the OpenBSD edge relays:
server:
  insecure: true
Accessing ArgoCD
After deployment, ArgoCD runs several pods in the cicd namespace:
$ kubectl get pods -n cicd
NAME                                                READY   STATUS    RESTARTS   AGE
argocd-application-controller-0                     1/1     Running   0          45d
argocd-applicationset-controller-66d6b9b8f4-vhm9k   1/1     Running   0          45d
argocd-dex-server-7fb556b7dd-xjr2l                  1/1     Running   0          45d
argocd-notifications-controller-6d8dd4c5f5-b8vwl    1/1     Running   0          45d
argocd-redis-77b8d6c6d4-mz9hg                       1/1     Running   0          45d
argocd-repo-server-5f98f77b97-8xtcq                 1/1     Running   0          45d
argocd-server-6b9c4b4f8d-kxw7p                      1/1     Running   0          45d
I created an ingress to expose the ArgoCD web UI:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: cicd
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  ingressClassName: traefik
  rules:
    - host: argocd.f3s.foo.zone
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  number: 80
Following the same pattern as other services, the OpenBSD edge relays terminate TLS and forward traffic through WireGuard to the cluster. ArgoCD is now accessible at https://argocd.f3s.foo.zone.
The ArgoCD CLI can also be used for operations:
$ argocd login argocd.f3s.foo.zone
$ argocd app list
ArgoCD Application Structure
ArgoCD uses a CRD called Application to define what should be deployed. Each application specifies:
- Source: Where the manifests live (Git repo, Helm chart repository, or both)
- Destination: Which cluster and namespace to deploy to
- Sync policy: How changes are applied (automated sync, pruning, self-healing)
Here's a simple example for the miniflux application:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: miniflux
  namespace: cicd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://codeberg.org/snonux/conf.git
    targetRevision: master
    path: f3s/miniflux/helm-chart
  destination:
    server: https://kubernetes.default.svc
    namespace: services
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=false
    retry:
      limit: 3
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 1m
Key fields:
- source.path: Points to the Helm chart directory in Git
- destination.namespace: Where to deploy the application
- syncPolicy.automated.prune: Delete resources that are removed from Git
- syncPolicy.automated.selfHeal: Automatically revert manual changes in the cluster
- finalizers: Ensures ArgoCD deletes all resources when the Application is deleted
Repository Organization
I reorganized the configuration repository to support GitOps:
/home/paul/git/conf/f3s/
├── argocd-apps/                 # ArgoCD Application manifests (organized by namespace)
│   ├── README.md                # Documentation of structure
│   ├── monitoring/              # Observability stack (6 apps)
│   │   ├── alloy.yaml
│   │   ├── grafana-ingress.yaml
│   │   ├── loki.yaml
│   │   ├── prometheus.yaml
│   │   ├── pushgateway.yaml
│   │   └── tempo.yaml
│   ├── services/                # User-facing applications (13 apps)
│   │   ├── anki-sync-server.yaml
│   │   ├── audiobookshelf.yaml
│   │   ├── filebrowser.yaml
│   │   ├── immich.yaml
│   │   ├── keybr.yaml
│   │   ├── kobo-sync-server.yaml
│   │   ├── miniflux.yaml
│   │   ├── opodsync.yaml
│   │   ├── radicale.yaml
│   │   ├── syncthing.yaml
│   │   ├── tracing-demo.yaml
│   │   ├── wallabag.yaml
│   │   └── webdav.yaml
│   ├── infra/                   # Infrastructure services (1 app)
│   │   └── registry.yaml
│   └── test/                    # Test/example applications (1 app)
│       └── example-apache-volume-claim.yaml
├── miniflux/                    # Application directories (unchanged)
│   ├── helm-chart/
│   │   ├── Chart.yaml
│   │   ├── values.yaml
│   │   └── templates/
│   └── Justfile                 # Updated for ArgoCD
├── prometheus/
│   ├── manifests/               # NEW: Additional manifests
│   │   ├── persistent-volumes.yaml
│   │   ├── grafana-restart-hook.yaml
│   │   ├── freebsd-recording-rules.yaml
│   │   └── ...
│   └── Justfile                 # Updated for ArgoCD
└── ...
The application directories (miniflux, prometheus, etc.) remained mostly unchanged—ArgoCD references the same Helm charts. The main additions:
1. argocd-apps/: Application manifests organized by Kubernetes namespace for better clarity
- monitoring/: 6 observability applications
- services/: 13 user-facing applications
- infra/: 1 infrastructure application (registry)
- test/: 1 test application
2. */manifests/: Additional Kubernetes manifests for complex apps (like Prometheus)
3. Justfiles updated: Changed from helm install/upgrade to argocd app sync
This organization makes it easy to apply all applications in a specific namespace or manage them independently.
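For example, everything destined for the monitoring namespace can be applied in one go:
$ kubectl apply -f argocd-apps/monitoring/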
Migration Phases
Phase 1: Simple services
These apps have straightforward Helm charts with no complex dependencies. The pattern established:
1. Create the Application manifest in argocd-apps/
2. Apply it with kubectl apply -f argocd-apps/<app>.yaml
3. Verify the sync status: argocd app get <app>
4. Update the Justfile to use ArgoCD commands
Phase 2: Infrastructure apps (3 apps)
- registry (Docker image registry)
- pushgateway (Prometheus metrics ingestion)
- immich (photo management with complex dependencies)
Phase 3: Monitoring stack (4 apps)
- tempo (distributed tracing)
- loki (log aggregation)
- alloy (log collection)
- prometheus (metrics and monitoring)
Phase 4: Monitoring addons (1 app)
- grafana-ingress (separate ingress for Grafana)
Example Migration: Miniflux
Let me walk through the migration of miniflux as a concrete example.
Before: Imperative Helm deployment
Original Justfile:
NAMESPACE := "services"
APP_NAME := "miniflux"

install:
    kubectl apply -f helm-chart/persistent-volumes.yaml
    helm install {{APP_NAME}} ./helm-chart --namespace {{NAMESPACE}}

upgrade:
    helm upgrade {{APP_NAME}} ./helm-chart --namespace {{NAMESPACE}}

uninstall:
    helm uninstall {{APP_NAME}} --namespace {{NAMESPACE}}
    kubectl delete -f helm-chart/persistent-volumes.yaml

status:
    @kubectl get all -n {{NAMESPACE}} -l app={{APP_NAME}}
Workflow:
1. Make changes to helm-chart/
2. Run just upgrade
3. Helm pushes changes to cluster
After: Declarative GitOps with ArgoCD
Created argocd-apps/services/miniflux.yaml:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: miniflux
  namespace: cicd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://codeberg.org/snonux/conf.git
    targetRevision: master
    path: f3s/miniflux/helm-chart
  destination:
    server: https://kubernetes.default.svc
    namespace: services
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=false
    retry:
      limit: 3
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 1m
Updated Justfile:
NAMESPACE := "services"
APP_NAME := "miniflux"

status:
    @echo "=== Pods ==="
    @kubectl get pods -n {{NAMESPACE}} -l app={{APP_NAME}}
    @echo ""
    @echo "=== Services ==="
    @kubectl get svc -n {{NAMESPACE}} -l app={{APP_NAME}}
    @echo ""
    @echo "=== ArgoCD Status ==="
    @kubectl get application {{APP_NAME}} -n cicd -o jsonpath='Sync: {.status.sync.status}, Health: {.status.health.status}' 2>/dev/null && echo ""

sync:
    @echo "Triggering ArgoCD sync..."
    @kubectl annotate application {{APP_NAME}} -n cicd argocd.argoproj.io/refresh=normal --overwrite
    @sleep 2
    @kubectl get application {{APP_NAME}} -n cicd -o jsonpath='Sync: {.status.sync.status}, Health: {.status.health.status}' && echo ""

argocd-status:
    argocd app get {{APP_NAME}} --core

logs:
    kubectl logs -n {{NAMESPACE}} -l app={{APP_NAME}} --tail=100 -f
New workflow:
1. Make changes to helm-chart/
2. Commit and push to Git
3. ArgoCD automatically detects and syncs changes
4. (Optional) Run just sync to force immediate sync
Migration procedure
1. Backup current state:
$ helm get values miniflux -n services > /tmp/miniflux-backup-values.yaml
$ kubectl get all,ingress -n services -o yaml > /tmp/miniflux-backup.yaml
2. Create Application manifest:
$ kubectl apply -f argocd-apps/services/miniflux.yaml
application.argoproj.io/miniflux created
3. Verify ArgoCD adopted the resources:
$ argocd app get miniflux
Name: miniflux
Project: default
Server: https://kubernetes.default.svc
Namespace: services
URL: https://argocd.f3s.foo.zone/applications/miniflux
Repo: https://codeberg.org/snonux/conf.git
Target: master
Path: f3s/miniflux/helm-chart
SyncWindow: Sync Allowed
Sync Policy: Automated (Prune)
Sync Status: Synced to master (4e3c216)
Health Status: Healthy
4. Monitor for issues:
$ kubectl get pods -n services -l app=miniflux -w
NAME                                 READY   STATUS    RESTARTS   AGE
miniflux-postgres-556444cb8d-xvv2p   1/1     Running   0          54d
5. Test the application:
$ curl -I https://flux.f3s.foo.zone
HTTP/2 200
6. Update Justfile and commit changes
Total time: 10 minutes. Zero downtime.
## Complex Migration: Prometheus with Multi-Source
The Prometheus migration was more complex because it combines:
* Upstream Helm chart (kube-prometheus-stack)
* Custom manifests (PersistentVolumes, recording rules, dashboards)
* Sync hooks (PostSync job to restart Grafana)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: prometheus
  namespace: cicd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  sources:
    # Source 1: Upstream Helm chart from prometheus-community
    - repoURL: https://prometheus-community.github.io/helm-charts
      chart: kube-prometheus-stack
      targetRevision: 55.5.0
      helm:
        releaseName: prometheus
        valuesObject:
          # Full Prometheus configuration embedded here
          kubeEtcd:
            enabled: true
            endpoints:
              - 192.168.2.120
              - 192.168.2.121
              - 192.168.2.122
          # ... (hundreds of lines of configuration)
    # Source 2: Additional manifests from Git repository
    - repoURL: https://codeberg.org/snonux/conf.git
      targetRevision: master
      path: f3s/prometheus/manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: monitoring
  syncPolicy:
    automated:
      prune: false # Manual pruning for safety on complex stack
      selfHeal: true
    syncOptions:
      - CreateNamespace=false
      - ServerSideApply=true
    retry:
      limit: 3
      backoff:
        duration: 10s
        factor: 2
        maxDuration: 3m
The `prometheus/manifests/` directory contains:
f3s/prometheus/manifests/
├── persistent-volumes.yaml # Sync wave 0
├── additional-scrape-configs-secret.yaml # Sync wave 1
├── grafana-datasources-configmap.yaml # Sync wave 1
├── freebsd-recording-rules.yaml # Sync wave 3
├── openbsd-recording-rules.yaml # Sync wave 3
├── zfs-recording-rules.yaml # Sync wave 3
├── epimetheus-dashboard.yaml # Sync wave 4
├── zfs-dashboards.yaml # Sync wave 4
├── grafana-restart-hook.yaml # Sync wave 10 (PostSync)
└── grafana-restart-rbac.yaml # Sync wave 0
### Sync Waves and Hooks
ArgoCD allows controlling the order of resource deployment using sync waves (the `argocd.argoproj.io/sync-wave` annotation), as illustrated in the snippet after this list:
* Wave 0: Infrastructure (PersistentVolumes, RBAC)
* Wave 1: Configuration (Secrets, ConfigMaps)
* Wave 3: Recording rules (PrometheusRule CRDs)
* Wave 4: Dashboards (ConfigMaps with `grafana_dashboard: '1'` label)
* Wave 10: PostSync hooks (Jobs that run after everything else)
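To pin a resource to a wave, the manifest simply carries the annotation. A minimal sketch (the PV name here is hypothetical; the real PVs live in prometheus/manifests/persistent-volumes.yaml):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-server-pv   # hypothetical name for illustration
  annotations:
    argocd.argoproj.io/sync-wave: "0"
spec:
  # capacity, access modes and storage backend as defined in the storage setup (Part 6)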
The Grafana restart hook ensures Grafana reloads datasources after they're updated:
apiVersion: batch/v1
kind: Job
metadata:
  name: grafana-restart-hook
  namespace: monitoring
  annotations:
    argocd.argoproj.io/hook: PostSync
    argocd.argoproj.io/hook-delete-policy: BeforeHookCreation
    argocd.argoproj.io/sync-wave: "10"
spec:
  template:
    spec:
      serviceAccountName: grafana-restart-sa
      restartPolicy: OnFailure
      containers:
        - name: kubectl
          image: bitnami/kubectl:latest
          command:
            - /bin/sh
            - -c
            - |
              kubectl wait --for=condition=available --timeout=300s deployment/prometheus-grafana -n monitoring || true
              kubectl delete pod -n monitoring -l app.kubernetes.io/name=grafana --ignore-not-found=true
  backoffLimit: 2
This replaces the manual step in the old Justfile that required running `kubectl delete pod` after every upgrade.
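The hook's ServiceAccount needs just enough RBAC to wait for the Grafana deployment and delete its pods. A sketch of what grafana-restart-rbac.yaml could contain (the exact rules are assumptions):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: grafana-restart-sa
  namespace: monitoring
  annotations:
    argocd.argoproj.io/sync-wave: "0"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: grafana-restart-role
  namespace: monitoring
  annotations:
    argocd.argoproj.io/sync-wave: "0"
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: grafana-restart-rolebinding
  namespace: monitoring
  annotations:
    argocd.argoproj.io/sync-wave: "0"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: grafana-restart-role
subjects:
  - kind: ServiceAccount
    name: grafana-restart-sa
    namespace: monitoring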
## Migration Results
After migrating all 21 applications to ArgoCD:
$ argocd app list
NAME                       CLUSTER                          NAMESPACE   PROJECT  STATUS  HEALTH   SYNCPOLICY
alloy                      https://kubernetes.default.svc   monitoring  default  Synced  Healthy  Auto-Prune
anki-sync-server           https://kubernetes.default.svc   services    default  Synced  Healthy  Auto-Prune
audiobookshelf             https://kubernetes.default.svc   services    default  Synced  Healthy  Auto-Prune
example-apache             https://kubernetes.default.svc   test        default  Synced  Healthy  Auto-Prune
example-apache-volume-...  https://kubernetes.default.svc   test        default  Synced  Healthy  Auto-Prune
filebrowser                https://kubernetes.default.svc   services    default  Synced  Healthy  Auto-Prune
freshrss                   https://kubernetes.default.svc   services    default  Synced  Healthy  Auto-Prune
grafana-ingress            https://kubernetes.default.svc   monitoring  default  Synced  Healthy  Auto-Prune
immich                     https://kubernetes.default.svc   services    default  Synced  Healthy  Auto-Prune
keybr                      https://kubernetes.default.svc   services    default  Synced  Healthy  Auto-Prune
kobo-sync-server           https://kubernetes.default.svc   services    default  Synced  Healthy  Auto-Prune
loki                       https://kubernetes.default.svc   monitoring  default  Synced  Healthy  Auto-Prune
miniflux                   https://kubernetes.default.svc   services    default  Synced  Healthy  Auto-Prune
opodsync                   https://kubernetes.default.svc   services    default  Synced  Healthy  Auto-Prune
prometheus                 https://kubernetes.default.svc   monitoring  default  Synced  Healthy  Auto
pushgateway                https://kubernetes.default.svc   monitoring  default  Synced  Healthy  Auto-Prune
radicale                   https://kubernetes.default.svc   services    default  Synced  Healthy  Auto-Prune
registry                   https://kubernetes.default.svc   infra       default  Synced  Healthy  Auto-Prune
syncthing                  https://kubernetes.default.svc   services    default  Synced  Healthy  Auto-Prune
tempo                      https://kubernetes.default.svc   monitoring  default  Synced  Healthy  Auto-Prune
wallabag                   https://kubernetes.default.svc   services    default  Synced  Healthy  Auto-Prune
webdav                     https://kubernetes.default.svc   services    default  Synced  Healthy  Auto-Prune
All 21 applications: Synced and Healthy.
ArgoCD Web UI:
=> ./f3s-kubernetes-with-freebsd-part-X/argocd-apps-list.png ArgoCD Applications List
=> ./f3s-kubernetes-with-freebsd-part-X/argocd-app-tree.png ArgoCD Application Resource Tree
## Benefits Realized
### 1. Single Source of Truth
The Git repository at `https://codeberg.org/snonux/conf` now contains the complete cluster configuration. Anyone can clone it and see exactly what's deployed:
$ git clone https://codeberg.org/snonux/conf.git
$ cd conf/f3s
$ ls argocd-apps/
infra  monitoring  README.md  services  test
### 2. Automatic Synchronization
Push to Git, and changes deploy automatically:
$ cd conf/f3s/miniflux/helm-chart
$ vim values.yaml # Change replica count from 1 to 2
$ git add values.yaml
$ git commit -m "Scale miniflux to 2 replicas"
$ git push
ArgoCD detects the change within 3 minutes and syncs automatically
No need to SSH to a workstation, pull the repo, and run `just upgrade`.
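The roughly three-minute delay is ArgoCD's default repository polling interval. If a shorter feedback loop is wanted, it can be tuned via the timeout.reconciliation setting in argocd-cm, which the Helm chart exposes under configs.cm; a sketch of what that would look like in values.yaml:
configs:
  cm:
    timeout.reconciliation: 60s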
### 3. Drift Detection and Self-Healing
If someone manually changes a resource in the cluster, ArgoCD detects it:
$ kubectl scale deployment miniflux-server -n services --replicas=3
deployment.apps/miniflux-server scaled
ArgoCD detects drift within 3 minutes
$ argocd app get miniflux
...
Sync Status: OutOfSync from master (4e3c216)
With `selfHeal: true`, ArgoCD automatically reverts the change back to 2 replicas (the value in Git).
### 4. Easy Rollbacks
To rollback a change:
$ git revert HEAD
$ git push
ArgoCD automatically rolls back to the previous state
Or rollback to a specific commit:
$ argocd app rollback miniflux <revision-id>
### 5. Disaster Recovery
If the entire cluster is destroyed, recovery is straightforward:
1. Bootstrap a new k3s cluster
2. Create namespaces
3. Install ArgoCD
4. Apply all Application manifests:
$ kubectl apply -f argocd-apps/ -R
5. ArgoCD deploys all 21 applications to their desired state
Total recovery time: ~30 minutes (mostly waiting for pods to pull images and start).
### 6. Documentation by Default
The Application manifests serve as documentation:
* Which Helm chart version is deployed? → Check `targetRevision`
* What custom values are configured? → Check `valuesObject`
* Which namespace does this deploy to? → Check `destination.namespace`
* Is auto-sync enabled? → Check `syncPolicy.automated`
No more guessing or checking `helm list` output.
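A couple of illustrative queries against the Application objects (values match the manifests shown above):
$ kubectl get application prometheus -n cicd -o jsonpath='{.spec.sources[0].targetRevision}'
55.5.0
$ kubectl get application miniflux -n cicd -o jsonpath='{.spec.destination.namespace}'
services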
### 7. Safe Experimentation
Create a feature branch, make changes, and preview them:
$ git checkout -b test-prometheus-upgrade
$ vim argocd-apps/prometheus.yaml # Bump chart version
$ git commit -am "Test Prometheus 56.0.0"
$ git push origin test-prometheus-upgrade
Temporarily point ArgoCD at the feature branch
$ kubectl patch application prometheus -n cicd \
--type merge \
-p '{"spec":{"source":{"targetRevision":"test-prometheus-upgrade"}}}'
Verify changes in ArgoCD Web UI
If good: merge to master
If bad: revert the patch
## Challenges and Solutions
### Challenge 1: Helm Release Adoption
When creating an Application for an existing Helm release, ArgoCD needs to "adopt" the resources. This failed initially with errors like:
The Helm operation failed with an error: release miniflux failed, and has been uninstalled due to atomic being set: timed out waiting for the condition
Solution: For existing Helm releases, I first ensured the Application manifest matched the current Helm values exactly. ArgoCD then recognized the resources were already in the desired state and adopted them without re-deploying.
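One way to sanity-check this up front is to render the chart from Git and diff it against the live cluster state; a rough sketch, run from the app's directory (minor diffs from Helm-managed labels are possible):
$ cd conf/f3s/miniflux
$ helm template miniflux ./helm-chart --namespace services | kubectl diff -f -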
### Challenge 2: Persistent Volumes Not Tracked by Helm
PersistentVolumes are cluster-scoped resources, not namespace-scoped. Many of my Helm charts created PVs using `kubectl apply -f persistent-volumes.yaml` outside of Helm.
Solution: For simple apps, I moved the PV definitions into the Helm chart templates. For complex apps (like Prometheus), I used the multi-source pattern with PVs in the `manifests/` directory with sync wave 0.
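For the simple apps this just means the PV becomes a regular file in the chart's templates/ directory. A rough sketch (the NFS backend, server address, and paths are placeholder assumptions; the real backend is whatever Part 6 set up):
# f3s/miniflux/helm-chart/templates/persistent-volumes.yaml (sketch)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: miniflux-data-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.2.10   # hypothetical NFS server
    path: /data/miniflux   # hypothetical export path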
### Challenge 3: Secrets Management
ArgoCD stores Application manifests in Git, but secrets shouldn't be committed in plaintext.
Solution (current): Secrets are created manually with `kubectl create secret` and referenced by the Helm charts. The secrets themselves aren't managed by ArgoCD.
Future enhancement: Migrate to External Secrets Operator (ESO) to manage secrets declaratively while storing the actual secrets in a separate backend (Kubernetes secrets in a separate namespace, or eventually Vault).
### Challenge 4: Grafana Not Reloading Datasources
After updating the Grafana datasources ConfigMap, Grafana wouldn't detect the changes until pods were manually deleted.
Solution: Created a PostSync hook that automatically restarts Grafana pods after every ArgoCD sync. This runs as a Kubernetes Job in sync wave 10, ensuring it executes after all other resources are deployed.
### Challenge 5: Prometheus With Multiple Sources
Prometheus needed both the upstream Helm chart and custom manifests (recording rules, dashboards, PVs).
Solution: Used ArgoCD's multi-source feature to combine:
* Helm chart from `prometheus-community.github.io/helm-charts`
* Additional manifests from `codeberg.org/snonux/conf.git` at path `f3s/prometheus/manifests`
This keeps the upstream chart cleanly separated from custom configuration.
### Challenge 6: Sync Ordering for Prometheus
Prometheus resources have dependencies:
* PVs before PVCs
* Secrets before Prometheus Operator
* PrometheusRule CRDs before Prometheus Operator can process them
* Grafana must be running before the restart hook executes
Solution: Added sync wave annotations to all resources in `prometheus/manifests/`:
* Wave 0: PVs, RBAC
* Wave 1: Secrets, ConfigMaps
* Wave 3: PrometheusRule CRDs (recording rules)
* Wave 4: Dashboard ConfigMaps
* Wave 10: PostSync hook (Grafana restart)
ArgoCD deploys resources in wave order, ensuring correct sequencing.
## Justfile Evolution
The Justfiles evolved from deployment tools to utility scripts:
Before (Helm deployment):
install:
    helm install miniflux ./helm-chart -n services

upgrade:
    helm upgrade miniflux ./helm-chart -n services

uninstall:
    helm uninstall miniflux -n services
After (ArgoCD utilities):
status:
    @kubectl get pods -n services -l app=miniflux
    @kubectl get application miniflux -n cicd -o jsonpath='Sync: {.status.sync.status}, Health: {.status.health.status}'

sync:
    @kubectl annotate application miniflux -n cicd argocd.argoproj.io/refresh=normal --overwrite

argocd-status:
    argocd app get miniflux --core

logs:
    kubectl logs -n services -l app=miniflux --tail=100 -f
The Justfiles now provide:
* `status`: Quick health check
* `sync`: Force immediate ArgoCD sync (instead of waiting 3 minutes)
* `argocd-status`: Detailed ArgoCD application status
* `logs`: Tail application logs
* Application-specific utilities (e.g., `port-forward`, `restart`)
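As an illustration, such a utility recipe might look like this (the service name and port are assumptions):
port-forward:
    kubectl port-forward -n {{NAMESPACE}} svc/{{APP_NAME}} 8080:8080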
## Lessons Learned
1. Incremental migration is safer than big-bang: Migrating one app at a time allowed me to validate the pattern and fix issues before they affected all apps.
2. Start with simple apps: The first migration (simple services) established the basic pattern. Complex apps (Prometheus) came later after the pattern was proven.
3. Sync waves are essential for complex apps: Without sync waves, resources deployed in random order and caused failures. Proper ordering eliminated all deployment issues.
4. Multi-source is powerful: Combining upstream Helm charts with custom manifests keeps configuration clean and maintainable.
5. PostSync hooks replace manual steps: The Grafana restart hook eliminated a manual step that was easy to forget.
6. Documentation in Git is better than tribal knowledge: The Application manifests document exactly what's deployed and how. No more "let me check my shell history to remember how I deployed this."
7. Self-healing prevents configuration drift: Multiple times I've manually tweaked something for debugging, forgotten about it, and ArgoCD automatically reverted it back to the desired state.
8. ArgoCD Web UI is invaluable: Seeing the resource tree, sync status, and health status at a glance is much better than running multiple `kubectl` commands.
## Future Improvements
### 1. External Secrets Operator
Currently, secrets are manually created with `kubectl create secret`. This works but isn't declarative. Plan:
* Deploy External Secrets Operator (ESO)
* Store actual secrets in a Kubernetes Secret in a separate `secrets` namespace
* Create ExternalSecret CRDs that reference the backend secrets
* ArgoCD manages the ExternalSecret CRDs, ESO creates the actual Secrets
This makes secrets declarative while keeping them out of Git.
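As a sketch of where this is heading (the store name, keys, and properties are assumptions based on the ESO documentation):
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: miniflux-db-credentials
  namespace: services
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: f3s-secret-store        # hypothetical ClusterSecretStore
    kind: ClusterSecretStore
  target:
    name: miniflux-db-credentials # Secret that ESO creates in the services namespace
  data:
    - secretKey: POSTGRES_PASSWORD
      remoteRef:
        key: miniflux             # hypothetical key in the backend store
        property: postgres-password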
### 2. ApplicationSet for Similar Apps
Many apps have nearly identical Application manifests (miniflux, freshrss, wallabag, etc.). ArgoCD ApplicationSets can generate multiple Applications from a template:
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: simple-services
  namespace: cicd
spec:
  generators:
    - list:
        elements:
          - app: miniflux
          - app: freshrss
          - app: wallabag
  template:
    metadata:
      name: '{{app}}'
    spec:
      project: default
      source:
        repoURL: https://codeberg.org/snonux/conf.git
        targetRevision: master
        path: 'f3s/{{app}}/helm-chart'
      destination:
        server: https://kubernetes.default.svc
        namespace: services
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
One ApplicationSet could replace 10+ individual Application manifests.
### 3. App-of-Apps Pattern
Currently, all Application manifests are applied manually with `kubectl apply -f argocd-apps/ -R`. An alternative is the "app-of-apps" pattern:
Create a root Application that deploys all other Applications. With the namespace-organized structure, this could be done per-namespace or for the entire cluster:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: cicd
spec:
  source:
    repoURL: https://codeberg.org/snonux/conf.git
    targetRevision: master
    path: f3s/argocd-apps
    directory:
      recurse: true # Recursively find all manifests in subdirectories
  destination:
    server: https://kubernetes.default.svc
    namespace: cicd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
Or create separate root apps per namespace:
root-monitoring.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root-monitoring
  namespace: cicd
spec:
  source:
    repoURL: https://codeberg.org/snonux/conf.git
    targetRevision: master
    path: f3s/argocd-apps/monitoring
  destination:
    server: https://kubernetes.default.svc
    namespace: cicd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
Then disaster recovery becomes:
$ kubectl apply -f root-app.yaml
Root app deploys all 21 applications automatically
Or apply by namespace
$ kubectl apply -f root-monitoring.yaml
$ kubectl apply -f root-services.yaml
$ kubectl apply -f root-infra.yaml
### 4. ArgoCD Image Updater
For applications with custom Docker images (like the registry, tracing-demo), ArgoCD Image Updater can automatically update the image tag in Git when a new image is pushed:
metadata:
  annotations:
    argocd-image-updater.argoproj.io/image-list: |
      app=registry.f3s.foo.zone/miniflux:~^v
    argocd-image-updater.argoproj.io/write-back-method: git