Add comprehensive section explaining how OpenBSD relayd and httpd
provide automatic failover when the f3s Kubernetes cluster is down.
New content covers:
- Relay-level vs protocol-level routing and why protocol rules don't support failover
- Health check mechanism and automatic table failover
- Correct relayd configuration with f3s first, localhost as backup
- httpd configuration with request rewrite for all paths
- Explanation of why request rewrite is needed to handle deep links
- Benefits of the automatic failover approach
This ensures visitors see a helpful status page instead of connection
errors when the home lab cluster is offline.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
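For context, a minimal sketch of the pattern this commit describes, with placeholder addresses, ports, and paths (the blog post itself carries the real configuration): relayd health-checks the f3s table and only falls back to the local httpd status page when every f3s host is marked down, while httpd rewrites every request path so deep links still land on the status page.

```
# /etc/relayd.conf -- sketch only; addresses and names are placeholders
ext_addr = "203.0.113.10"

table <f3s>      { 10.0.0.10 }   # hypothetical cluster ingress
table <fallback> { 127.0.0.1 }   # local httpd status page

http protocol "www" {
        # Failover has to happen at the relay level (below); per-request
        # protocol rules pin traffic to a table and cannot fail over.
        return error
}

relay "www" {
        listen on $ext_addr port 80
        protocol "www"
        # Primary table with an HTTP health check; when every host in
        # <f3s> is down, relayd switches to the backup table.
        forward to <f3s> port 80 check http "/" code 200
        forward to <fallback> port 8080 check tcp
}

# /etc/httpd.conf -- status page served while the cluster is offline
server "fallback" {
        listen on 127.0.0.1 port 8080
        root "/htdocs/status"
        # Rewrite every path to the single status page so deep links
        # (old article URLs, feeds, ...) don't 404 during an outage.
        location match "^/" {
                request rewrite "/index.html"
        }
}
```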
Added screenshot reference for ZFS monitoring dashboard visualization:
- Path: ./f3s-kubernetes-with-freebsd-part-8b/grafana-zfs-dashboard.png
- Shows ZFS pool statistics and ARC cache metrics
- Placed after dashboard description, before deployment section
The screenshot will demonstrate:
- Pool capacity gauges and health status
- Dataset statistics table
- ARC cache hit rate and memory usage
- Cluster-wide ZFS statistics
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Renamed blog post file:
- From: DRAFT-f3s-kubernetes-with-freebsd-part-X-OBSERVABILITY2.gmi.tpl
- To: DRAFT-f3s-kubernetes-with-freebsd-part-8b.gmi.tpl
Updated screenshot path:
- From: ./f3s-observability-tempo/grafana-tempo-trace.png
- To: ./f3s-kubernetes-with-freebsd-part-8b/grafana-tempo-trace.png
This makes the post part 8b in the f3s-kubernetes-with-freebsd series.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Added link to grafana-tempo-trace.png showing the distributed trace waterfall
view in Grafana Tempo. The screenshot will demonstrate the Frontend → Middleware
→ Backend span chain with timing information.
Screenshot path: ./f3s-observability-tempo/grafana-tempo-trace.png
Also kept the reference to the X-RAG blog post for additional examples.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Instead of including screenshot placeholders, reference the X-RAG Observability
Hackathon blog post, which already has Grafana Tempo screenshots showing:
- Trace waterfall visualization
- Service graph visualization
This gives readers visual examples of how distributed traces appear in the
Grafana UI without duplicating screenshots.
Link: https://foo.zone/gemfeed/2025-12-24-x-rag-observability-hackathon.html
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Replaced old trace example with verified working trace that shows complete
distributed tracing across all three services.
Changes:
- Updated curl command and response with actual working output
- New trace ID: 4be1151c0bdcd5625ac7e02b98d95bd5 (old: 4e8d5a25ae6f8f8d737b46625920fbb9)
- Added kubectl commands to search and fetch traces from Tempo API
- Documented complete trace structure with 8 spans across 3 services:
* Frontend: 3 spans (GET /api/process, frontend-process, POST) - 221ms
* Middleware: 3 spans (POST /api/transform, middleware-transform, GET) - 186ms
* Backend: 2 spans (GET /api/data, backend-get-data) - 104ms
- Added detailed span annotations explaining each span's role
- Included timing information showing distributed request flow
- Documented W3C Trace Context header propagation
This trace was generated after fixing health-check noise by excluding the /health
endpoints from instrumentation, which lets API traces be exported properly and
show up in Tempo.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
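A sketch of the kind of lookup described above, assuming Tempo is exposed as a Service named tempo on its default HTTP port 3100 in a monitoring namespace (both names are assumptions, not taken from the cluster):

```sh
# Reach Tempo's HTTP API from the workstation (service/namespace are placeholders)
kubectl -n monitoring port-forward svc/tempo 3100:3100 &

# Search recent traces from the demo frontend with a TraceQL query
curl -G 'http://localhost:3100/api/search' \
     --data-urlencode 'q={ resource.service.name = "frontend" }'

# Fetch the full 8-span trace from this commit by its trace ID
curl 'http://localhost:3100/api/traces/4be1151c0bdcd5625ac7e02b98d95bd5'
```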
Updated the blog post to reflect the working datasource provisioning method
that was implemented after extensive debugging.
Changes:
- Replaced old sidecar-based approach (grafana_datasource label) with direct ConfigMap mounting
- Documented unified grafana-datasources-all.yaml containing all four datasources
- Explained direct mount to /etc/grafana/provisioning/datasources/ in persistence-values.yaml
- Noted this approach is simpler and more reliable than sidecar discovery
The old approach with ConfigMap labels did not work due to provisioning module issues.
The new approach follows the pattern from the x-rag project and successfully
provisions all four datasources (Prometheus, Alertmanager, Loki, Tempo) on Grafana startup.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
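Roughly, the shape of such a setup, assuming the Grafana Helm chart's extraConfigmapMounts mechanism and placeholder service URLs and namespace; the repository's grafana-datasources-all.yaml and persistence-values.yaml are the authoritative versions:

```yaml
# grafana-datasources-all.yaml (sketch; URLs and namespace are placeholders)
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources-all
  namespace: monitoring
data:
  datasources.yaml: |
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        url: http://prometheus:9090
        isDefault: true
      - name: Alertmanager
        type: alertmanager
        access: proxy
        url: http://alertmanager:9093
      - name: Loki
        type: loki
        access: proxy
        url: http://loki:3100
      - name: Tempo
        type: tempo
        access: proxy
        url: http://tempo:3100
```

```yaml
# persistence-values.yaml excerpt: mount the ConfigMap straight into Grafana's
# provisioning directory instead of relying on sidecar label discovery
# (nest under "grafana:" when the chart is kube-prometheus-stack)
extraConfigmapMounts:
  - name: grafana-datasources-all
    mountPath: /etc/grafana/provisioning/datasources
    configMap: grafana-datasources-all
    readOnly: true
```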
Added detailed example showing:
- Curl command to generate a distributed trace
- Full JSON response from the demo application
- Trace ID (4e8d5a25ae6f8f8d737b46625920fbb9) for viewing in Grafana
- Instructions for searching traces using TraceQL
- Placeholders for two screenshots (trace waterfall and service graph)
- Explanation of what the trace reveals about request flow
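For the TraceQL search step, queries of this shape work in Grafana's Tempo Explore view; the first lists traces emitted by the demo frontend, the second keeps only the slow ones (the service name is an assumption based on the demo application):

```
{ resource.service.name = "frontend" }

{ resource.service.name = "frontend" && duration > 200ms }
```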
This blog post draft documents the integration of Grafana Tempo into the
f3s Kubernetes cluster's observability stack. It covers:
- Deploying Grafana Tempo in monolithic mode with OTLP receivers
- Configuring Grafana Alloy to collect and forward traces to Tempo
- Creating a three-tier Python demo application (Frontend → Middleware → Backend)
with OpenTelemetry instrumentation
- Correlating traces with logs (Loki) and metrics (Prometheus) in Grafana
- Using TraceQL to query and explore distributed traces
- Service graph visualization for understanding microservice dependencies
Part of the f3s FreeBSD + Kubernetes observability series.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
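As an illustration of the instrumentation described here, a trimmed-down sketch of what the frontend tier of such a demo could look like, assuming Flask and the stock OpenTelemetry Python packages; service names, the Alloy OTLP endpoint, and URLs are placeholders rather than the post's actual code:

```python
# frontend.py -- sketch of one tier of the Frontend -> Middleware -> Backend demo
from flask import Flask
import requests

from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.flask import FlaskInstrumentor
from opentelemetry.instrumentation.requests import RequestsInstrumentor

# Export spans to Grafana Alloy's OTLP gRPC receiver (endpoint is a placeholder)
provider = TracerProvider(resource=Resource.create({"service.name": "frontend"}))
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(endpoint="alloy.monitoring.svc:4317", insecure=True)
    )
)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)

app = Flask(__name__)
# Skip /health so liveness probes don't drown real traces in noise
FlaskInstrumentor().instrument_app(app, excluded_urls="health")
# Auto-instrument outgoing requests; this injects the W3C traceparent header
# so the middleware joins the same trace
RequestsInstrumentor().instrument()


@app.route("/api/process")
def process():
    with tracer.start_as_current_span("frontend-process"):
        resp = requests.post("http://middleware:8080/api/transform", json={"demo": True})
        return resp.json()


@app.route("/health")
def health():
    return {"status": "ok"}


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

The middleware and backend tiers follow the same pattern, and the excluded_urls argument is one way to keep health-check probes from turning into spans, matching the /health fix mentioned in the newer trace commit above.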