-rw-r--r--  gemfeed/2026-02-22-taskwarrior-autonomous-agent-loop.gmi      7
-rw-r--r--  gemfeed/2026-02-22-taskwarrior-autonomous-agent-loop.gmi.tpl  6
-rw-r--r--  gemfeed/atom.xml                                              9
3 files changed, 18 insertions(+), 4 deletions(-)
diff --git a/gemfeed/2026-02-22-taskwarrior-autonomous-agent-loop.gmi b/gemfeed/2026-02-22-taskwarrior-autonomous-agent-loop.gmi
index 2fc6bbc1..d4c77e4b 100644
--- a/gemfeed/2026-02-22-taskwarrior-autonomous-agent-loop.gmi
+++ b/gemfeed/2026-02-22-taskwarrior-autonomous-agent-loop.gmi
@@ -26,6 +26,7 @@ I let Ampcode autonomously complete 48 Taskwarrior tasks on my eBPF project in a
* ⇢ ⇢ ⇢ 4-annotate-update-task.md — progress tracking
* ⇢ ⇢ ⇢ 5-review-overview-tasks.md — picking the next task
* ⇢ ⇢ The reflection and review loop
+* ⇢ ⇢ Code review: human spot-check at the end
* ⇢ ⇢ Measurable results
* ⇢ ⇢ A real bug found by the review loop
* ⇢ ⇢ Gotchas and lessons learned
@@ -414,6 +415,10 @@ Having instructed in the skill for the agent to reflect on its own implementatio
The sub-agent reviews consistently caught things the main agent missed — tests that only asserted on mocks, missing edge cases, and even a real bug. Without the dual review loop, the agent tends to write tests that look correct but do not actually exercise real behavior.
+## Code review: human spot-check at the end
+
+On top of the agent's self-reflection and the two sub-agent reviews per task, I did a final review of the output myself. I did not read through all 5k lines one by one. Instead I looked for repeating patterns across the test files and cherry-picked a few representative scenarios — for example one integration test from the open/close family, one from the rename/link family, and one negative test — and went through those manually in detail. That was enough to satisfy me that the workflow had produced consistent, runnable tests and that the whole pipeline (task → implement → self-review → sub-agent review → fix → second review → commit) was working as intended.
+
## Measurable results
Here is what one day of autonomous Ampcode work produced:
@@ -422,7 +427,7 @@ Here is what one day of autonomous Ampcode work produced:
* 48 Taskwarrior tasks completed
* 47 git commits
* 87 files changed
-* 12,012 lines added, 1,543 removed
+* ~5,000 lines added, ~500 removed
* 18 integration test files
* 15 workload scenario files (one per syscall category)
* 93 test scenarios total (happy-path and negative)
diff --git a/gemfeed/2026-02-22-taskwarrior-autonomous-agent-loop.gmi.tpl b/gemfeed/2026-02-22-taskwarrior-autonomous-agent-loop.gmi.tpl
index a067d40d..c62e92e1 100644
--- a/gemfeed/2026-02-22-taskwarrior-autonomous-agent-loop.gmi.tpl
+++ b/gemfeed/2026-02-22-taskwarrior-autonomous-agent-loop.gmi.tpl
@@ -390,6 +390,10 @@ Having instructed in the skill for the agent to reflect on its own implementatio
The sub-agent reviews consistently caught things the main agent missed — tests that only asserted on mocks, missing edge cases, and even a real bug. Without the dual review loop, the agent tends to write tests that look correct but do not actually exercise real behavior.
+## Code review: human spot-check at the end
+
+On top of the agent's self-reflection and the two sub-agent reviews per task, I did a final review of the output myself. I did not read through all 5k lines one by one. Instead I looked for repeating patterns across the test files and cherry-picked a few representative scenarios — for example one integration test from the open/close family, one from the rename/link family, and one negative test — and went through those manually in detail. That was enough to satisfy me that the workflow had produced consistent, runnable tests and that the whole pipeline (task → implement → self-review → sub-agent review → fix → second review → commit) was working as intended.
+
## Measurable results
Here is what one day of autonomous Ampcode work produced:
@@ -398,7 +402,7 @@ Here is what one day of autonomous Ampcode work produced:
* 48 Taskwarrior tasks completed
* 47 git commits
* 87 files changed
-* 12,012 lines added, 1,543 removed
+* ~5,000 lines added, ~500 removed
* 18 integration test files
* 15 workload scenario files (one per syscall category)
* 93 test scenarios total (happy-path and negative)
diff --git a/gemfeed/atom.xml b/gemfeed/atom.xml
index 4b89d2cf..1b028d67 100644
--- a/gemfeed/atom.xml
+++ b/gemfeed/atom.xml
@@ -1,6 +1,6 @@
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
- <updated>2026-02-21T23:24:01+02:00</updated>
+ <updated>2026-02-21T23:38:45+02:00</updated>
<title>foo.zone feed</title>
<subtitle>To be in the .zone!</subtitle>
<link href="gemini://foo.zone/gemfeed/atom.xml" rel="self" />
@@ -47,6 +47,7 @@
<li>⇢ ⇢ <a href='#4-annotate-update-taskmd--progress-tracking'>4-annotate-update-task.md — progress tracking</a></li>
<li>⇢ ⇢ <a href='#5-review-overview-tasksmd--picking-the-next-task'>5-review-overview-tasks.md — picking the next task</a></li>
<li>⇢ <a href='#the-reflection-and-review-loop'>The reflection and review loop</a></li>
+<li>⇢ <a href='#code-review-human-spot-check-at-the-end'>Code review: human spot-check at the end</a></li>
<li>⇢ <a href='#measurable-results'>Measurable results</a></li>
<li>⇢ <a href='#a-real-bug-found-by-the-review-loop'>A real bug found by the review loop</a></li>
<li>⇢ <a href='#gotchas-and-lessons-learned'>Gotchas and lessons learned</a></li>
@@ -437,6 +438,10 @@ tasks are in progress, show the next actionable (READY) task:
<br />
<span>The sub-agent reviews consistently caught things the main agent missed — tests that only asserted on mocks, missing edge cases, and even a real bug. Without the dual review loop, the agent tends to write tests that look correct but do not actually exercise real behavior.</span><br />
<br />
+<h2 style='display: inline' id='code-review-human-spot-check-at-the-end'>Code review: human spot-check at the end</h2><br />
+<br />
+<span>On top of the agent&#39;s self-reflection and the two sub-agent reviews per task, I did a final review of the output myself. I did not read through all 5k lines one by one. Instead I looked for repeating patterns across the test files and cherry-picked a few representative scenarios — for example one integration test from the open/close family, one from the rename/link family, and one negative test — and went through those manually in detail. That was enough to satisfy me that the workflow had produced consistent, runnable tests and that the whole pipeline (task → implement → self-review → sub-agent review → fix → second review → commit) was working as intended.</span><br />
+<br />
<h2 style='display: inline' id='measurable-results'>Measurable results</h2><br />
<br />
<span>Here is what one day of autonomous Ampcode work produced:</span><br />
@@ -446,7 +451,7 @@ tasks are in progress, show the next actionable (READY) task:
<li>48 Taskwarrior tasks completed</li>
<li>47 git commits</li>
<li>87 files changed</li>
-<li>12,012 lines added, 1,543 removed</li>
+<li>~5,000 lines added, ~500 removed</li>
<li>18 integration test files</li>
<li>15 workload scenario files (one per syscall category)</li>
<li>93 test scenarios total (happy-path and negative)</li>