Chapter 3
Section 21 of 175

The Feedback Loop Pattern

How Claude Code Works

Introduction

The feedback loop is what separates agents from one-shot generation. Claude Code doesn't just generate code: it executes it, observes the results, and iterates. This pattern is essential for reliable, self-correcting behavior.

The Key Insight: LLMs make mistakes. The feedback loop lets them recover. An agent that can recognize and fix its errors is far more valuable than one that just hopes to get it right the first time.

What Is the Feedback Loop?

The feedback loop is the cycle of: Act → Observe → Adjust → Repeat

📝feedback_loop.txt
Traditional LLM:
User Request → Generate Response → Done

Agentic Feedback Loop:
User Request → Generate Action → Execute → Observe Result →
    └─ Success? → Move to next action
    └─ Failure? → Analyze error → Adjust approach → Try again

Components of the Feedback Loop

Component     Purpose              Example
Action        What the agent does  Write file, run command
Observation   What happened        Error message, test output
Analysis      What it means        Type error on line 42
Adjustment    How to fix it        Change variable type
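These four components can be modeled as lightweight dataclasses. The sketch below is illustrative, not Claude Code's actual internals; the class and field names are assumptions used throughout this chapter's examples:

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str   # what the agent does, e.g. "run the type checker"
    command: str       # the concrete command to execute

@dataclass
class Observation:
    success: bool      # did the action succeed?
    output: str = ""   # stdout / tool output
    error: str = ""    # stderr / error message

@dataclass
class Analysis:
    reason: str          # what the observation means
    suggested_fix: str   # how to adjust the next action

# One turn of the loop: act, observe, analyze, adjust
action = Action("run type checker", "tsc --noEmit")
obs = Observation(success=False, error="Type error on line 42")
analysis = Analysis(reason=obs.error, suggested_fix="Change variable type")
```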

Implementation Patterns

Pattern 1: Observe-Then-Act

🐍observe_then_act.py
class FeedbackLoop:
    """Core feedback loop implementation."""

    def execute_with_feedback(
        self,
        action: Action,
        max_retries: int = 3,
    ) -> ActionResult:
        """Execute action with feedback-driven retries."""

        for attempt in range(max_retries):
            # Execute the action
            result = self.execute(action)

            # Observe the outcome
            observation = self.observe(result)

            # Check if successful
            if observation.success:
                return result

            # Analyze the failure
            analysis = self.analyze_failure(action, observation)

            # Adjust the action based on analysis
            action = self.adjust_action(action, analysis)

            # Log for debugging
            self.log(f"Attempt {attempt + 1} failed: {analysis.reason}")
            self.log(f"Adjusting: {analysis.suggested_fix}")

        # All retries exhausted
        return self.handle_persistent_failure(action, observation)

    def observe(self, result: ActionResult) -> Observation:
        """Analyze the result of an action."""
        return Observation(
            success=result.returncode == 0,
            output=result.stdout,
            error=result.stderr,
            metrics={
                "duration": result.duration,
                "output_length": len(result.stdout),
            },
        )

    def analyze_failure(
        self,
        action: Action,
        observation: Observation,
    ) -> FailureAnalysis:
        """Use LLM to analyze why action failed."""
        prompt = f"""
Action attempted: {action.description}
Command: {action.command}
Error output: {observation.error}

Analyze:
1. What went wrong?
2. Is this recoverable?
3. What should be tried instead?
"""
        response = self.llm.generate(prompt)
        return self.parse_analysis(response.text)

Pattern 2: Continuous Verification

🐍continuous_verification.py
from pathlib import Path

class VerifiedExecution:
    """Execute actions with continuous verification."""

    def execute_verified(self, action: Action) -> VerifiedResult:
        """Execute and verify in a single flow."""

        # Execute the action
        result = self.execute(action)

        # Immediately verify
        verification = self.verify(action, result)

        if not verification.passed:
            # Generate fix based on verification failure
            fix = self.generate_fix(action, verification)

            # Apply fix and re-verify
            fix_result = self.execute(fix)
            verification = self.verify(fix, fix_result)

        return VerifiedResult(
            action=action,
            result=result,
            verification=verification,
        )

    def verify(self, action: Action, result: ActionResult) -> Verification:
        """Verify the action succeeded as expected."""

        checks = []

        # Check 1: No errors
        if result.returncode != 0:
            checks.append(VerificationCheck(
                name="no_errors",
                passed=False,
                reason=f"Exit code: {result.returncode}",
            ))

        # Check 2: For file writes, verify file exists
        if action.type == "write_file":
            file_exists = Path(action.params["path"]).exists()
            checks.append(VerificationCheck(
                name="file_exists",
                passed=file_exists,
                reason="File was created" if file_exists else "File not found",
            ))

        # Check 3: For code changes, run type checker
        if action.affects_code:
            typecheck = self.run_typecheck()
            checks.append(VerificationCheck(
                name="types_valid",
                passed=typecheck.success,
                reason=typecheck.output[:500],
            ))

        return Verification(
            passed=all(c.passed for c in checks),
            checks=checks,
        )

Error Recovery

Different types of errors require different recovery strategies:

Error Type     Recovery Strategy                      Example
Syntax error   Parse error, fix specific issue        Missing semicolon
Type error     Adjust types based on message          Type 'string' not assignable to 'number'
Runtime error  Add error handling or fix logic        Cannot read property of undefined
Test failure   Analyze test output, fix code          Expected 5 but got 4
Permission     Request permission or use alternative  EACCES permission denied

🐍error_recovery.py
import re

class ErrorRecovery:
    """Intelligent error recovery strategies."""

    def recover(self, error: str, context: dict) -> RecoveryAction:
        """Determine recovery strategy based on error type."""

        error_type = self.classify_error(error)

        strategies = {
            "syntax": self.recover_syntax,
            "type": self.recover_type,
            "runtime": self.recover_runtime,
            "test_failure": self.recover_test,
            "permission": self.recover_permission,
            "not_found": self.recover_not_found,
        }

        strategy = strategies.get(error_type, self.recover_generic)
        return strategy(error, context)

    def recover_type(self, error: str, context: dict) -> RecoveryAction:
        """Recover from TypeScript type errors."""

        # Parse the type error
        match = re.search(
            r"Type '(.+)' is not assignable to type '(.+)'",
            error
        )

        if match:
            actual_type, expected_type = match.groups()
            return RecoveryAction(
                strategy="fix_type",
                suggestion=f"Change type from {actual_type} to {expected_type}",
                auto_fix=self.can_auto_fix_type(actual_type, expected_type),
            )

        return RecoveryAction(
            strategy="manual_review",
            suggestion="Type error requires manual analysis",
            auto_fix=False,
        )

    def recover_test(self, error: str, context: dict) -> RecoveryAction:
        """Recover from test failures."""

        # Analyze test output
        analysis = self.llm.generate(f"""
Test failure output:
{error}

Context:
{context}

Analyze:
1. What test failed?
2. What was expected vs actual?
3. Is this a test bug or code bug?
4. Suggested fix?
""")

        return RecoveryAction(
            strategy="fix_from_analysis",
            suggestion=analysis,
            auto_fix=True,  # Can attempt auto-fix based on analysis
        )
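The `classify_error` helper used above is not shown. A minimal sketch might dispatch on well-known substrings in the error text; the patterns below are illustrative assumptions, not an exhaustive classifier:

```python
import re

def classify_error(error: str) -> str:
    """Map raw error text to a coarse category for strategy dispatch."""
    patterns = {
        "syntax": r"SyntaxError|Unexpected token",
        "type": r"is not assignable to type|TypeError",
        "test_failure": r"FAILED|AssertionError|Expected .* but got",
        "permission": r"EACCES|Permission denied",
        "not_found": r"ENOENT|No such file|ModuleNotFoundError",
    }
    # Dict insertion order is preserved, so earlier patterns win ties
    for category, pattern in patterns.items():
        if re.search(pattern, error):
            return category
    return "runtime"  # fall back to the generic runtime category

classify_error("EACCES: permission denied")  # → "permission"
```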

Self-Verification

Claude Code verifies its own work using multiple strategies:

🐍self_verification.py
class SelfVerification:
    """Verify agent's own work."""

    def verify_code_change(self, change: CodeChange) -> VerificationResult:
        """Comprehensive verification of a code change."""

        results = []

        # 1. Type checking
        typecheck = self.run_typecheck()
        results.append(("types", typecheck))

        # 2. Linting
        lint = self.run_linter()
        results.append(("lint", lint))

        # 3. Related tests
        tests = self.run_related_tests(change.files)
        results.append(("tests", tests))

        # 4. Build check
        build = self.run_build()
        results.append(("build", build))

        # 5. LLM review (catch logical issues)
        review = self.llm_review(change)
        results.append(("review", review))

        return VerificationResult(
            passed=all(r[1].success for r in results),
            checks=results,
        )

    def llm_review(self, change: CodeChange) -> ReviewResult:
        """Have LLM review the change for logical issues."""
        prompt = f"""
Review this code change:

Files changed: {change.files}

Diff:
{change.diff}

Check for:
1. Logic errors
2. Missing edge cases
3. Security issues
4. Performance problems
5. Consistency with existing code

Return: APPROVED or CONCERNS with explanation.
"""
        response = self.llm.generate(prompt)
        return self.parse_review(response.text)

Trust but Verify

Self-verification isn't perfect. An LLM review can miss issues that automated checks catch, and vice versa. Use multiple verification methods for important changes.
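One simple way to apply this advice is to require agreement across independent checks. The sketch below uses a hypothetical helper (not a Claude Code API) that fails a change if any single method fails and reports which ones:

```python
def combine_checks(checks: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return overall pass/fail plus the names of the failed checks."""
    failed = [name for name, passed in checks.items() if not passed]
    return (len(failed) == 0, failed)

# Mix automated checks with an LLM review; one failure blocks the change
passed, failed = combine_checks({
    "typecheck": True,    # automated: type checker
    "tests": True,        # automated: test suite
    "llm_review": False,  # semantic: LLM review flagged a concern
})
# passed is False; failed == ["llm_review"]
```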

Learning from Feedback

The feedback loop also enables learning within a session:

🐍learning_from_feedback.py
class AdaptiveAgent:
    """Agent that learns from feedback within session."""

    def __init__(self):
        self.error_patterns = []
        self.successful_strategies = []

    def learn_from_result(self, action: Action, result: ActionResult):
        """Extract learnings from action result."""

        if result.success:
            # Remember what worked
            self.successful_strategies.append({
                "context": action.context,
                "approach": action.approach,
                "outcome": "success",
            })
        else:
            # Remember what didn't work
            self.error_patterns.append({
                "context": action.context,
                "approach": action.approach,
                "error": result.error,
                "recovery": result.recovery_action,
            })

    def apply_learnings(self, new_action: Action) -> Action:
        """Apply session learnings to new action."""

        # Check if similar context had errors before
        for pattern in self.error_patterns:
            if self.is_similar_context(pattern["context"], new_action.context):
                # Avoid the approach that failed
                if pattern["approach"] == new_action.approach:
                    new_action = self.suggest_alternative(
                        new_action,
                        avoid=pattern["approach"],
                        try_instead=pattern.get("recovery"),
                    )

        # Apply successful strategies
        for strategy in self.successful_strategies:
            if self.is_similar_context(strategy["context"], new_action.context):
                # Prefer approaches that worked
                new_action = self.incorporate_strategy(new_action, strategy)

        return new_action
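The `is_similar_context` helper above is left abstract. One crude but serviceable stand-in (an assumption for illustration, not the real implementation) is token-overlap similarity:

```python
def is_similar_context(a: str, b: str, threshold: float = 0.5) -> bool:
    """Crude similarity check: Jaccard overlap of whitespace-separated tokens."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return False
    # Ratio of shared tokens to total distinct tokens
    return len(ta & tb) / len(ta | tb) >= threshold

# "run unit tests in src" vs "run unit tests in lib" share 4 of 6 tokens
is_similar_context("run unit tests in src", "run unit tests in lib")  # → True
```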

Session Learning

Claude Code learns within a session but doesn't persist learnings across sessions. For persistent learning, use CLAUDE.md to document discovered patterns.
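For example, a gotcha discovered mid-session can be written into CLAUDE.md so future sessions benefit; the entries below are purely illustrative:

```markdown
## Known gotchas

- `npm test` fails unless `npm run build` has been run first; always build before testing.
- The date helpers in `src/utils/date.ts` expect UTC timestamps; local times cause off-by-one-day errors.
```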

Summary

The feedback loop pattern:

  1. Cycle: Act → Observe → Analyze → Adjust → Repeat
  2. Verification: Multiple checks (types, tests, build, review)
  3. Recovery: Strategy selection based on error type
  4. Learning: Adapt within session based on results
  5. Reliability: Self-correcting behavior improves success rate

Next: Let's explore the Claude Agent SDK, which provides a programmatic way to build agents with these patterns.