## The Problem
You've been debugging the same issue for 3 hours. You've tried 12 AI suggestions. Each one seems plausible, but nothing works. You're cycling through random changes, hoping something sticks. You've lost track of what you've tried. The AI keeps suggesting things you've already ruled out.
You're thrashing. You need structure.
Complex bugs require systematic thinking, not a random walk through the solution space. You need a decision-making framework that works with AI assistance.
## The Core Insight
OODA loops (Observe-Orient-Decide-Act) turn chaotic debugging into deliberate hypothesis testing.
OODA comes from military strategy - fighter pilots use it to outmaneuver opponents. The pattern: gather information, build situational awareness, choose a course of action, execute it, then loop back.
With AI debugging, you're not fighting an opponent - you're fighting your own confusion. OODA helps you build a mental model, test it, and update it based on results.
## The Walkthrough
### Phase 1: Observe (Gather Raw Data)
Before asking AI for solutions, collect evidence:
- Error messages: Full stack traces, not summaries
- Reproduction steps: Exact sequence that triggers the bug
- Expected vs actual: What should happen, what does happen
- Environment: OS, versions, configuration
- Recent changes: What was modified before the bug appeared
Create an observation log:
```markdown
# Bug Observation Log

## Error
TypeError: Cannot read property 'id' of undefined
    at processPayment (checkout.js:42)
    at handleSubmit (checkout.js:18)

## Reproduction
1. User with store credit > cart total
2. Click "Checkout"
3. Error thrown before payment processing

## Expected
Should deduct store credit, process remaining via card

## Actual
Crashes on undefined object access

## Environment
- Node 18.2
- Express 4.18
- Last working: commit a3f2b1c (2 days ago)
- Breaking change: commit d8e9f0a (added store credit feature)
```
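If you debug often, a log like the one above can be assembled programmatically. A minimal sketch - the `buildObservationLog` function and its field names are illustrative assumptions, not part of any library:

```javascript
// Sketch: assemble a markdown observation log from structured fields.
// buildObservationLog and its field shape are hypothetical.
function buildObservationLog({ error, reproduction, expected, actual, environment }) {
  return [
    '# Bug Observation Log',
    '## Error', error,
    '## Reproduction', ...reproduction.map((step, i) => `${i + 1}. ${step}`),
    '## Expected', expected,
    '## Actual', actual,
    '## Environment', ...environment.map((item) => `- ${item}`),
  ].join('\n');
}

const log = buildObservationLog({
  error: "TypeError: Cannot read property 'id' of undefined",
  reproduction: ['User with store credit > cart total', 'Click "Checkout"'],
  expected: 'Deduct store credit, process remainder via card',
  actual: 'Crashes on undefined object access',
  environment: ['Node 18.2', 'Express 4.18'],
});
console.log(log);
```

The payoff is consistency: every bug gets the same sections, so nothing is forgotten when you paste the log into a prompt.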
#### Why Observation Matters
AI can only be as good as the context you provide. A well-structured observation log gives AI the right starting data and prevents it from guessing blindly.
### Phase 2: Orient (Build Understanding)
Use AI to help build a mental model of what's happening:
```
# Prompt Template
"Given this error and context:

[paste observation log]

Help me build a mental model:
1. What is the execution path to the error?
2. What assumptions might be violated?
3. What are 3 possible root causes, ranked by likelihood?
4. What additional data would narrow this down?"
```
The AI's response helps you orient - form hypotheses based on the data:
| Hypothesis | Likelihood | Evidence Needed |
|---|---|---|
| Payment object undefined when store credit covers full amount | High | Log payment object state |
| Store credit deduction happens too early | Medium | Check execution order |
| Validation skipped for zero-balance carts | Low | Review validation logic |
Key point: Don't jump to solutions yet. Orient is about understanding, not fixing.
### Phase 3: Decide (Choose a Test)
Now pick ONE hypothesis to test. Not three. Not "try everything." One.
```
# Decision Process
Hypothesis: Payment object undefined when store credit covers full amount
Test: Add logging before line 42 to inspect payment object state
Expected result: If hypothesis correct, payment should be undefined
Time budget: 10 minutes
Success criteria: Log shows payment state, confirms or refutes hypothesis
```
Ask AI for the minimal code change to test this:
```
# Prompt
"I want to test if payment object is undefined when store credit >= cart total.
Add logging to checkout.js lines 40-42 to inspect payment object state.
Minimal change only - no fixes yet."
```
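What that minimal change might look like, sketched against a hypothetical reconstruction of the function at checkout.js:42 (the surrounding code is assumed; only the two log lines are the diagnostic):

```javascript
// Hypothetical reconstruction of the code around checkout.js line 42.
// Only the two console.log lines are new - diagnostics, not a fix.
function processPayment(payment, order) {
  console.log('payment state:', payment);                            // diagnostic
  console.log('credit >= total:', order.storeCredit >= order.total); // diagnostic
  return { chargedPaymentId: payment.id }; // throws if payment is undefined
}
```

The diagnostic deliberately changes no behavior, so whatever the log shows is evidence about the original bug, not about your change.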
### Phase 4: Act (Execute the Test)
Run the test. Document results:
```
# Test Results
Ran: Added console.log(payment) at line 41
Result: payment is undefined when credit >= total
Hypothesis: CONFIRMED
New information: Payment object not created if credit covers everything

# Updated mental model
Flow: calculate_total() → check_store_credit() → [skip payment creation] → process_payment(undefined)
Root cause: Logic assumes payment always exists, but store credit path skips it
```
Loop back to Observe: You have new data. Update your observation log and start the loop again.
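The updated mental model can be checked in miniature. A hedged reconstruction of the buggy flow - function and field names (`checkout`, `createPaymentIntent`, `storeCredit`) are assumptions based on the flow description, not the actual codebase:

```javascript
// Reconstruction of the buggy checkout flow from the mental model.
function createPaymentIntent(amount) {
  return { id: 'pi_123', amount };
}

function checkout(order) {
  const total = order.items.reduce((sum, item) => sum + item.price, 0); // calculate_total()
  const remaining = total - order.storeCredit;                          // check_store_credit()
  let payment;
  if (remaining > 0) {
    payment = createPaymentIntent(remaining); // skipped when credit covers everything
  }
  return payment; // undefined flows into processPayment() - the root cause
}

const covered = checkout({ items: [{ price: 50 }], storeCredit: 60 }); // payment undefined
const partial = checkout({ items: [{ price: 50 }], storeCredit: 10 }); // payment created
```

Running both inputs side by side makes the conditional creation path, and the gap it leaves, explicit.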
## Real Example: Full OODA Cycle
### Loop 1: Discovery

- **Observe:** Error at checkout.js:42, users with store credit only
- **Orient:** Three hypotheses formed; payment object undefined seems likely
- **Decide:** Add logging to verify payment state
- **Act:** Confirmed - payment is undefined
### Loop 2: Root Cause

- **Observe:** Payment undefined when store credit >= total
- **Orient:** Logic assumes payment always exists, but store credit path skips creation
- **Decide:** Trace code to find where payment should be created
- **Act:** Found - createPaymentIntent() only called if remaining_balance > 0
### Loop 3: Fix Design

- **Observe:** createPaymentIntent() skipped, but processPayment() expects it
- **Orient:** Either (1) create a placeholder payment, or (2) skip payment processing when the balance is zero
- **Decide:** Option 2 is cleaner - add an early return in processPayment()
- **Act:** Implement the fix, test, verify
```javascript
// Fix
function processPayment(payment, order) {
  // Early return for store-credit-only orders
  if (!payment) {
    console.log('No payment needed - covered by store credit');
    return { success: true, method: 'store_credit' };
  }

  // Original payment processing logic
  return stripe.processPayment(payment);
}
```
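A quick sanity check of both branches of the fix, with the stripe call stubbed so the sketch runs standalone (the stub's shape is an assumption, not the real SDK):

```javascript
// Exercise both paths of the fixed processPayment.
// `stripe` here is a stub standing in for the real SDK call.
const stripe = {
  processPayment: (payment) => ({ success: true, method: 'card', paymentId: payment.id }),
};

function processPayment(payment, order) {
  if (!payment) {
    return { success: true, method: 'store_credit' };
  }
  return stripe.processPayment(payment);
}

const creditOnly = processPayment(undefined, { total: 50, storeCredit: 60 });
const cardPath = processPayment({ id: 'pi_123' }, { total: 50, storeCredit: 10 });
```

Verifying both paths matters: an early return that accidentally shadowed the card path would pass the original bug report and fail every paying customer.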
## Failure Patterns
### 1. Skipping Orient (Jumping to Solutions)
Symptom: You ask AI "how do I fix this?" before understanding what "this" is.
Fix: Force yourself to articulate the mental model before asking for solutions.
### 2. Testing Multiple Hypotheses at Once
Symptom: You change 3 things, something works, but you don't know what fixed it.
Fix: One hypothesis per loop. Always.
### 3. Not Documenting Results
Symptom: You try the same failed solution twice because you forgot.
Fix: Keep a running log. Commit it to a debug branch if needed.
### 4. Giving Up on Orient Too Early
Symptom: "I don't know what's wrong, I'll just try random stuff."
Fix: If you can't orient, go back to observe. Get more data.
## When OODA Is Overkill
OODA is for complex debugging. For simple bugs (typos, obvious logic errors), just fix them. Don't force structure where speed wins.
## AI Prompts for Each OODA Phase
### Observe Phase Prompts

```
"Help me gather debugging information for this error:

[error message]

What else should I collect to understand this?"
```
### Orient Phase Prompts

```
"Given this context:

[observation log]

Generate 3 hypotheses for the root cause, ranked by likelihood.
For each, explain what evidence would confirm or refute it."
```
### Decide Phase Prompts

```
"I want to test this hypothesis:

[hypothesis]

What's the minimal code change or logging to test this?
No fixes - just diagnostic code."
```
### Act Phase Prompts

```
"I tested [hypothesis] and got this result:

[test results]

Update the mental model. What should I test next?"
```
## Quick Reference
OODA Loop Summary:
| Phase | Goal | AI Role | Output |
|---|---|---|---|
| Observe | Gather data | Suggest what to collect | Observation log |
| Orient | Build mental model | Generate hypotheses | Ranked hypotheses |
| Decide | Pick one test | Design minimal test | Test plan |
| Act | Execute & document | Interpret results | Updated model |
Loop Exit Conditions:
- Bug is fixed and verified
- Root cause identified (even if not fixable now)
- Time budget exhausted (escalate or defer)
- Hypothesis space exhausted (need fresh perspective)
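Exit conditions are easier to judge when the loops are on record. A sketch of a running log that also guards against retrying a ruled-out hypothesis (the entry shape is illustrative, not a prescribed format):

```javascript
// Sketch: record each OODA loop so you never retest a ruled-out hypothesis.
const oodaLog = [];

function recordLoop({ observe, orient, decide, act }) {
  oodaLog.push({ loop: oodaLog.length + 1, observe, orient, decide, act });
}

function alreadyTried(hypothesisTest) {
  return oodaLog.some((entry) => entry.decide === hypothesisTest);
}

recordLoop({
  observe: 'Error at checkout.js:42, users with store credit only',
  orient: 'Payment object undefined seems most likely',
  decide: 'Add logging to verify payment state',
  act: 'Confirmed - payment is undefined',
});
```

Checking `alreadyTried()` before each Decide step is the programmatic version of failure pattern 3: the log, not your memory, is the source of truth.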
OODA vs Random Debugging:
```
# Random Debugging (don't do this)
"Try this fix" → doesn't work
"Try that fix" → doesn't work
"Maybe this?" → doesn't work
[repeat until frustrated]

# OODA Debugging
Observe → Orient → Decide → Act
[new data] → Observe → Orient → Decide → Act
[understanding deepens] → Observe → Orient → Decide → Act
[root cause found] → Fix applied
```