The Problem
You're exploring a new API. You don't know what works yet. You try something, it fails. You adjust, try again. Each attempt takes five minutes: write a test file, run it, read the results, edit, repeat. The feedback loop is too slow. By the time you see the error, you've forgotten what you tried.
Slow feedback kills exploration.
In a Python REPL, you type code, hit enter, see results instantly. That tight loop lets you build understanding through rapid experimentation. You need the same pattern for AI-assisted coding.
The Core Insight
Treat your AI like a REPL: ask → get answer → adjust → ask again, in seconds not minutes.
The traditional REPL cycle: Read-Eval-Print-Loop. You enter code, it evaluates, prints result, you loop back.
The AI REPL cycle: Prompt-Generate-Test-Loop. You describe what you want, AI generates code, you test it, you loop back with feedback.
The key: minimize time between prompt and verification. Fast loops build intuition.
The Walkthrough
Pattern 1: Single-Function REPL
You're exploring how to use a new library function. Traditional approach: read the docs, write a test file, run, debug. REPL approach:
# Iteration 1
You: "Show me how to parse JSON from a file using Python's json module"
AI: [code snippet]
You: Run it → Error: file not found
# Iteration 2
You: "Same but handle FileNotFoundError"
AI: [updated snippet]
You: Run it → Works!
# Iteration 3
You: "Now parse nested JSON with error handling for malformed data"
AI: [enhanced snippet]
You: Run it → Works with edge cases
Three iterations in under 2 minutes. Each builds on the last. You're learning the API through doing, not reading.
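Iteration 3's snippet might land somewhere like this (the function name and the empty-dict fallback are illustrative choices):

```python
import json

def load_json(path: str) -> dict:
    """Parse JSON from a file, handling missing files and malformed data."""
    try:
        with open(path) as f:
            return json.load(f)  # handles arbitrarily nested JSON
    except FileNotFoundError:
        print(f"File not found: {path}")
        return {}
    except json.JSONDecodeError as err:
        print(f"Malformed JSON in {path}: {err}")
        return {}
```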
Pattern 2: Architecture Exploration
You're designing a new feature. You don't know the best structure yet. Use REPL to try different approaches:
# Iteration 1: Simple approach
You: "Create a cache decorator for API calls"
AI: [simple in-memory dict cache]
You: Test → Works, but loses data on restart
# Iteration 2: Add persistence
You: "Same but persist to disk"
AI: [pickle-based cache]
You: Test → Works, but not thread-safe
# Iteration 3: Production-ready
You: "Same but thread-safe and with TTL"
AI: [Redis-based cache with expiry]
You: Test → Perfect
You evolved the design through iteration, not upfront planning. Each step validated before moving forward.
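One plausible shape for the iteration 3 result, sketched here with a lock and an in-process TTL dict instead of Redis so it runs standalone (the decorator name and default TTL are assumptions):

```python
import threading
import time
from functools import wraps

def ttl_cache(ttl_seconds: float = 60.0):
    """Thread-safe in-memory cache with per-entry expiry."""
    def decorator(func):
        store = {}  # key -> (value, expiry timestamp)
        lock = threading.Lock()

        @wraps(func)
        def wrapper(*args, **kwargs):
            key = (args, tuple(sorted(kwargs.items())))
            now = time.monotonic()
            with lock:
                entry = store.get(key)
                if entry and now < entry[1]:
                    return entry[0]  # fresh hit
            value = func(*args, **kwargs)  # miss: compute outside the lock
            with lock:
                store[key] = (value, now + ttl_seconds)
            return value
        return wrapper
    return decorator
```

Swapping the dict for Redis changes the storage calls, not the decorator's structure.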
Pattern 3: Debugging REPL
There's a bug in complex logic. Instead of reading the code top to bottom, REPL your way to understanding:
# Step 1: Isolate
You: "Extract the calculate_discount function and show example inputs/outputs"
AI: [isolated function with examples]
You: "Input: price=100, user_tier='gold' → Output should be 80"
# Step 2: Test edge case
You: "What happens with price=100, user_tier='platinum'?"
AI: "Would return 75 (25% discount)"
You: "That's the bug - platinum should be 20%, not 25%"
# Step 3: Fix
You: "Fix the platinum tier discount to 20%"
AI: [corrected code]
You: Test → Fixed
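A hypothetical reconstruction of the fixed function, with the transcript's numbers baked in:

```python
TIER_DISCOUNTS = {
    "gold": 0.20,      # price=100 -> 80, matching Step 1's expected output
    "platinum": 0.20,  # was 0.25 (returned 75); corrected in Step 3
}

def calculate_discount(price: float, user_tier: str) -> float:
    """Return the price after applying the user's tier discount."""
    return price * (1 - TIER_DISCOUNTS.get(user_tier, 0.0))
```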
You didn't read the whole codebase. You REPLed your way to the bug.
Building REPL Muscle Memory
The skill is knowing what question to ask next. Train your intuition:
| Result | Next Prompt Pattern |
|---|---|
| Works perfectly | "Add [next feature]" |
| Error message | "Fix this error: [paste error]" |
| Works but seems fragile | "Add error handling for [edge case]" |
| Works but too slow | "Optimize this for [constraint]" |
| Confusing code | "Explain what this does step-by-step" |
Real Example: Exploring a New Framework
You need to learn FastAPI. Traditional: read docs for an hour, then code. REPL: learn by doing.
Iteration 1: Hello World
You: "Create a minimal FastAPI app with one GET endpoint"
AI:
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def root():
    return {"message": "Hello World"}
You: Run → Works. Understand: decorators define routes; returned dicts become JSON
Iteration 2: Add Complexity
You: "Add a POST endpoint that accepts JSON and returns it"
AI: [adds POST endpoint with pydantic model]
You: Test with curl → Works. Understand: Pydantic models for validation
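Iteration 2's addition might look like this (the model and route names are invented):

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()  # continuing from iteration 1

class Echo(BaseModel):
    message: str

@app.post("/echo")
def echo(payload: Echo):
    # FastAPI validates the JSON body against the model, then serializes it back
    return payload
```

Test it with: curl -X POST http://localhost:8000/echo -H "Content-Type: application/json" -d '{"message": "hi"}'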
Iteration 3: Real Feature
You: "Add user authentication with JWT tokens"
AI: [adds auth dependency, JWT logic]
You: Test → Error: secret key undefined
You: "Move secret to environment variable"
AI: [updated with os.getenv]
You: Test → Works. Understand: dependencies for shared logic
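A minimal sketch of where iteration 3 might end up, assuming PyJWT and HS256 tokens; the route and the claims-as-user shortcut are invented for illustration:

```python
import os

import jwt  # PyJWT, one common choice for JWT handling
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import OAuth2PasswordBearer

app = FastAPI()
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")
SECRET_KEY = os.getenv("SECRET_KEY", "")  # the iteration 3 fix: secret from the environment

def current_user(token: str = Depends(oauth2_scheme)) -> dict:
    """Shared auth logic, injected into routes as a dependency."""
    try:
        return jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
    except jwt.PyJWTError:
        raise HTTPException(status_code=401, detail="Invalid token")

@app.get("/me")
def me(user: dict = Depends(current_user)):
    return user
```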
Total time: 15 minutes. You learned FastAPI basics through 10+ tiny iterations, not an hour of reading up front.
Why This Works
Your brain learns better from doing + immediate feedback than from reading + delayed application. Each REPL cycle is a learning moment.
Failure Patterns
1. Context Loss Between Iterations
Symptom: AI's 5th iteration forgot what you did in iteration 1.
Fix: Summarize state in each prompt: "Given this code [snippet], now add [feature]"
# Bad (loses context)
Iteration 1: "Create a parser"
Iteration 5: "Add error handling" # AI doesn't remember the parser
# Good (maintains context)
Iteration 5: "For this parser [paste code], add error handling for [case]"
2. Not Testing Each Iteration
Symptom: You ask for 5 changes, AI implements all, 3 are broken, you don't know which.
Fix: Test after EVERY iteration. One change at a time.
3. Too-Big Iterations
Symptom: "Add auth, database, caching, and logging" returns a mess.
Fix: Smaller steps. "Add auth" → test → "Add database" → test → etc.
4. No Mental Model Updates
Symptom: You've done 20 iterations but can't explain how the code works.
Fix: Every 3-5 iterations, ask: "Explain how this works now"
When REPL Style Fails
- Large refactors: Too many interdependencies to change incrementally
- Complex architecture: Need upfront design, not exploration
- Production bugs: Can't afford trial-and-error (use OODA instead)
- Security-critical code: Must be correct the first time, not iterated into shape
Advanced REPL Patterns
The Checklist REPL
Use AI to generate a checklist, then REPL through each item:
You: "I need to build a user registration API. Give me a checklist of what to implement"
AI:
1. POST /register endpoint
2. Password hashing
3. Email validation
4. Duplicate user check
5. Send confirmation email
6. Token generation
You: "Implement #1"
AI: [code for endpoint]
You: Test → Works
You: "Implement #2"
AI: [adds bcrypt hashing]
You: Test → Works
[Continue through checklist]
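Checklist item #2 might come back as something like this (assuming the bcrypt library):

```python
import bcrypt  # pip install bcrypt

def hash_password(plain: str) -> bytes:
    # gensalt() produces a random salt; bcrypt embeds it in the returned hash
    return bcrypt.hashpw(plain.encode("utf-8"), bcrypt.gensalt())

def verify_password(plain: str, hashed: bytes) -> bool:
    return bcrypt.checkpw(plain.encode("utf-8"), hashed)
```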
The Type-Driven REPL
Start with types, REPL the implementation:
You: "Define TypeScript types for a shopping cart system"
AI: [types for Cart, Item, User, Order]
You: "Implement addItem function matching these types"
AI: [implementation]
You: Test → Type error on discount calculation
You: "Fix type error in discount calculation"
AI: [corrected types and logic]
You: Test → Passes
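The same pattern carries over to Python with type hints; a minimal sketch, types first and implementation second (all names are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    sku: str
    price: float
    quantity: int = 1

@dataclass
class Cart:
    items: list[Item] = field(default_factory=list)

def add_item(cart: Cart, item: Item) -> Cart:
    """Implementation written to satisfy the types above; a checker like mypy keeps it honest."""
    cart.items.append(item)
    return cart
```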
The Test-First REPL
Generate tests, then REPL until they pass:
You: "Write tests for a URL shortener"
AI: [test suite]
You: Run → All fail (no implementation)
You: "Implement code to make test_create_short_url pass"
AI: [implementation]
You: Run → 1 pass, 4 fail
You: "Make test_retrieve_original_url pass"
AI: [updated implementation]
You: Run → 2 pass, 3 fail
[Continue until all tests pass]
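The first two tests in the suite might look like this (the shortener module and its function names are hypothetical):

```python
# test_shortener.py
from shortener import create_short_url, retrieve_original_url  # module under construction

def test_create_short_url():
    long_url = "https://example.com/some/very/long/path"
    short = create_short_url(long_url)
    assert short and len(short) < len(long_url)

def test_retrieve_original_url():
    short = create_short_url("https://example.com")
    assert retrieve_original_url(short) == "https://example.com"
```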
Optimizing Your REPL Speed
Hotkeys and Snippets
# Save common REPL prompts as snippets
Snippet: "add-error" → "Add error handling for [cursor here]"
Snippet: "explain" → "Explain how this code works: [paste]"
Snippet: "fix-error" → "Fix this error: [paste traceback]"
Snippet: "optimize" → "Optimize this for [paste perf constraint]"
Parallel REPLs
Run multiple exploration threads simultaneously:
# Terminal 1: Exploring auth approach A
[REPL iterations on JWT]
# Terminal 2: Exploring auth approach B
[REPL iterations on session-based]
# Compare results, pick winner
REPL State Management
# After each successful iteration, snapshot state
git add . && git commit -m "REPL: basic auth working"
# If the next iteration breaks things
git reset --hard HEAD # Back to last working state
# Try different approach
[new REPL iteration]
Quick Reference
REPL Loop Structure:
prompt = describe_next_step()
while not satisfied:
    code = ai.generate(prompt)
    result = test(code)
    if result.works:
        commit_state()
        prompt = describe_next_step()    # move on to the next small change
    else:
        prompt = f"Fix: {result.error}"  # feed the failure straight back in
Good REPL Prompts:
- "Show me how to [specific task]" (exploration)
- "Add [one feature] to this code: [paste]" (iteration)
- "Fix this error: [paste]" (debugging)
- "Explain what this does: [paste]" (understanding)
- "Make this faster/safer/cleaner: [paste]" (refinement)
REPL Workflow:
| Phase | Action | Time Budget |
|---|---|---|
| Prompt | Ask for next small change | 10 seconds |
| Generate | AI provides code | 5-10 seconds |
| Test | Run and verify | 10-30 seconds |
| Loop | Decide next iteration | 5 seconds |
Target: 1 minute per iteration or less
Exit Conditions:
- Feature works as intended
- Hit a blocker (switch to OODA for strategic thinking)
- Losing track of changes (commit and consolidate)
- Code quality degrades (time to refactor deliberately)