// agents.output.length → ∞ · human.attention.capacity → 4–7 items

Skimming & Scanning

A developer's survival skill in the age of AI agents
Lenora AI Workflows · March 2026

The Problem

AI agents generate thousands of lines — reading it all is not an option
1,000s
of lines generated
per agent run
~23
LOC per PR average
(Huang et al. 2026)
But ignoring it is risky:
  • Bad code quietly compounds across turns
  • Repetitive patterns hide in volume — invisible at speed
  • Unmaintainable design creeps in unnoticed
  • The later you catch it, the more expensive the fix
Key insight: AI agents produce 1.87× more code redundancy than humans, yet reviewers react more positively to AI code — a dangerous blind spot (Huang et al. 2026).
References
  • Paper Huang, H. et al. (2026). "More Code, Less Reuse: AI-generated PR Quality." MSR. PDF
  • Paper Sadowski, C. et al. (2018). "Modern Code Review: A Case Study at Google." ICSE-SEIP. ACM DL

Your Brain on Agent Output

What cognitive science tells us about human processing limits
4–7
items held in working memory at once
~100
WPM reading technical code
vs. ~250 WPM prose · Sweller (1988)
0
selective attention under cognitive overload
Broadbent (1958) — Filter Theory
Working memory caps at 4–7 items (refined to 4±1 for complex tasks). Technical code reads at ~100 WPM versus ~250 WPM for narrative prose. Under cognitive overload, selective attention collapses entirely — your brain's filter stops working.
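A back-of-the-envelope calculation makes the mismatch concrete. This is a minimal sketch; the ~8 words per line is an illustrative assumption, not a figure from the cited papers:

```python
LINES = 1_000        # a single agent run
WORDS_PER_LINE = 8   # illustrative assumption for typical code
CODE_WPM = 100       # technical-code reading speed cited above

minutes = LINES * WORDS_PER_LINE / CODE_WPM
print(f"Careful reading of {LINES:,} lines takes ~{minutes:.0f} minutes")
```

At roughly 80 minutes per run, line-by-line reading is off the table before working-memory limits even enter the picture.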
Key insight: You physically cannot process 1,000+ lines of agent output. The question is not whether to skip — it's how to skip intelligently.
References
  • Paper Miller, G.A. (1956). "The Magical Number Seven, Plus or Minus Two." Psychological Review, 63(2). Scholar
  • Paper Cowan, N. (2001). "The magical number 4 in short-term memory." Behavioral & Brain Sciences, 24(1). Scholar
  • Paper Sweller, J. (1988). "Cognitive load during problem solving." Cognitive Science, 12(2). Scholar
  • Book Broadbent, D.E. (1958). "Perception and Communication." Pergamon Press.

What We're Looking For

Bad patterns hide in volume — these are the red flags to detect early

Repetitive Code

The agent copies logic instead of abstracting it. DRY violations that feel "good enough" because they work — until you need to change them across 12 copies.

handleUserCreate() { // 80 lines }
handleUserUpdate() { // 78 lines }  // 80% identical
handleUserDelete() { // 75 lines }  // still 80%

Non-Maintainable Design

Deeply nested logic, missing separation of concerns, functions that do too much. Technically correct, practically a nightmare to extend or test.

processAll(data) {
  // 200 lines
  // 10 responsibilities
  // good luck changing this
}

Compounding Patterns

A small bad decision in turn 1 becomes a hard-wired convention by turn 12. Agents repeat and amplify their own prior outputs.

// turn 1:  bad naming convention
// turn 6:  same bad names everywhere
// turn 12: refactoring costs 3×
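The copy-paste flavor of these red flags is cheap to check mechanically. A minimal sketch using Python's standard `difflib` (the two handler bodies are hypothetical stand-ins for agent output; textual similarity catches copy-paste repetition, not semantic clones, which need deeper analysis):

```python
from difflib import SequenceMatcher

# Hypothetical agent-generated handler bodies, differing in one call.
handle_user_create = """
validate(payload)
user = build_user(payload)
db.save(user)
notify(user)
"""

handle_user_update = """
validate(payload)
user = build_user(payload)
db.update(user)
notify(user)
"""

def similarity(a: str, b: str) -> float:
    """Ratio of matching content between two code blocks (0.0 to 1.0)."""
    return SequenceMatcher(None, a, b).ratio()

ratio = similarity(handle_user_create, handle_user_update)
if ratio > 0.8:
    print(f"red flag: handlers are {ratio:.0%} identical; consider abstracting")
```

A check like this can run on every generated file pair and surface the "80% identical" pattern above without anyone reading either function in full.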
Key insight: AI agents generate Type-4 semantic clones — functionally identical code with different syntax. Existing clone detection tools miss them entirely (Huang et al. 2026).
References
  • Paper Huang, H. et al. (2026). "More Code, Less Reuse." AMR 0.2867 (agents) vs. 0.1532 (humans). PDF
  • Paper Liu, N.F. et al. (2024). "Lost in the Middle." Context degradation amplifies repetition. PDF

Two Tools, One Goal

Skimming and scanning are complementary — not interchangeable
Technique 1

Skimming

Get the big picture fast
  • Read at speed to assess overall direction
  • Catch structural problems: does this make sense at all?
  • Identify if the agent is going off track
  • ~10–20 seconds per output block is enough
"Is the agent on the right path?"
Technique 2

Scanning

Hunt for specific signals
  • Move eyes fast looking for known red flags
  • Target keywords: repeated names, long functions, God objects
  • Jump across locations — not linear reading
  • You know bad patterns — scan for their fingerprints
"Is a bad pattern hiding in here?"
Key insight: Skimming answers "is this going in the right direction?" — scanning answers "is something wrong hiding in here?" Use them in sequence, not interchangeably.
References
  • Paper Sweller, J. (1988). "Cognitive Load Theory." Cognitive capacity constrains review strategy. Scholar
  • Paper Rigby, P.C. & Bird, C. (2013). "Convergent Contemporary Software Peer Review." Effective reviews target ~20 LOC. ACM DL

The Normal Case

Reading and steering in real-time as the LLM streams output
The window to intervene is short. As the LLM generates output, you skim the structure to check direction, then scan for red flags. If a pattern is spotted, you intervene early and redirect the prompt. The loop runs continuously — every intervention is cheaper than the next one you skip.
Skim
Structure → direction check
~10–20 seconds per block
Scan
Red flags → intervene early
pattern spotted? → redirect
Key insight: The LLM starts generating → skim the structure → scan for red flags → intervene early → redirect the prompt → loop. The cost of correction compounds with each turn you skip.
References
  • Paper Sadowski, C. et al. (2018). "Modern Code Review at Google." Median first response ~1 hour; review is continuous. ACM DL
  • Docs Google. "Small CLs." Engineering Practices. google.github.io

The Extreme Case

Multiple agents, parallel outputs, exponentially more noise
Experimental Agent Teams Mode: multiple agents running in parallel, each generating output in their own panel. Bad patterns don't announce themselves — they accumulate silently across panels while you're watching a different one.
$ CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1 claude
agent-1 [orchestrator]
> analyzing codebase... > spawning worker agents > task: refactor db layer > writing models/user.go > writing models/order.go
agent-2 [worker: db]
> received: refactor db > writing db/queries.go > creating 12 new files > 847 lines generated... > pattern: copy-paste loop
agent-3 [worker: api]
> received: api layer > creating handlers... > handleUser() { 120 lines } > handleOrder() { 118 lines } > 90% identical — not abstracted
Key insight: In multi-agent mode, your working memory of 4–7 items is split across N agents. Parallelize your attention deliberately, not accidentally.
References
  • Paper Han, T. et al. (2026). "SWE-Skills-Bench: Agent Skills in Real-World SE." PDF
  • Book Broadbent, D.E. (1958). "Perception and Communication." Filter Theory: overload collapses focus entirely.

Recommendations

Hard-won advice from working with AI agents daily
01
Train your eye on bad patterns first. You can't scan for what you don't recognize. Study anti-patterns deliberately — repetition, God functions, naming drift — before you need to spot them at speed.
02
Set checkpoints, not just endpoints. Review agent output every 100–200 lines — not after 2,000. The earlier the catch, the smaller the revert.
03
In multi-agent mode, assign one focus per panel. Don't try to track 3 agents simultaneously — your working memory caps at 4–7 items total.
04
Trust your gut signal. If something feels "off" while skimming, stop and scan. Intuition is trained pattern recognition — it fires before your conscious mind catches up.
05
Redirect early and clearly. A precise correction now saves 10× the cost of refactoring after 500 more generated lines. The compound interest of bad patterns works against you fast.
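Recommendation 02 can be wired into tooling rather than left to discipline. A minimal sketch of a checkpoint wrapper around a streaming agent, where the `review` hook is a hypothetical placeholder for the human skim/scan pass:

```python
from typing import Callable, Iterable, Iterator

def with_checkpoints(
    stream: Iterable[str],
    review: Callable[[int], None],
    every: int = 150,  # inside the 100-200 line band recommended above
) -> Iterator[str]:
    """Pass agent output through, pausing for a review at each checkpoint."""
    count = 0
    for line in stream:
        count += 1
        yield line
        if count % every == 0:
            review(count)  # skim structure, scan red flags, redirect if needed

# Hypothetical usage: surface a reminder every 150 streamed lines.
agent_output = (f"line {i}" for i in range(1, 301))
consumed = list(with_checkpoints(agent_output, lambda n: print(f"checkpoint at line {n}")))
```

A real integration might pause the agent at each checkpoint or pop the last chunk into a diff view; the point is that checkpoints fire on line count, not on your memory to stop and look.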
Key insight: Google's code review data shows ~24 LOC median per CL, with ~1 hour median first response. Small and fast — not big and thorough — is the pattern that works at scale (Sadowski et al. 2018).
References
  • Paper Sadowski, C. et al. (2018). "Modern Code Review at Google." ~24 LOC median; single-reviewer norm. ACM DL
  • Paper Rigby, P.C. & Bird, C. (2013). "Convergent Contemporary Software Peer Review." Practices converge to ~20 LOC, 2 reviewers, <1 day turnaround. ACM DL
The Takeaway

Read less.
Detect more.

You'll never read every line an agent writes. But you can always catch the patterns that matter — if you know what to look for and how to look fast.