Prompt, Wait, Scroll
Deen Khan
29 Jan 2026



The Attack of the Luddites
In the 1700s, handlooms demanded constant attention: the weaver’s feet worked the treadles and their hands the shuttle. Nothing happened on its own. The loom only moved because the weaver did. Power looms mechanised this process: instead of human energy, they ran on gears and belts. The work became episodic: set the machine, watch it run, intervene when it failed, then wait while the weaving happened automatically (to cope with the wait, workers began clog dancing in rhythm to the pounding looms to stay alert). The human role shifted from doing the work to intervening when it went wrong.
Before the Gaps Arrived
The shape of work has historically looked like a waveform. We sit down, build into the task, reach a stretch of focus, then fade. There's a biological reason for this: our body’s ultradian rhythm cycles through periods of alertness and fatigue, roughly every ninety minutes. Good workers learn to shape the wave: shorten the ramp, stretch the peak, delay the decline.
A developer adding a feature doesn’t produce much in the first hour. They pull the repo and rebuild the system in their head (Paul Graham says you can't program well in units of hours; it’s barely enough time to get started). Then the context clicks. They know which files matter. Code comes quickly. But after a while the edge fades. They reread the same lines. Small changes break things. That’s when they commit and stop.
Some problems only yield to immersion. Designing a system with many interacting constraints. Debugging behaviour that emerges from parts working together. These require holding more in your head than you can reload quickly. The insight doesn't arrive in the first ten minutes. It arrives after you've been inside the problem long enough to feel what's missing.


Senna at Monaco
That shape of work matters. The waveform lets our mind hold a stable context long enough to think. At the limit, we enter “flow”, where high skill meets high challenge – avoiding both boredom (high skill, low challenge) and anxiety (low skill, high challenge). Flow allows our brain to form richer neural connections. Concentration becomes effortless, time disappears, and quality improves. One of the clearest examples was Ayrton Senna’s Lap of the Gods at the 1988 Monaco Grand Prix. Afterwards, he described it as an almost out-of-body experience:
“And suddenly I realised that I was no longer driving the car consciously. I was driving it by a kind of instinct, only I was in a different dimension. It was like I was in a tunnel.”
Prompt, Wait, Drift
AI has changed the shape of work. It looks like a step function, not a waveform.
The same developer doesn’t build into the problem, they start by asking for an answer. They describe what they want, hit enter, and wait. The wait is dangerous. When nothing happens for a few seconds, our mind wanders. So, while the model works, they don’t – they scroll X, read the news. Progress happens offscreen and then code appears. They skim, spot an issue, describe the fix. Then, they wait. They check email, open another tab. A revision appears. They test it, find an edge case and describe that too. The pattern repeats: prompt, wait, review, prompt. Each step is a visible leap forward, separated by idle gaps filled with distraction.
The gaps are unpredictable. Sometimes the wait is twenty seconds. Sometimes two minutes. You never know which. Twenty seconds isn't long enough to do anything useful, but it's long enough to lose focus. Two minutes is long enough to check your phone - and then you're gone. If the waits were consistent, you could plan for them. But they're not. So, you drift.
Faster Overall
The step function creates idle time, but overall productivity is still rising. Developers using AI complete more tasks. Teams ship more features. A Google engineer recently admitted that Claude Code generated in sixty minutes a distributed system her team had spent a year building. An Anthropic study of Claude conversations found an eighty percent reduction in the time it takes lawyers, developers and analysts to complete tasks.
The rhythm has become fragmented, but output is up. So what's the problem? Not all work needs the wave. Routine tasks, clear specs, low-judgement problems - the step function works fine for those. The danger is applying it to everything. Some work requires immersion. When you squeeze that into prompt-wait-review, the costs don't show up in the output metrics. They show up later, when something breaks.


Three Minutes Over the Atlantic
In 2009, Air France Flight 447 fell into the Atlantic. Ice had blocked the pitot tubes that measure airspeed. When the autopilot saw conflicting data, it did what it was designed to do: disengage and hand back control.
The handoff came at a bad time. It was nighttime, over the ocean, in a storm. The captain was on break. The co-pilot was suddenly flying manually. He reflexively pulled back on the stick and stalled the plane. For three minutes the aircraft fell out of the sky. All the while the stall alarms blared, but the overwhelmed crew didn’t know what was happening.
Modern flying is now mostly supervision. Pilots engage autopilot shortly after take-off and monitor it until landing. It’s made aviation safer. But it’s also produced “the irony of automation”: the more reliable a system becomes, the less practice humans get and the longer we go without stimulation. When the system fails, it fails suddenly, and we’re expected to jump in and perform perfectly in a situation we now rarely experience. Airlines now force pilots to practise manual flying and design the cockpit to keep them mentally engaged.
AI is pushing work in a similar direction. Creation will give way to curation. Much of the day becomes reviewing: skimming outputs, approving drafts, nudging the system forward. The danger is that we do less until it suddenly matters. When an edge case arrives, we’ll have to dive deep, fast – and if we haven’t been practising that kind of thinking, the failure mode won’t be gentle. The same developer who now prompts instead of codes has been shipping features for months. The AI handles implementation. They review, approve, ship. Then one night, production breaks. Users are losing data. The logs show a race condition – the kind of bug you only catch if you've written enough concurrent code to feel when something's off. Six months ago, they'd have recognised it immediately. But they haven't debugged manually in months. Now they're staring at the logs, knowing something's wrong, not knowing why.
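It's worth making that class of bug concrete. The sketch below is hypothetical – a toy counter, not the developer's system: two threads perform an unsynchronised read-modify-write, and updates silently disappear. The locked version is the same loop made atomic.

```python
import threading
import time

N = 1000
counter = 0
lock = threading.Lock()

def racy_increment():
    """Read-modify-write without a lock: updates can be lost."""
    global counter
    for _ in range(N):
        current = counter      # read
        time.sleep(0)          # yield: another thread may read the same stale value
        counter = current + 1  # write back, possibly clobbering the other thread's update

def safe_increment():
    """The same loop, but the read-modify-write is atomic under the lock."""
    global counter
    for _ in range(N):
        with lock:
            counter += 1

def run(worker):
    """Run two copies of `worker` concurrently and return the final count."""
    global counter
    counter = 0
    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

racy_total = run(racy_increment)   # usually well below 2000
safe_total = run(safe_increment)   # always exactly 2000
```

The unsettling part, and the point of the anecdote, is that the racy version often passes a casual test run: the bug only surfaces under the right interleaving, which is why spotting it depends on pattern recognition built by writing this kind of code yourself.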


Will We Shape It or Get Shaped?
The step function leaves us with dead time. The gaps between prompts are too short to do anything meaningful and too long to ignore. The question is what we do with them.
One option is to surrender. YC recently backed Chad Labs, which integrates brainrot directly into the IDE. They saw the gaps in the step function and chose to fill them with TikTok, gambling and Tinder. Their logic is nihilistic but honest: developers already fill these gaps with their phones. They just lose track of time and face friction switching back. By building distractions into the tool and killing it when the output arrives, they argue they’re actually reducing context switching.
A better approach is to redesign the wait. In the nineties, Manhattan building managers kept getting complaints that elevators were too slow. The engineering fix was expensive. A consultant suggested something cheaper: install mirrors. People checked their hair, adjusted their clothes, looked at each other, and the complaints stopped. The elevators weren’t any faster; the wait just felt shorter.
What if AI tools did the same? Instead of leaving you staring at a spinner, they could prompt you to predict what's coming. A simple text field: what do you expect the output to look like? The prediction keeps you in the problem during the wait. It also makes you accountable. When the output arrives, the prediction is still visible. If you predicted one thing and the output arrives differently, you notice the gap. Psychologists call this the testing effect - predicting before seeing improves understanding, even when you're wrong. The act of committing to a position changes how you process the answer. You're comparing, not just receiving.
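As a minimal sketch of what that interaction could look like (the function name, the model call and the user input are all illustrative stand-ins, not any real tool's API):

```python
def run_with_prediction(prompt, generate, ask=input):
    """Ask for a prediction before revealing the model's output.

    `generate` stands in for any model call; `ask` defaults to reading
    from the terminal but can be swapped out (e.g. for a UI text field).
    """
    prediction = ask(f"Before the answer arrives: what do you expect '{prompt}' to produce? > ")
    output = generate(prompt)  # the wait happens here; the prediction keeps you in the problem
    return {
        "prediction": prediction,
        "output": output,
        # keep the prediction visible next to the output so the gap is noticeable
        "matched": prediction.strip() == str(output).strip(),
    }

# Example with a stubbed-in model and user:
result = run_with_prediction(
    "sum of 2 and 2",
    generate=lambda p: "4",   # placeholder for a real model call
    ask=lambda q: "4",        # placeholder for the user's typed prediction
)
```

The design choice that matters is returning the prediction alongside the output rather than discarding it: the comparison, not the prediction itself, is what drives the testing effect.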
But redesigning the wait only solves half the problem. Airlines mandate manual flying hours. Not because autopilot is worse - it's better. But pilots who never hand-fly lose the reflexes they'd need when it fails. The same applies here. You have to keep doing the work manually, even when AI does it better. Write a function before asking AI to optimise it. Debug by hand sometimes. Read AI-generated code line by line. If you can't explain why it works, you don't understand it well enough to ship. Manual work is slower. But the alternative is depending on a system you can no longer evaluate.
The weavers who adapted to power looms became supervisors (the weavers who didn’t became Luddites). The ones who thrived didn't just watch – they stayed engaged with what they supervised, so that when something broke, they could feel it. The step function isn't going away. The question is whether you shape how you use it, or let it shape how you think.