Build First, Understand Later: The New Workflow AI Makes Possible
Red-lining your cognitive limits
I recently came across this blog post which argues AI coding assistants provide “little value” because a programmer’s job is to think. The author dissects a small bit of JavaScript code and points out that beneath the surface there is a lot of hidden complexity and latent assumption. This hidden world contrasts with the relatively straightforward surface world of the few lines of code. AI, the author argues (mostly correctly), can only see the surface symbols and is therefore shut off from this deeper latent world of hidden meaning. The conclusion: AI is of marginal or even negative value because it makes the act of programming thoughtless.
The author of this blog post commits what I consider a basic fallacy: viewing AI as intelligence replacement rather than intelligence amplification. The right way to use AI is as a multiplier of your own ideas. You should be using AI to help you think about things you wouldn’t otherwise be thinking about, or to think at a higher, more strategic level. Generative AI can either be a cheating engine, used to cut corners and skip steps, or it can be a very powerful learning tool. If you use it as a cheating engine, it will yield diminishing returns and, I suspect, turn out much like that blog post describes. Used as a learning engine, it can propel you to new heights. Indeed, I believe it’s possible to use genAI to perform as a “Universal Developer” who can work with any language, tool, or problem with minimal background familiarity, leveraging high-level computer science knowledge as the one invariant across any given problem space.
I have dedicated a significant amount of time over the last several years to discussing AI’s proper place in software engineering, and in digital productivity generally. And although I’m sure there are better and worse ways of using it, documenting what works and what doesn’t is a bit like bottling lightning. Nevertheless, I’ve made some progress. What I’ve come to realize is that genAI betokens a whole new way of thinking, one that is still quite unfamiliar and uncomfortable to many.
To get the most out of AI as a coder, or as a knowledge worker generally, I’ve found that you need to operate at your cognitive “red line”: right at the very limit, and perhaps slightly beyond, your skill level, knowledge, and experience. “Red line” refers to the maximum engine speed (RPM) a car manufacturer considers safe. If your goal were to move as fast as possible, flirting with that limit would be a high-risk/high-reward strategy. If you exceed the red line, however, your engine might stall out or you might lose control of the car. (And you’d probably get a speeding ticket…let’s not focus too much on this part of the analogy.)
What genAI enables you to do is work with tools or languages you don’t yet fully understand, simply by asking, to build things you still don’t quite understand. At that point you are already operating past your cognitive limit. This feeling of being in over your head, of the AI throwing dozens of new concepts and unfamiliar terms at you while the code piles up, is where you want to be, provided that you always follow up and solidify your knowledge of what’s being built.
Unlike a car engine’s red line, which is fixed, our cognitive red lines are adaptive. So long as you are throwing yourself into the thick of it, you are almost never deprived of new opportunities for learning and thinking with AI. The more you operate near your cognitive red line, the more it shifts, until yesterday’s red line is not as high as today’s. Eventually you must circle back and understand the systems that have been created, gradually bringing them under the control of your growing understanding. Then you can introduce more of your own improvements and, over time, make the project more your own. This process, I have discovered, is a virtuous cycle that can lead to rapid and substantial growth.
AI-driven development removes one major speed bump: memorizing the low-level implementation details. That knowledge is the only thing it forces you to sacrifice. But you’ll be so busy thinking about the bigger picture that it won’t matter.
A New Workflow: Build First, Understand Later
In bygone days, developers could only build things that flowed directly from their understanding, and it was much more difficult to black-box any of it. Now genAI enables a build-first, understand-later workflow. When you go big with AI, operating beyond your naked capacity, it gives you a lot to learn and think about, provided that you dig in and push yourself to understand more. We’re entering an age where we can build things we don’t yet fully understand, which means we’re working on borrowed time, always somewhat exposed to uncertainty and over-extended, but also thrust into a heady atmosphere where we can pick up the pieces and catch up over time. In the not-too-distant future, it’s easy to imagine most of what we build being built this way, with no one understanding it until after it’s largely already been built.
In reality, you’d want to understand a bit, then build, then understand more later. It’s generally best practice to delay asking for code for as long as possible, instead building up a plan and understanding of the problem space with the AI before you execute.
Let’s illustrate the point. Say I discover a fascinating research paper. The paper uses math I’m no expert in, but I can piece together enough to know that the concepts would be relevant to a problem I am trying to solve. I upload the paper and have an LLM parse it for me. Often, a SOTA LLM can translate the concepts into a usable prototype. Before long, my hypothesis is either supported or disconfirmed, and I’m cooking with gas.
All of this happens within the space of minutes. Notice that I didn’t take six years to get a PhD to understand the paper completely on my own. Did the AI deprive me of a thinking or learning opportunity? Not really, because in a world without it, I simply would not have worked with that paper at all. Instead I am thinking and working with new possibilities unlocked by the AI.
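To make that loop concrete, here’s a minimal sketch of what the paper-to-prototype step can look like. It’s illustrative only: I’m assuming the OpenAI Python SDK, a gpt-4o model, and that the paper’s text has already been extracted into a paper.txt file; any capable model and client would do, and the real value is the two-step structure (explain first, prototype second).

```python
# Hypothetical sketch of the paper-to-prototype loop described above.
# Assumptions (mine, for illustration): the OpenAI Python SDK, a "gpt-4o"
# model, and the paper's text pre-extracted into paper.txt.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

paper_text = Path("paper.txt").read_text()

# Step 1: get a plain-language explanation of the core method first,
# so there is something to actually understand later.
explanation = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a patient research explainer."},
        {
            "role": "user",
            "content": "Explain the core method of this paper to a strong "
            f"generalist programmer:\n\n{paper_text}",
        },
    ],
).choices[0].message.content

# Step 2: only then ask for a minimal prototype, with assumptions flagged.
prototype = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": f"Given this explanation:\n\n{explanation}\n\n"
            "Write a minimal, self-contained Python prototype of the method. "
            "Comment every assumption you make.",
        },
    ],
).choices[0].message.content

print(prototype)  # the draft I test against my hypothesis, then study in depth
```

The point of the two-step structure is that the explanation becomes a study artifact you circle back to later, which is exactly the “understand later” half of the workflow.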
(It’s sort of analogous to what I think are weak arguments against AI from artists. No artist was going to come by and “ghiblify” you and your friends drinking at the bar. No one is being deprived of that opportunity. The possibility simply didn’t exist.)
Some thinking and learning is definitely going on here; it’s just directed at a higher level of abstraction. To use a somewhat gross but apt analogy, the AI serves as a kind of premasticating mother bird, chewing up advanced concepts so that they become accessible and usable to us “chicks”, the learners who would otherwise not have the stomachs to digest them raw. My understanding will be broken up and extended over a longer period of time. Instead of “gating” productivity behind my full and perfect understanding of the paper, I can begin with a hazy, imperfect knowledge that I progressively improve, in a more nonlinear and extended fashion.
AI Software Development as Scholarship
I believe there’s a whole art form to working with AI this way. I’ve come to view coding more as scholarship. I might be writing less code, and the code I do write tends to be more surgical, but I’m probably thinking about it even more than I did when I wrote it all myself. The truth is, it’s hard to think and code at the same time; that is to say, it’s hard to think about anything other than low-level implementation details when working with something new. And since most developers are specialists of one sort or another, most of the software world is new to them, which meant that without AI, people were more prone to being parochial and sticking to what they could already do. Those days are gone, however, and I think the future belongs to those who can use the new tools to go beyond their limits and create new things of value.
It’s true that one side effect of this approach is that you lose some low-level implementation experience, but in my opinion that part sucks and always sucked. Developers used to have to expend their limited budgets of mental energy on “dot this, backslash that”, syntactic chores and muscle memorization, and while something is gained by working in the low-level implementation details, something is lost too. Only very experienced, super-talented developers could dedicate full attention to low-level implementation details while keeping a big-picture, systems-level view in mind. AI renegotiates the distribution of mental energy here in ways that I find favorable. Low-level implementation knowledge and muscle memory simply matter less now.
Judging from my perusals of Hacker News, what I just shared isn’t a common perception among developers. I suspect the reason is that, much like the population at large, most developers are still in the grips of a “replacement mentality” with AI and have not figured out how to evolve with the technology. Or they’ve been miseducated by poorly articulated trends like vibe coding, and so they’ve come to view working with AI as “so I just stop thinking and ask the AI for stuff?”
Others say that while AI can generate superficially working code, it can’t generate production-grade code. Maybe it can’t on its own, but with my methods, I’m finding that it can and does, if you think with it. This scholarly, “build-first, understand-later” workflow I am promoting, where you generate a prototype you don’t yet fully understand and then progressively bring it under the control of your knowledge, is absolutely amenable to the kind of iterative design required to bring an early prototype to production readiness. Just stick with it and grow your understanding, gradually making it your “own.”
When many people describe the experience of “vibe coding”, they typically define it in managerial terms: the developer has been reduced to a supervisor, bystander, or auditor. I view it more as scholarship. This “hyper-research” isn’t perfect. It’s lossy: you are going to build with things you don’t yet fully understand, and important details may slip through the cracks. The expectation, however, is that you will continuously circle back until you have refined your understanding to sufficiency. If you over-extend yourself and get lost, it’s usually very easy to start over from an improved understanding. You won’t feel the sting of wasted work, because the AI has taken care of the most painful, implementation-heavy parts.
Comparing the Two Approaches
Let’s compare the two work styles, starting with the more traditional approach.
Understand First, Build Later:
+ Flows from your established mental models
+ You can therefore be as confident in the initial build as you are in your own knowledge
+ Likely to be more systematic, feels “comfortable” and in control
+ Scales linearly with experience level
- Tends to lock you into doing best what you have already done well: path dependence and overspecialization
- Risks playing it safe; staying inside comfort zones
- Slower development (especially when doing anything new or challenging)
- Narrower pool of perspectives; you risk propagating your own misconceptions
- Less experienced developers struggle more up-front
- Mental energy burned on low-level implementation details
Build First, Understand Later:
+ Allows you to build anything the AI can do: superhuman breadth
+ Draw from a vast sample space of approaches, not just the ones you’ve memorized
+ Extremely fast scaffolding and drafting
+ Access to the world’s most advanced rubber ducking
- Dependency: the AI is an iffy source of truth and can become a crutch if you’re not vigilant
- Reliability: the AI may lead you down false trails, hallucinate, or reward hack, wasting your time and giving you false confidence
- Decay of low-level implementation knowledge and syntactic muscle memory
- Can feel disorienting or precarious
- Risk of rushed or unvalidated work; prone to blind spots
Nothing’s perfect, and there’s no replacement for hard work and effort. In some ways, doing things the “new way” is as effortful as the old way, or more so. On the bright side, I do think AI has saved developers from a boring and ultimately vulnerable fate of overspecialization. Memorizing microscopic details is less important now, and you are no longer joined at the hip to whatever particular set of microscopic details you happened to have memorized. Broad, general knowledge of computer science and software architecture is more valuable and useful than ever.
This new AI age is both the best of times and the worst of times. It’s the worst of times in the sense that much of the rote work on which easy, dependable jobs depended is now gone and won’t be coming back. It’s the best of times in that those who want to engage more ambitiously with programming can accelerate and multiply their learning and productivity in all sorts of ways.