Extending the Loop: More Ways to Learn with AI
The core technique in Post 1 is straightforward: set up the observation, do your work, ask the AI to reflect back what it noticed. Do that consistently and you'll build a description of your own learning needs that makes each session sharper than the last. That loop is worth getting comfortable with before adding anything else.
But once it's working, there are a handful of techniques that extend it — ways to catch your assumptions before they lead you astray, test whether you've actually understood something, and build a longer-term record of how your thinking on a topic develops over time. None of them require much extra effort, and they all work the same way the core loop does: by turning the AI's attention toward how you're thinking, not just what you're asking.
The first technique is what you might call an Assumption Auditor. Partway through a complex piece of work, pause and ask the AI: "What assumptions am I making that I should verify? What have I not asked about that might be important?" It's easy to go deep on a problem without noticing that you've taken something for granted early on. This check costs a few minutes and can save you from building on a shaky foundation.
The second is a Knowledge Gap Mapper, and it works best at the start of a session. Before diving into a topic, tell the AI what you already know and where you're uncertain: "I'm working on [X]. I understand [A and B] but I'm less clear on [C and D]. Flag when my questions suggest I'm missing something more foundational." This turns the AI into an early warning system rather than just an answer machine — it's watching for signs that you've skipped a step, not just responding to what you ask.
Reverse Tutoring flips the dynamic. After working through something new, try explaining it back to the AI: "I'm going to describe how [concept] works. Identify any errors in my understanding and flag what I'm oversimplifying." This is a reliable way to find out whether you've actually understood something or just developed a surface familiarity with it. Passively consuming AI explanations can feel like learning without producing much actual learning — this technique forces the difference into the open.
Competency Checkpointing is about making progress visible. After several sessions on a topic, bring back your early questions and ask: "A few weeks ago I was asking [these kinds of questions]. Based on what I'm asking now, what competencies have I developed? What's still unclear?" It's easy to lose track of how far you've come when you're in the middle of learning something, and it's equally easy to mistake familiarity for understanding. This check gives you a more honest read on both.
Session Archaeology is the technique that ties the others together. The idea is simple: keep a running document where you save the AI's observations from each session — what it noticed about your questions, where it flagged gaps, what it suggested you focus on next. Every few weeks, bring that document back to the AI and ask: "Here are my recent session notes. What meta-patterns do you see? What should my opening prompt include that it doesn't yet?"
In practice, this might mean noticing that across five sessions on a new policy area, you kept returning to questions about implementation without ever fully grounding yourself in the policy intent — a pattern no single session would have surfaced on its own. Or it might reveal that you consistently ask "how" before "why," and that your understanding tends to be procedural rather than conceptual. Those kinds of patterns are easy to miss in the moment — you're focused on the work, not on how you're approaching it. Having a written record of what the AI observed across multiple sessions gives you something concrete to look at, and makes it possible to spot trends that would otherwise stay invisible.
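If you're comfortable working in a text editor or terminal, the running document can be as simple as a dated markdown file you append to after each debrief. Here's a minimal sketch of that idea — the filename, function name, and example notes are all illustrative, not a prescribed format:

```python
from datetime import date
from pathlib import Path

LOG = Path("session_log.md")  # illustrative filename; keep it wherever suits you

def log_session(topic: str, observations: list[str]) -> None:
    """Append one session's AI debrief notes under a dated heading."""
    lines = [f"\n## {date.today().isoformat()}: {topic}\n"]
    lines += [f"- {obs}\n" for obs in observations]
    with LOG.open("a", encoding="utf-8") as f:
        f.writelines(lines)

# Example entry after a session's debrief (content is hypothetical):
log_session("procurement policy", [
    "Questions focused on implementation before intent",
    "Suggested next: ground in the originating policy rationale",
])
```

Every few weeks, paste the file's contents into a session and ask the meta-pattern question from above. The point isn't the tooling — a notes app works just as well — it's that the record exists somewhere you'll actually revisit.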
Over time, that document becomes something more than a prompt library. It's a map of how your thinking on a topic has developed — useful not just for shaping future AI sessions, but for your own professional development more broadly. If you're onboarding to a new file, it can help you get up to speed faster. If you're preparing for a conversation with a subject matter expert, it can help you identify the right questions to ask. The effort of keeping it is low; the return compounds.
One thing worth acknowledging: AI capabilities are changing quickly, and some of what feels like a limitation today — the need to re-establish context at the start of each session, for example — may be less of a constraint as tools improve. That's fine. The techniques in this post don't depend on AI staying the way it is now.
What stays constant is the underlying discipline. Knowing how you learn, being able to articulate what kind of support helps you move forward, and having a record of how your thinking on a topic has developed — those are useful regardless of what tools you're using. The AI is doing the observing and reflecting, but the insight it produces is about you. That's the part that transfers.
If you haven't tried the core loop yet, start there. The extensions in this post are most useful once you have a feel for how the setup, work, and debrief sequence actually plays out in practice — layering them in before that can make the whole thing feel more complicated than it is.
Once the core loop feels natural, pick one technique from this post that fits the kind of work you're doing and try it for a few sessions. Session Archaeology is worth starting early — even a rough running document is better than nothing, and it gets more useful the longer you keep it.