Teaching AI to Teach You Better

Most people use AI to get answers. That's useful, but it's only half of what's available. If you're using AI regularly for work, you're sitting on a source of feedback about your own thinking that most people never tap — a record of what you asked, how you framed it, and what that reveals about where your understanding is solid and where it isn't. This post is about how to use that.

The technique is simple. Instead of just asking AI questions and moving on, you ask it to pay attention to how you're asking, and at the end of a session, to tell you what that reveals about your understanding. Then you carry those insights forward as context for your next session. By default, AI tools don't retain memory of you between sessions, though some tools and configurations are starting to change that. Either way, this isn't about training the model or working around a technical limitation. It's about developing a clearer picture of yourself as a learner: getting better at describing your own context and specific learning needs, and putting that description to work.

The technique has three steps, and the sequence matters.

Before you start working, tell the AI what you're trying to learn and ask it to pay attention as you go. Something like: "I'm working on [topic]. At the end of our conversation, I'd like you to assess what my questions reveal about gaps in my understanding. What should I focus on to grasp the fundamentals?" You don't need to know what you don't know yet — that's the point. You're just setting up the observation.

Then work normally. Ask your questions, explore the problem, do the task. Don't perform for the AI or try to ask "smart" questions — the value of this technique comes from letting the AI observe how you actually think through a problem.

At the end of the session, ask the AI to reflect back what it noticed: "Based on our conversation today, what patterns do you notice in how I approached this topic? What instructions should I give you at the start of my next session to support my learning better?" The AI will identify themes in your questions — concepts you kept circling back to, foundational ideas you may have skipped over, assumptions you made without realizing it. You take that output, refine it into a short description of your context and specific learning needs, and bring it with you to your next session as your opening prompt.
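If you work with AI through an API rather than a chat window, the loop is easy to make mechanical. Below is a minimal sketch in Python using the OpenAI SDK; the model name, the file name, and the `ask` helper are illustrative assumptions, and the same pattern works with any chat API.

```python
from pathlib import Path

from openai import OpenAI  # pip install openai; reads OPENAI_API_KEY from the environment

# The debrief carried forward between sessions (assumed file name).
PROFILE = Path("learning_profile.txt")

SETUP = (
    "I'm working on [topic]. At the end of our conversation, I'd like you "
    "to assess what my questions reveal about gaps in my understanding."
)
DEBRIEF = (
    "Based on our conversation today, what patterns do you notice in how I "
    "approached this topic? What instructions should I give you at the "
    "start of my next session to support my learning better?"
)

client = OpenAI()
MODEL = "gpt-4o"  # illustrative; use whatever model you have access to

def ask(messages: list[dict]) -> str:
    """Send the running conversation and return the model's reply."""
    response = client.chat.completions.create(model=MODEL, messages=messages)
    return response.choices[0].message.content

# Step 1: open with the setup prompt, plus any debrief saved last time.
opening = SETUP
if PROFILE.exists():
    opening += "\n\nContext from my previous session:\n" + PROFILE.read_text()
messages = [{"role": "user", "content": opening}]
messages.append({"role": "assistant", "content": ask(messages)})

# Step 2: work normally -- ask your real questions here.
while (question := input("You (blank to finish): ").strip()):
    messages.append({"role": "user", "content": question})
    reply = ask(messages)
    print(reply)
    messages.append({"role": "assistant", "content": reply})

# Step 3: request the debrief and save it as next session's opening context.
messages.append({"role": "user", "content": DEBRIEF})
PROFILE.write_text(ask(messages))
print(f"Debrief saved to {PROFILE}. Refine it before your next session.")
```

Nothing here depends on scripting, though. Copying the debrief into a note and pasting it into your next chat does the same job; the script only makes the carry-forward step explicit.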

Here's what this looks like in practice.

Say you're a policy analyst who has just been assigned to a file on a regulatory framework you've never worked with before. You open an AI session and start with something like: "I'm getting up to speed on [regulatory area]. At the end of our conversation, assess what my questions reveal about gaps in my understanding. What should I focus on to grasp the fundamentals?" Then you work through the file — asking about key definitions, how the framework is structured, how it interacts with related legislation. At the end, you ask the AI to reflect back what it noticed.

It might tell you something like: you have a reasonable grasp of the surface-level provisions but kept asking questions that suggest you're missing the policy intent behind them. You asked several times about exceptions without first establishing what the general rule was. You may want to start your next session by asking the AI to explain the framework's purpose and history before diving into its mechanics.

At that point, you can ask the AI to go one step further: "Based on what you've just told me, draft a short description of my context and learning needs that I can use to open my next session." You refine it, save it, and open your next session with it: "I'm continuing to work on [regulatory area]. I tend to focus on provisions before I've established policy intent — please flag when I'm doing this. Start by giving me a brief overview of the framework's purpose and history before we get into the details."

That next session will be more focused than the first. Not because the AI changed, but because you arrived with better context.

The same technique works just as well when you're learning a new tool rather than a new subject area. Say you've been asked to pick up Power BI or Tableau to analyze program metrics. After a few sessions working through calculated fields, filters, and dashboard design, the AI might identify that you understand individual chart types but lack a mental model for how data relationships work underneath them. It might note that you're making design choices without thinking about your audience. You ask the AI to turn that into an opening prompt for your next session, and you arrive with that context already on the table: "I need to understand data relationships before I tackle visualization. When I ask questions, flag if I'm making design choices without considering end users. I need 'why' explanations, not just 'how.'"

The specific domain doesn't matter much. What matters is that you're not starting from zero each time.

What you're building here is more than a collection of useful prompts. Each session adds to a clearer picture of how you learn — where you tend to skip steps, what kinds of explanations land for you, which foundational concepts you're still shaky on. Over time, that picture becomes a genuine professional development asset: a description of your own learning needs that you can refine and reuse across different topics and tools.
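If you want that picture to accumulate rather than be overwritten each session, one low-tech option is a dated log of debriefs. A minimal sketch, assuming a hypothetical append_debrief helper and file name:

```python
from datetime import date
from pathlib import Path

LOG = Path("learning_log.txt")  # assumed file name: one dated entry per session

def append_debrief(topic: str, debrief: str) -> None:
    """Append today's debrief so earlier entries stay visible."""
    entry = f"\n--- {date.today().isoformat()} | {topic} ---\n{debrief}\n"
    with LOG.open("a", encoding="utf-8") as f:
        f.write(entry)

# Example: after a session on a new regulatory file.
append_debrief(
    "regulatory framework X",
    "Tends to ask about exceptions before establishing the general rule; "
    "start next session with purpose and history before mechanics.",
)
```

Re-reading a log like this every few weeks is what surfaces the slower-moving patterns: the habits that recur across topics and the ones you've outgrown.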

This matters in a government context because formal training opportunities aren't always available when you need them. You might be assigned to a new file with two weeks to get up to speed, or asked to produce analysis using a tool you've never used before. The session debrief technique won't replace structured learning, but it can make self-directed learning considerably more systematic. Instead of working through a subject area by trial and error, you're actively tracking what you know, what you're missing, and what kind of support helps you move forward.

The other thing worth naming is that this discipline — getting precise about your own learning needs and articulating them clearly — stays valuable regardless of how AI tools evolve. Capabilities will change, and the specific prompts that work well today may need to be adapted as models improve. But knowing how you learn, and being able to describe that clearly, is a skill that transfers.

If you try this technique once, you'll learn something useful about how you think and what you need in order to learn effectively. If you use it consistently, you'll start to notice patterns in your own learning that you wouldn't have spotted otherwise, and your opening prompts will get sharper each time.

The core loop is the foundation. But once you have it working, there are a handful of techniques that extend it further — ways to catch your own assumptions mid-session, test whether you've actually understood something, and build a longer-term record of how your thinking on a topic has developed. That's what Post 2 covers.