Posts

Extending the Loop: More Ways to Learn with AI

The core technique in Post 1 is straightforward: set up the observation, do your work, ask the AI to reflect back what it noticed. Do that consistently and you'll build a description of your own learning needs that makes each session sharper than the last. That loop is worth getting comfortable with before adding anything else. But once it's working, there are a handful of techniques that extend it — ways to catch your assumptions before they lead you astray, test whether you've actually understood something, and build a longer-term record of how your thinking on a topic develops over time. None of them require much extra effort, and they all work the same way the core loop does: by turning the AI's attention toward how you're thinking, not just what you're asking. The first technique is what you might call an Assumption Auditor. Partway through a complex piece of work, pause and ask the AI: "What assumptions am I making that I should verify? What have I ...
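A minimal sketch of what the Assumption Auditor can look like as a reusable template. The wording and helper name below are illustrative, not the post's exact prompt:

```python
# A minimal sketch of an "Assumption Auditor" prompt you can paste in partway
# through a piece of work. The function name and wording are illustrative
# assumptions, not taken from the post.
def assumption_audit_prompt(work_so_far: str) -> str:
    """Build a mid-session prompt asking the AI to surface unverified assumptions."""
    return (
        "Pause and audit my thinking on this task.\n"
        "What assumptions am I making that I should verify?\n"
        "Which of them, if wrong, would change my conclusions?\n\n"
        f"Here is where the work stands:\n{work_so_far}"
    )

print(assumption_audit_prompt("Comparing two delivery options for the grant program."))
```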

Teaching AI to Teach You Better

Most people use AI to get answers. That's useful, but it's only half of what's available. If you're using AI regularly for work, you're sitting on a source of feedback about your own thinking that most people never tap — a record of what you asked, how you framed it, and what that reveals about where your understanding is solid and where it isn't. This post is about how to use that. The technique is simple. Instead of just asking AI questions and moving on, you ask it to pay attention to how you're asking — and at the end of a session, to tell you what that reveals about your understanding. Then you carry those insights forward as context for your next session. AI tools don't retain memory of you between sessions by default, though some tools and configurations are starting to change that. Either way, the value here isn't about working around a technical limitation — it's about developing a clearer picture of yourself as a learner. This isn't...
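As one way to picture the carry-forward step, here is a minimal sketch, assuming a plain-text learning_log.txt (a hypothetical file, not something the post prescribes): save the end-of-session reflection, then attach it as context when you open the next session.

```python
# A minimal sketch of carrying session-end insights forward. The file name,
# function names, and prompt wording are illustrative assumptions.
from pathlib import Path

LOG = Path("learning_log.txt")

def save_reflection(reflection: str) -> None:
    """Append what the AI said about how you framed questions this session."""
    with LOG.open("a", encoding="utf-8") as f:
        f.write(reflection.strip() + "\n---\n")

def open_next_session(first_question: str) -> str:
    """Build the opening prompt of a new session with past insights attached."""
    context = LOG.read_text(encoding="utf-8") if LOG.exists() else "(no log yet)"
    return (
        "Context about me as a learner, from earlier sessions:\n"
        f"{context}\n"
        "As we work, pay attention to how I frame my questions, and at the end "
        "tell me what that reveals about my understanding.\n\n"
        f"{first_question}"
    )
```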

Who Did We Actually Ask? Representativeness Explained

A program area receives survey results. The report says: 73% of people in B.C. support this policy direction. It sounds like evidence. It lands in a ministerial briefing, gets cited in committee, and shapes how the next budget is framed. Nobody in that chain of handoffs — from panel provider to consulting firm to program area — is set up to ask who those people were, or how they ended up in the survey. That question might seem procedural. It isn't. It's where the credibility of the finding rests. Most government program-area surveys use commercial research panels: pools of people who signed up to take surveys in exchange for small rewards. Nobody randomly selected them from the provincial population. They opted in. That single fact — how participants were chosen — determines whether calling them "people in B.C." is accurate or whether it's a claim the research can't support. There's also a structural reason this ...
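A back-of-the-envelope sketch of the mechanism, with every number invented for illustration: if willingness to join an opt-in panel is correlated with the opinion being measured, the panel proportion can sit far from the population proportion, and a larger sample does not fix it.

```python
# Toy numbers only: hypothetical true support and hypothetical panel join
# rates, chosen to show the mechanism of selection bias, not to explain any
# real survey result.
true_support = 0.50          # assumed share of the province that supports the policy
join_rate_supporters = 0.08  # assumed chance a supporter joins the panel
join_rate_opponents = 0.03   # assumed chance an opponent joins the panel

panel_supporters = true_support * join_rate_supporters
panel_opponents = (1 - true_support) * join_rate_opponents
panel_estimate = panel_supporters / (panel_supporters + panel_opponents)

print(f"True support: {true_support:.0%}")      # 50%
print(f"Panel estimate: {panel_estimate:.0%}")  # 73%
```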

Making Persona Review Part of Your Process

You've seen how personas work. You've used them to catch problems in your own work. The harder question is how to make this a regular practice instead of a one-time exercise. Start small. Three to five personas representing populations your ministry actually serves — or populations who struggle with your current services — are enough to start. Resist the urge to cover every possible BC resident before you've used any personas at all. In the first month, build your core personas from data you already have: service usage stats, consultation records, demographic information, feedback from frontline staff. Focus on diversity of access needs rather than demographic variety — a persona representing someone without reliable internet reveals different barriers than one representing someone without English fluency, even if both are "underserved." In months two and three, put those personas to work on existing documents and services. Note which insights were most valuable. R...

Putting Personas to Work

You've built your personas. Here's what using them actually looks like. The three scenarios below are based on real government projects. In each case, standard review processes checked the logic and missed the barriers. A 30-minute persona review caught what weeks of internal testing didn't. Testing a New Online Application Service BC was redesigning their BC Services Card renewal process. The new system looked streamlined and modern. Internal testing went smoothly. Then someone asked how Margaret Chen would experience it. Margaret is 73, lives in Victoria, uses her smartphone for most things but gets frustrated when technology doesn't behave predictably. She's willing to learn but needs clear instructions. The team prompted their AI tool: Review this BC Services Card renewal process as Margaret Chen. Margaret: 73, Victoria, smartphone user, no computer, gets frustrated with unpredictable tech, doesn't know technical terms like "cache" or "sessio...
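One way to make reviews like this repeatable is to keep each persona as a small structured record and generate the review prompt from it. A minimal sketch, with the persona details paraphrased from the scenario above and the field names and prompt wording invented:

```python
# A minimal sketch of personas as reusable records. The class, field names,
# helper, and exact prompt wording are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    details: str

margaret = Persona(
    name="Margaret Chen",
    details=("73, lives in Victoria, uses a smartphone but no computer, gets "
             "frustrated when technology behaves unpredictably, doesn't know "
             "technical jargon, needs clear instructions"),
)

def persona_review_prompt(persona: Persona, material: str) -> str:
    """Ask the AI to review a process or document from one persona's perspective."""
    return (
        f"Review the following as {persona.name}.\n"
        f"{persona.name}: {persona.details}\n"
        "Walk through it step by step and flag anything that would confuse, "
        "frustrate, or stop this person.\n\n"
        f"{material}"
    )
```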

Numbers You Can Defend - Gov Analyst's Verification Checklist

We've covered why the AI makes errors (Post 1), how to prevent them with better prompts (Post 2), and the six most common error patterns (Post 3). All of that knowledge is useful. None of it helps when you're under deadline pressure and skipping steps anyway. That's what this checklist is for. It's a verification routine you can follow for any AI-assisted quantitative analysis — the systematic checks that make your numbers defensible before they land on your director's desk. Pre-Analysis: Before You Start Before you ask the AI for any instructions, get four things clear. First, ask the AI to identify your actual column names. Don't let it assume — make it tell you what it sees. Second, confirm the data types for your key columns. Is "ResponseDate" actually formatted as a date, or is it text that looks like a date? Third, identify data quality issues upfront. Blanks, outliers, inconsistencies — find them now, not after you've built you...
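If your data is in a spreadsheet export, the same pre-analysis checks take only a few lines to run yourself before you hand anything to the AI. A minimal sketch, assuming a hypothetical survey_responses.csv with a ResponseDate column like the one mentioned above:

```python
# A minimal sketch of the pre-analysis checks, assuming a hypothetical
# "survey_responses.csv" export; adjust names to your actual file and columns.
import pandas as pd

df = pd.read_csv("survey_responses.csv")

# 1. Actual column names: look, don't assume.
print(df.columns.tolist())

# 2. Data types for key columns: is ResponseDate really a date, or text that looks like one?
print(df.dtypes)
dates = pd.to_datetime(df["ResponseDate"], errors="coerce")
print("Rows where ResponseDate won't parse as a date:", dates.isna().sum())

# 3. Data quality issues upfront: blanks, ranges, obvious outliers.
print(df.isna().sum())
print(df.describe(include="all"))
```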

Six Ways AI Gets Your Numbers Wrong (And How to Catch Them)

Even when you use the "Test First, Implement Second" technique from Post 2, AI can still make subtle errors. These errors follow predictable patterns. I've seen six of them show up again and again in government analytical work. Once you recognize them, you'll catch problems before they make it into your reports or briefing notes. This post covers how each pattern shows up and what it costs you. Here are the six patterns: The Invisible Filter Problem – AI filters data then forgets that filter exists in later calculations; Column Name Hallucination – AI assumes common column names instead of using actual headers; Denominator Drift – Percentage calculations use the wrong baseline after data transformations; Aggregation Level Mismatch – Instructions jump between data levels (regional to provincial) without proper grouping steps; Formula Propagation Errors – Formulas don't adjust correctly when copied down through datas...
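To make one of these concrete, here is a toy sketch (all data invented) of the Invisible Filter Problem sliding into Denominator Drift: a completeness filter applied early quietly becomes the baseline for a percentage later reported as if it covered all responses.

```python
# Invented data illustrating how a filter applied early can silently change
# the denominator of a later percentage.
import pandas as pd

df = pd.DataFrame({
    "region": ["Interior", "Interior", "Island", "Island", "North"],
    "satisfied": [True, None, True, False, True],
})

complete = df.dropna(subset=["satisfied"])   # the "invisible" filter

share_of_completes = complete["satisfied"].sum() / len(complete)  # 3 of 4
share_of_all = complete["satisfied"].sum() / len(df)              # 3 of 5

print(f"Satisfied, as a share of complete responses: {share_of_completes:.0%}")  # 75%
print(f"Satisfied, as a share of all responses:      {share_of_all:.0%}")        # 60%
# Neither number is wrong on its own; the error is reporting one while
# describing the other.
```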