Posts

Making Persona Review Part of Your Process

You've seen how personas work. You've used them to catch problems in your own work. The harder question is how to make this a regular practice instead of a one-time exercise.

Start small. Three to five personas representing populations your ministry actually serves — or populations who struggle with your current services — are enough to begin with. Resist the urge to cover every possible BC resident before you've used any personas at all.

In the first month, build your core personas from data you already have: service usage stats, consultation records, demographic information, feedback from frontline staff. Focus on diversity of access needs rather than demographic variety — a persona representing someone without reliable internet reveals different barriers than one representing someone without English fluency, even if both are "underserved."

In months two and three, put those personas to work on existing documents and services. Note which insights were most valuable. R...

Putting Personas to Work

You've built your personas. Here's what using them actually looks like. The three scenarios below are based on real government projects. In each case, standard review processes checked the logic and missed the barriers. A 30-minute persona review caught what weeks of internal testing didn't.

Testing a New Online Application

Service BC was redesigning its BC Services Card renewal process. The new system looked streamlined and modern. Internal testing went smoothly. Then someone asked how Margaret Chen would experience it.

Margaret is 73, lives in Victoria, uses her smartphone for most things, but gets frustrated when technology doesn't behave predictably. She's willing to learn but needs clear instructions.

The team prompted their AI tool: Review this BC Services Card renewal process as Margaret Chen. Margaret: 73, Victoria, smartphone user, no computer, gets frustrated with unpredictable tech, doesn't know technical terms like "cache" or "sessio...

Numbers You Can Defend: A Gov Analyst's Verification Checklist

We've covered why the AI makes errors (Post 1), how to prevent them with better prompts (Post 2), and the six most common error patterns (Post 3). All of that knowledge is useful. None of it helps when you're under deadline pressure and skipping steps anyway. That's what this checklist is for. It's a verification routine you can follow for any AI-assisted quantitative analysis — the systematic checks that make your numbers defensible before they land on your director's desk.

Pre-Analysis: Before You Start

Before you ask the AI for any instructions, get four things clear. First, ask the AI to identify your actual column names. Don't let it assume — make it tell you what it sees. Second, confirm the data types for your key columns. Is "ResponseDate" actually formatted as a date, or is it text that looks like a date? Third, identify data quality issues upfront. Blanks, outliers, inconsistencies — find them now, not after you've built you...
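The first three of those checks can be made mechanical before any AI instructions are implemented. Here's a minimal Python sketch using an invented in-memory CSV; the column names (RespondentID, ResponseDate, Region, Score) and values are illustrative, not from a real dataset:

```python
import csv
import io
from datetime import datetime

# Stand-in for a real survey export; columns and values are invented.
raw = """RespondentID,ResponseDate,Region,Score
1001,2024-03-01,Interior,4
1002,2024-03-02,Fraser,
1003,not recorded,Interior,5
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# Check 1: the column names the file actually has, not assumed ones.
columns = list(rows[0].keys())
print("Columns:", columns)

# Check 2: is ResponseDate really a date, or text that looks like one?
def is_iso_date(value):
    try:
        datetime.strptime(value, "%Y-%m-%d")
        return True
    except ValueError:
        return False

bad_dates = [r["RespondentID"] for r in rows if not is_iso_date(r["ResponseDate"])]
print("Rows with a non-date ResponseDate:", bad_dates)

# Check 3: blanks in a key column, found before the analysis, not after.
blank_scores = sum(1 for r in rows if r["Score"] == "")
print("Blank Score values:", blank_scores)
```

The same checks scale to a real file by swapping the `io.StringIO` for `open(path)`.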

Six Ways AI Gets Your Numbers Wrong (And How to Catch Them)

Even when you use the "Test First, Implement Second" technique from Post 2, AI can still make subtle errors. These errors follow predictable patterns: I've spotted six that show up again and again in government analytical work. Once you recognize them, you'll catch problems before they make it into your reports or briefing notes. This post covers how each pattern shows up and what it costs you.

Here are the six patterns:

- The Invisible Filter Problem – AI filters data, then forgets that filter exists in later calculations
- Column Name Hallucination – AI assumes common column names instead of using actual headers
- Denominator Drift – Percentage calculations use the wrong baseline after data transformations
- Aggregation Level Mismatch – Instructions jump between data levels (regional to provincial) without proper grouping steps
- Formula Propagation Errors – Formulas don't adjust correctly when copied down through datas...

Building Personas That Actually Help

You don't need a psychology degree or expensive research budget to build useful personas. You need real data about BC's population and about 30 minutes to think through who struggles with your work.

What Makes a Good Persona

Part 1 showed why personas matter—they help you catch barriers before launch, not after complaints arrive. A useful persona has five components. You need demographics—age, location, family situation, employment. You need context: digital access, language comfort, mobility, income level. You capture what they're trying to accomplish with government services and what obstacles they face. You include a quote that captures their perspective. That's it. You're not writing a novel. You're creating a reference point that helps you spot problems.

Four BC Personas You Can Use Right Now

Margaret Chen is a 73-year-old retiree living alone in Victoria on a fixed income. She has a smartphone but no computer, gets uncertain about new technology, th...

The Test First, Implement Second Technique

In the last post, we saw how AI can give plausible-sounding instructions that contain hidden errors: filtered data that later steps ignore, column names that don't exist in your file, denominators that don't match your actual dataset, and so on. The traditional approach is to ask AI for instructions, then verify the results after you've implemented everything. But by then, you've already invested time following potentially flawed steps.

The trick is to make AI verify its instructions before you implement anything. Instead of asking "How do I analyze this data?" tell it to "Test your proposed approach on my actual data, then give me verified instructions." This shifts AI from pattern-matching mode into verification mode. You're forcing it to ground its instructions in your specific file structure before it tells you what to do. The technique forces AI to examine your file structure, validate its proposed approach against your real data...
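The column-name part of that verification can also be done without the AI at all. A minimal sketch, assuming a hypothetical file whose headers (Svc_Region, Resp_Date, Sat_Score) differ from the names the AI's instructions referenced; every name here is invented for illustration:

```python
import csv
import io

# Stand-in for your real export; headers and values are invented.
raw = "Svc_Region,Resp_Date,Sat_Score\nInterior,2024-03-01,4\n"

# What the file actually contains.
actual_columns = next(csv.reader(io.StringIO(raw)))

# Column names the AI's proposed instructions referenced (hypothetical).
proposed_columns = ["Region", "Date", "Satisfaction"]

missing = [c for c in proposed_columns if c not in actual_columns]
if missing:
    print("Stop before implementing: instructions reference columns "
          "not in the file:", missing)
```

If `missing` is non-empty, the instructions were pattern-matched against a file the AI imagined, not the one on your desk.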