Putting Personas to Work
You've built your personas. Here's what using them actually looks like.
The three scenarios below are based on real government projects. In each case, standard review processes checked the logic and missed the barriers. A 30-minute persona review caught what weeks of internal testing didn't.
Testing a New Online Application
Service BC was redesigning its BC Services Card renewal process. The new system looked streamlined and modern. Internal testing went smoothly.
Then someone asked how Margaret Chen would experience it.
Margaret is 73, lives in Victoria, uses her smartphone for most things but gets frustrated when technology doesn't behave predictably. She's willing to learn but needs clear instructions.
The team prompted their AI tool:
Review this BC Services Card renewal process as Margaret Chen. Margaret: 73, Victoria, smartphone user, no computer, gets frustrated with unpredictable tech, doesn't know technical terms like "cache" or "session timeout." Walk through each step and identify where Margaret would get stuck, confused, or give up. [Pasted the application flow]
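For teams that want to repeat this kind of review across many documents, the same prompt can be scripted. What follows is a minimal sketch, assuming the openai Python package with an API key in the environment; the model name and the persona_review helper are illustrative, not part of the team's actual process:

```python
# Minimal sketch of scripting a persona review against a chat-style LLM API.
# Assumes the `openai` package and an OPENAI_API_KEY environment variable;
# the model name and the helper function are illustrative.
from openai import OpenAI

client = OpenAI()

MARGARET = (
    "Margaret Chen: 73, lives in Victoria, smartphone user, no computer, "
    "gets frustrated with unpredictable tech, doesn't know technical terms "
    "like 'cache' or 'session timeout'."
)

def persona_review(persona: str, material: str) -> str:
    """Ask the model to walk through `material` as the given persona."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable chat model works
        messages=[
            {"role": "system",
             "content": f"You are reviewing a government service as this persona: {persona}"},
            {"role": "user",
             "content": "Walk through each step and identify where this person "
                        "would get stuck, confused, or give up.\n\n" + material},
        ],
    )
    return response.choices[0].message.content

# Example: print(persona_review(MARGARET, open("renewal_flow.txt").read()))
```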
The AI came back with four problems. The form timed out after five minutes of inactivity—Margaret reads carefully and would lose everything. The error message read "Session expired—clear your cache and try again," which she wouldn't know how to fix. There was no phone support, just a link to a 12-page FAQ. The photo upload failed with "file too large" but offered no way to resize.
They ran the same process with James Blackwater. He lives in Northern BC where internet is unreliable, works shifts, and can't always complete a form in one sitting. The application had no save-and-resume function, so a dropped connection meant starting over. The site copy promised users could "complete this quickly and conveniently from home"—which assumed reliable high-speed internet James doesn't have.
Before launch, the team extended the timeout to 15 minutes with a warning at 12, rewrote error messages in plain language, added a prominent phone support option, built in save-and-resume, added automatic image resizing, and created an offline PDF option for severe connectivity issues.
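One of those fixes, automatic image resizing, takes only a few lines server-side. A sketch assuming the Pillow imaging library; the size limit and function name are illustrative:

```python
# Sketch: shrink an uploaded photo to fit the form's size limit
# instead of rejecting it with "file too large".
# Assumes Pillow (pip install Pillow); MAX_DIMENSION is illustrative.
from io import BytesIO
from PIL import Image

MAX_DIMENSION = 1200  # px on the longest side, illustrative

def resize_upload(data: bytes) -> bytes:
    """Return the uploaded image as a JPEG within MAX_DIMENSION."""
    img = Image.open(BytesIO(data))
    img.thumbnail((MAX_DIMENSION, MAX_DIMENSION))  # preserves aspect ratio
    out = BytesIO()
    img.convert("RGB").save(out, format="JPEG", quality=85)
    return out.getvalue()
```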
The service launched with an 85% completion rate, compared to 60% typical for similar services. The two-week delay to make these changes saved months of post-launch fixes.
Reviewing a Housing Policy
A ministry was drafting policy for affordable housing incentives. The economic analysis looked solid. Consultation with developers and municipalities was planned—but before that, the team ran it past their personas.
They started with Amira Hassan. Amira arrived from Syria 18 months earlier with two young children and is still building confidence in English. The policy used terms like "housing starts" and "density bonuses" without explanation. It referenced "affordable units" without defining affordable. The application required proof of income for the last two years and demonstrated housing need—documentation Amira couldn't provide after 18 months in Canada. Key materials weren't available in Arabic, and the English was highly technical.
The policy, it turned out, was only navigable by someone already fluent in housing policy language.
Then they looked at it through Dev Sharma's eyes. Dev runs a small business in Kelowna and uses a wheelchair. The policy mentioned "accessible housing units" but didn't define what that meant in practice—modern accessibility standards or minimum technical compliance? The information website included a video with no captions. The application form was a PDF that screen readers couldn't parse.
Accessibility had been treated as a checkbox. The information delivery itself wasn't accessible.
Before the cabinet briefing, the team made six revisions: they added a plain-language summary ("This program helps people earning less than $X/year find housing that costs less than Y% of their income"), clarified eligibility with specific income thresholds and documentation alternatives for newcomers, defined accessibility with links to specific building standards, created application paths by phone, in person, and by mail in addition to online, translated key documents into BC's five most common languages, and made all forms screen-reader compatible, with captions added to videos.
The cabinet briefing included a substantive equity considerations section. The approach was flagged as an example of thorough policy development.
Evaluating Vendor Proposals
A ministry needed to select a vendor for a new public consultation platform. Three vendors submitted proposals. Standard evaluation covered technical specs, cost, and experience.
The team also ran each proposal through their personas.
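Run systematically, that review is just a loop: every proposal past every persona. A sketch reusing the persona_review helper from the first scenario; the condensed persona summaries and file paths here are illustrative:

```python
# Sketch: run each vendor proposal past each persona and collect findings.
# persona_review() and MARGARET come from the earlier sketch; the other
# persona summaries and the file paths are illustrative.
JAMES = ("James Blackwater: Northern BC, unreliable internet, shift worker, "
         "can't always finish a form in one sitting.")
AMIRA = ("Amira Hassan: arrived from Syria 18 months ago, two young children, "
         "still building confidence in English.")

PERSONAS = {"Margaret Chen": MARGARET,
            "James Blackwater": JAMES,
            "Amira Hassan": AMIRA}

findings = {}
for vendor in ("vendor_a", "vendor_b", "vendor_c"):
    with open(f"proposals/{vendor}.txt") as f:
        proposal = f.read()
    for name, persona in PERSONAS.items():
        findings[(vendor, name)] = persona_review(persona, proposal)
```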
Vendor A's platform looked polished but had small buttons and complex menus, required account creation with email verification, and offered a 45-minute video as the only getting-started guide. Margaret Chen wouldn't sit through that—and there was no phone or paper participation option.
Vendor B's interactive features required high-speed internet, and live sessions were scheduled during business hours. James Blackwater couldn't participate: wrong hours, insufficient bandwidth, no offline option.
Vendor C offered translation for written submissions, but videos were neither translated nor captioned. Discussion forums used language like "asynchronous engagement." Amira Hassan would struggle despite the translation feature.
Rather than eliminating vendors outright, the team built follow-up questions into the next evaluation stage: How would Margaret participate without creating an account? How would James engage with spotty internet during shift work? What support would Amira receive as a first-time user?
The selected vendor committed to specific accessibility features, offline participation options, and simplified onboarding. The contract included persona-based user testing as acceptance criteria—so "accessible" wasn't just a promise in the proposal.
The platform served diverse populations from launch instead of requiring costly retrofits after complaints arrived.
The pattern across all three scenarios is the same. Standard review checked technical correctness, policy logic, and cost. Personas revealed that technically correct solutions still created real barriers for real people.
Margaret, James, Amira, and Dev aren't edge cases—they represent substantial segments of BC's population. Catching these issues before launch took 30 to 60 minutes per review. Fixing them early took hours or days. Fixing them after launch would have taken weeks or months, and cost considerably more than the time invested upfront.
Try it with something you're working on right now. Pick one persona from Part 2 and spend 30 minutes looking at your work through their eyes.
Part 4 covers building your ministry's persona library, avoiding common pitfalls, and making persona review a standard part of your process.