Who Did We Actually Ask? Representativeness Explained

A program area receives survey results. The report says: "73% of people in B.C. support this policy direction." It sounds like evidence. It lands in a ministerial briefing, gets cited in committee, and shapes how the next budget is framed.

Nobody in that chain of handoffs — from panel provider to consulting firm to program area — is set up to ask who those people were, or how they ended up in the survey. That question might seem procedural. It isn't. It's where the credibility of the finding rests.

Most government program-area surveys use commercial research panels: pools of people who signed up to take surveys in exchange for small rewards. Nobody randomly selected them from the provincial population. They opted in. That single fact — how participants were chosen — determines whether calling them "people in B.C." is accurate or whether it's a claim the research can't support.

There's also a structural reason this question doesn't tend to surface. The procurement chain that produces most government survey research — program area to consulting firm to commercial panel provider — isn't designed to ask it. Each party delivers what the next expects: confident, actionable findings in the language of the commercial research industry. A finding stripped of its qualifier is easier to cite, easier to summarize, and easier to sell upward. Precision requires more words and invites more questions. But when that finding drives real program and budget decisions, overclaiming it carries costs — and they tend to surface at the worst possible moment.

The word "representative" is at the centre of this. In ordinary conversation it sounds like a broad sample: respondents from across the province, different age groups, different backgrounds. In survey methodology it means something more specific and more demanding. A sample is representative when every member of the target population had a known chance of being selected — not just a theoretical one, but a real, calculable probability built into the design. This is not about geographic spread. It is not about demographic diversity. It is not about sample size. It is about the mechanism of selection.

You could survey 5,000 people drawn from every corner of B.C. and still have a non-representative sample if those people self-selected into a panel and volunteered to participate. Breadth of coverage and representativeness aren't the same thing. One describes who you reached. The other describes how you reached them — and that's what determines whether findings can speak to the broader population.
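A small simulation makes the point concrete. Everything in it is invented for illustration: a population of one million where true support is 50%, and an assumption that supporters are twice as likely as non-supporters to join an opt-in panel. Under those assumptions, a probability sample of 800 lands within a few points of the true figure, while an opt-in sample of 5,000 lands near two-thirds. Size never fixes the selection mechanism.

```python
# Illustrative sketch only. The population size, the true support rate,
# and the "supporters are twice as likely to opt in" assumption are all
# invented for this example.
import random

random.seed(1)

POP_SIZE = 1_000_000
TRUE_SUPPORT = 0.50
# True means "supports the policy", False means does not.
population = [random.random() < TRUE_SUPPORT for _ in range(POP_SIZE)]

def probability_sample(pop, n):
    """Every member has the same known, non-zero chance of selection."""
    return random.sample(pop, n)

def opt_in_sample(pop, n, supporter_weight=2.0):
    """Self-selection: supporters are (by assumption) twice as likely to
    volunteer. Sampling is with replacement, which is fine for illustration."""
    weights = [supporter_weight if person else 1.0 for person in pop]
    return random.choices(pop, weights=weights, k=n)

def pct(sample):
    return 100 * sum(sample) / len(sample)

print(f"True support in the population: {100 * TRUE_SUPPORT:.0f}%")
print(f"Probability sample, n=800:      {pct(probability_sample(population, 800)):.1f}%")
print(f"Opt-in sample, n=5,000:         {pct(opt_in_sample(population, 5000)):.1f}%")
```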

Most commercial panels are non-probability samples. That doesn't make them useless. They can provide real directional insight with genuine value for early-stage policy thinking and program design. But they can't support the claim that results generalize to all people in B.C. The method and the language need to match.

A related misconception is that hitting demographic quotas makes a sample representative. It doesn't. Quotas ensure you have enough respondents in specific groups to analyze them separately — say, enough respondents from northern B.C. to report on that region without folding it into provincial totals. When a survey falls short of its quotas, it loses the ability to say much about those groups with any confidence — not because it became less representative, but because there simply aren't enough respondents from those groups to draw on.

The people filling those quotas are still self-selected panel members. Reaching a target count of northern B.C. respondents means you have enough to analyze — not that those respondents accurately reflect northern B.C. residents generally. Calling a survey "less representative" when it misses quotas conflates two different problems. The survey wasn't representative to begin with, in the technical sense. A survey that meets all its quotas is a more analytically complete non-probability sample. It isn't a different kind of sample.
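To see the two problems separately, a second sketch (same invented assumptions, plus a fictional "northern" subgroup whose true support is set at 40%) fills the quota at two different sizes. A larger quota narrows the uncertainty band around the estimate, which is what quotas actually buy you; it does nothing about the gap between the estimate and the true figure, because the respondents are still self-selected.

```python
# Illustrative sketch only: the "northern" subpopulation, its true support
# rate, and the self-selection assumption are invented for this example.
import math
import random

random.seed(2)

NORTH_TRUE_SUPPORT = 0.40
north_pop = [random.random() < NORTH_TRUE_SUPPORT for _ in range(100_000)]

def opt_in_sample(pop, n, supporter_weight=2.0):
    """Same self-selection assumption as before: supporters are twice as
    likely to volunteer for the panel."""
    weights = [supporter_weight if person else 1.0 for person in pop]
    return random.choices(pop, weights=weights, k=n)

def margin_of_error(p, n):
    """Approximate 95% margin of error for a proportion. Note the irony:
    this formula assumes a probability sample, which an opt-in panel is not."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

for quota in (40, 200):
    sample = opt_in_sample(north_pop, quota)
    p = sum(sample) / len(sample)
    moe = margin_of_error(p, quota)
    print(f"quota filled: n={quota:>3}  estimate: {100 * p:.0f}%  "
          f"+/-{100 * moe:.0f} pts  (true value: {100 * NORTH_TRUE_SUPPORT:.0f}%)")
```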

It would be convenient to treat this as individual carelessness. It isn't. Phrasing findings as "People in B.C. believe X" is standard practice in commercial research reporting. By the time such a finding arrives in a government report, it reflects the conventions of an industry shaped for clients who want confident, actionable findings — not the higher accountability standard that government reporting requires. Nobody along that chain is necessarily acting in bad faith. The system isn't set up to apply a different standard.

Changing this means more than asking individual researchers to be more careful. Procurement specifications could define what reporting language is appropriate. Program areas and policy advisors could learn enough to ask better questions when findings arrive. Treating accurate language as a professional standard — not as an optional qualifier added out of caution — is something the research function could model, and doing so protects credibility when methodology comes under scrutiny.

"People in B.C. who were surveyed believe X" is accurate. It tells readers what they're looking at: the views of people who participated in this survey, which may or may not reflect the broader public. It makes no claim the method can't support. Changing a phrase in a report template costs nothing. Defending an overclaimed finding in a public forum costs considerably more. Getting the language right isn't modesty about what surveys can tell us — it's honesty about what they can't. Surveys conducted on commercial opt-in panels are a legitimate research tool. They deserve to be reported accurately.


A note on language: this essay uses "people in B.C." rather than "British Columbians" throughout. Not everyone who lives in British Columbia identifies as a British Columbian — a distinction that matters particularly for Indigenous peoples, whose relationship to colonial place-names is not uniform. The BC Government's Content Design team recommends this phrasing for the same reason. It seemed worth applying here, in an essay that is, after all, about saying what we actually mean.