Canadian Legal Problems Survey (CLPS)

Detailed information for 2021

Status:

Active

Frequency:

One Time

Record number:

5337

The purpose of the Canadian Legal Problems Survey (CLPS) is to identify the kinds of serious problems people face, how they attempt to resolve them, and how these experiences may impact their lives. The information collected will be used to better understand the various methods people use to resolve problems, including not only formal systems such as courts and tribunals but also informal channels such as self-help strategies.

Data release - January 18, 2022

Description

The Canadian Legal Problems Survey (CLPS) collects information on serious disputes or problems that Canadians have encountered, which may or may not require legal help, and on the impacts of those problems on their lives. Topics covered in the survey include the types of serious problems experienced, the relationships between those problems, actions taken to resolve or attempt to resolve the problems, access to legal help, costs associated with the legal problems, the level of understanding of the legal implications of the problems, the evolution and status of the problems, and the impacts of the problems on respondents' lives, including health, family and work.

The survey aims to gather information that will help governments better understand the characteristics and mechanisms involved in those situations, and evaluate Canadians' access to legal help and the costs associated with legal issues. The survey results will inform the development of tools and measures to support Canadians experiencing legal issues and will be used in the evaluation of federal contributions to civil legal aid. In addition, the information will be used to inform and develop programs that address Canadians' legal needs and problems, such as supporting community justice centres, enhancing legal literacy and other people-centred approaches to access to justice.

Reference period: 2021 and "In the past 3 years"

Subjects

  • Crime and justice
  • Justice issues

Data sources and methodology

Target population

The survey target population includes individuals 18 years of age or older living in one of Canada's 10 provinces, with the exception of individuals living in a collective dwelling, an institution or on an Indian reserve.

Instrument design

The content for the Canadian Legal Problems Survey electronic questionnaire was developed in collaboration with the Department of Justice Canada.

The questionnaire underwent qualitative testing in both official languages, conducted by Statistics Canada's Questionnaire Design Resource Centre.

Sampling

This is a sample survey with a cross-sectional design.

Sampling unit
The survey frame was a person-based list frame, constructed using the 2016 long-form Census and other administrative files. The sample consisted of individuals randomly selected from the frame; those specific individuals were invited to participate in the survey.

Sampling and sub-sampling
A sample of 42,400 people was randomly selected from the survey frame: a representative sample of 29,972 people from the general population and an oversample of 12,428 Indigenous people.

Stratification method
The CLPS frame was stratified by province, and by Indigenous / non-Indigenous status. The Indigenous strata were sub-stratified by Indigenous identity (First Nations, Métis and Inuit) in order to improve the quality of estimates by Indigenous identity.

Data sources

Data collection for this reference period: 2021-02-01 to 2021-08-20

Responding to this survey is voluntary.

Data were collected directly from survey respondents either through an electronic questionnaire (EQ) or through computer assisted telephone interviewing (CATI) in both official languages.

Survey invitations were sent by mail to people in the survey sample along with a brochure. The unique Statistics Canada (StatCan) survey access code (SAC) was provided in the invitation. The targeted respondents could log in to the StatCan web portal with their SAC to fill out their questionnaire.

Four reminder letters were sent over the course of data collection. StatCan conducted follow-up phone calls to non-respondents in the sampling frame to increase response rates, giving participants the opportunity to complete the survey over the phone with a trained StatCan interviewer.

View the Questionnaire(s) and reporting guide(s).

Error detection

Error detection was done throughout the survey process.

The survey application was developed and tested thoroughly before collection. Edits were programmed into the self-response electronic questionnaire (rEQ), as well as into the collection management portal (CMP) that was used to conduct interviews by telephone.

The data capture programs in the survey application allow only a valid range of codes for each question, include built-in edits, and automatically follow the flow of the questionnaire.

Once the questionnaires were submitted and the responses added to the database, the same edits as in the collection systems were performed, along with additional, more detailed edits. Any duplicate cases were verified and resolved.
The processing team used the Social Survey Processing Environment (SSPE) set of generalized processing steps and utilities to allow the completion of the processing of the survey in a timely fashion and with high quality outputs.

The SSPE provided a structured environment for monitoring data processing, ensuring that best practices and harmonized business processes were followed.

Edits were performed automatically and manually at various stages of processing, at both macro and micro levels. They included validity, consistency and flow edits. A series of checks was done to ensure the consistency of survey data; for example, respondents who reported that they received a reimbursement should not have reported a reimbursement amount of $0. Flow edits were used to ensure respondents followed the correct path and to fix off-path situations.
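To illustrate, here is a minimal sketch of how such a consistency edit might be expressed in Python with pandas; the data frame and column names are hypothetical and do not correspond to the actual CLPS processing variables:

import pandas as pd

# Hypothetical processed records; column names are illustrative only.
records = pd.DataFrame({
    "case_id": [101, 102, 103],
    "received_reimbursement": ["Yes", "Yes", "No"],
    "reimbursement_amount": [250.0, 0.0, None],
})

# Consistency edit: a respondent who reported receiving a reimbursement
# should not have a reimbursement amount of $0.
inconsistent = records[
    (records["received_reimbursement"] == "Yes")
    & (records["reimbursement_amount"] == 0)
]
print(inconsistent[["case_id"]])  # cases flagged for review and resolution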

The flow edits carried out ensured that the data set followed the flow of questions in the questionnaire, using a 'top-down' strategy.

A "Valid Skip" code is used to indicate that a question was not asked to a respondent based on an answer to a previous question. For example, if a respondent answered "No" to the question that asks if they had a problem with a large purchase or service, follow-up questions pertaining to that type of problem should be skipped automatically by the application. In processing, all of these follow up questions would receive a "Valid skip" value (6, 96, 996, etc.).

A "Not Stated" code is used to indicate that the question was shown to the respondent and they did not respond. It differs from a "Valid Skip" in that the question was asked or shown to the respondent and they did not respond, rather than the question not being shown the question at all. In addition, if no answer was provided for a question, the subsequent follow up questions are also set to "Not stated". For example, if a respondent did not respond to the question that asks if they had a problem with a large purchase or service, all follow-up questions pertaining to this type of problem would be set to "Not Stated" (9, 99, 999, etc.).

Imputation

For a small number of cases, the province of residence was not stated in the submitted questionnaire. In those cases, the province was imputed using the value from the sample file.
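A minimal sketch of this deterministic imputation, assuming hypothetical response and sample files keyed by a case identifier:

import numpy as np
import pandas as pd

# Hypothetical files; column names are illustrative only.
responses = pd.DataFrame({"case_id": [1, 2, 3],
                          "province": ["Ontario", np.nan, "Quebec"]})
sample_file = pd.DataFrame({"case_id": [1, 2, 3],
                            "province": ["Ontario", "Manitoba", "Quebec"]})

# Impute a missing province with the province recorded on the sample file.
merged = responses.merge(sample_file, on="case_id", suffixes=("", "_frame"))
merged["province"] = merged["province"].fillna(merged["province_frame"])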

Estimation

The principle behind estimation in a probability sample is that each unit in the sample "represents", besides itself, several other units not in the sample. For example, in a simple random 2% sample of the population, each unit in the sample represents 50 units in the population.

Weighting is a process which calculates, for each record, what this number is. This weight appears on the microdata file, and must be used to derive meaningful estimates from the survey.

The following steps were performed to calculate sampling weights for the CLPS (a simplified sketch follows the list):
1) Design weights were generated by computing the inverse of the probability of selection, taking into account the likelihood that a person was selected for the 2016 long-form Census.
2) Persons who lived outside of the ten provinces, persons younger than 18 years of age, and deceased persons are out of scope for the survey. These persons were removed from the weighting process.
3) The weights of the persons that responded to the survey were inflated to account for the persons that did not respond to the survey. Information from the Census frame was used for the adjustment.
4) The weights were calibrated so that they sum to demographic population counts derived from demographic estimates and 2016 Census dissemination data.
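The sketch below runs through these four steps in simplified form, using hypothetical columns, a single adjustment dimension (province) and made-up population totals; the actual CLPS weighting relies on the Census-based frame and more detailed adjustment cells:

import pandas as pd

# Hypothetical sample file; all values are illustrative only.
sample = pd.DataFrame({
    "selection_prob": [0.002, 0.004, 0.002, 0.004],
    "in_scope":       [True, True, True, False],
    "responded":      [True, False, True, True],
    "province":       ["ON", "ON", "QC", "QC"],
})

# Step 1: design weight = inverse of the probability of selection.
sample["weight"] = 1.0 / sample["selection_prob"]

# Step 2: remove out-of-scope persons from the weighting process.
sample = sample[sample["in_scope"]].copy()

# Step 3: inflate respondent weights to account for non-respondents
# (here within province; the CLPS used information from the Census frame).
resp = sample[sample["responded"]].copy()
nr_factor = (sample.groupby("province")["weight"].sum()
             / resp.groupby("province")["weight"].sum())
resp["weight"] *= resp["province"].map(nr_factor)

# Step 4: calibrate so the weights sum to known population counts.
pop_counts = pd.Series({"ON": 11_000_000, "QC": 6_500_000})  # made-up totals
cal_factor = pop_counts / resp.groupby("province")["weight"].sum()
resp["weight"] *= resp["province"].map(cal_factor)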

Quality evaluation

Quality assurance measures were implemented at every collection and processing step. These measures included the recruitment of qualified interviewers, training provided to interviewers on specific survey concepts and procedures, procedures to minimize data capture errors, and edit quality checks to verify the processing logic.
While rigorous quality assurance mechanisms are applied across all steps of the statistical process, validation and scrutiny of the data by statisticians are the ultimate quality checks prior to dissemination. Many validation measures were implemented. They included (among others):

- Verification of Estimates through Cross-tabulation
- Coherence Analysis Based on Known Current Events
- Consultation with Stakeholders Internal to Statistics Canada
- Consultation with External Stakeholders
- Review of external production processes

Disclosure control

Statistics Canada is prohibited by law from releasing any information it collects that could identify any person, business, or organization, unless consent has been given by the respondent or as permitted by the Statistics Act. Various confidentiality rules are applied to all data that are released or published to prevent the publication or disclosure of any information deemed confidential. If necessary, data are suppressed to prevent direct or residual disclosure of identifiable data.

Revisions and seasonal adjustment

This methodology type does not apply to this survey.

Data accuracy

Since the CLPS is a sample survey, all estimates are subject to both sampling and non-sampling errors.

Non-sampling errors can be defined as errors arising during the course of virtually all survey activities, apart from sampling. These include coverage errors, non-response errors, response errors, interviewer errors, coding errors, and other types of processing errors.

The response rate for the CLPS is 50.7%. Non-respondents often have different characteristics from respondents, which can result in bias. Attempts were made to reduce the potential non-response bias as much as possible through weighting adjustments.

Sampling error is defined as the error that arises because an estimate is based on a sample rather than the entire population. The sampling error for the CLPS is reported through 95% confidence intervals. The 95% confidence interval of an estimate means that if the survey were repeated over and over again, then 95% of the time (or 19 times out of 20), the confidence interval would cover the true population value.
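As an illustration only, here is a minimal sketch of a 95% confidence interval for a weighted proportion using a normal approximation; the weights and indicator values are made up, and production variance estimation for a complex design such as the CLPS would use a design-based method rather than this simple formula:

import numpy as np

# Hypothetical survey weights and a 0/1 indicator (e.g. reported a serious
# problem); values are illustrative only.
weights = np.array([500.0, 750.0, 500.0, 600.0])
had_problem = np.array([1, 0, 1, 0])

# Weighted proportion estimate.
p_hat = np.sum(weights * had_problem) / np.sum(weights)

# Normal-approximation 95% CI ignoring the complex design.
n = len(weights)
se = np.sqrt(p_hat * (1 - p_hat) / n)
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"Estimate {p_hat:.3f}, 95% CI {low:.3f} to {high:.3f}")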
