Canadian Survey of Cyber Security and Cybercrime (CSCSC)

Detailed information for 2021






The purpose of the 2021 Canadian Survey of Cyber Security and Cybercrime is to measure the impact of cybercrime on Canadian businesses.

The survey gathers information about:
- The measures businesses have implemented for cyber security, including employee training;
- The types of cyber security incidents that impact businesses; and
- The costs associated with preventing and recovering from cyber security incidents.

Data release - October 18, 2022


The 2021 Canadian Survey of Cyber Security and Cybercrime is conducted on behalf of Public Safety Canada. This survey was launched because of the need to benchmark and monitor the rapidly evolving environment surrounding cyber security and cybercrime. The data collected serve a broad objective: to further understand the impact of cybercrime on Canadian businesses, including investment in cyber security measures, cyber security training, the volume of cyber security incidents and the costs associated with responding to these incidents.

Reference period: The 12-month calendar year

Collection period: January through March


  • Business and government internet use
  • Crime and justice
  • Information and communications technology

Data sources and methodology

Target population

The target population was derived from Statistics Canada's Business Register (BR). The BR is an information database on the Canadian business population and serves as a frame for all Statistics Canada business surveys. It is a structured list of businesses engaged in the production of goods and services in Canada.
This survey covers enterprises operating in Canada in almost all industrial sectors. The industries in the target population are based on the 2017 North American Industry Classification System (NAICS). Sectors 11, 21, 22, 23, 31, 32, 33, 41, 44, 45, 48, 49, 51, 52, 53, 54, 55, 56, 61, 62, 71, 72 and 81 were included. The following entities were excluded to arrive at the target population:

- Enterprises with fewer than 10 employees;
- Crop production (111), animal production and aquaculture (112), fishing, hunting and trapping (114), support activities for crop production (1151), support activities for animal production (1152), specialty trade contractors (238), head offices (551114), private households (814), and public administration (91);
- Enterprises from sectors 11, 23, 48, 49, 53, 54, 56, 61, 62, 71, 72 and 81 with revenue of less than $100,000;
- Enterprises from sectors 21, 22, 31, 32, 33, 41, 44, 45, 51, 52 and 55 with revenue of less than $250,000.

Instrument design

The survey data are collected using an electronic questionnaire.

The questionnaire had minor revisions compared to the previous version in 2019 to better meet the policy needs of the sponsoring partner: Public Safety Canada. Subject matter experts, private businesses and external stakeholders were also consulted during the content development process.

Cognitive testing of the questionnaire content was carried out in conjunction with the Questionnaire Design Resource Center based at Statistics Canada in both official languages. For the 2017 iteration, the entire questionnaire was tested through one-on-one interviews with potential respondents that took place in Ottawa, Toronto, Montreal and Vancouver. For the 2019 and 2021 iterations, the revised content was tested through one-on-one telephone interviews with potential respondents. The resulting comments and analysis of these interviews led to revisions of the questionnaire to make the questions more relevant to respondents and easier to answer.


Sampling

This is a sample survey with a cross-sectional design.

The survey frame was constructed by selecting all enterprises from the BR that met the definition of the target population. The target population comprised 188,359 enterprises.

Enterprises are stratified by industry and size categories. The size category is based on the number of employees of the enterprise. Three size categories were built: small (10-49 employees), medium (50-249 employees) and large (250 or more employees).
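The size-stratum assignment described above can be sketched as follows (an illustrative helper, not Statistics Canada code; the survey also stratifies by industry):

```python
def size_stratum(employees: int) -> str:
    """Assign an enterprise to one of the survey's three size strata
    based on its employee count."""
    if employees < 10:
        # enterprises with fewer than 10 employees are excluded from the target population
        raise ValueError("out of scope: fewer than 10 employees")
    if employees <= 49:
        return "small"
    if employees <= 249:
        return "medium"
    return "large"
```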

The overall size of the survey sample was determined based on the following:

- Domains specified at the 2-digit, 3-digit or 4-digit level of NAICS, depending on the sector;
- An expected standard error (SE) of 4.75% at the 2-digit NAICS level and 5.85% at specific 3-digit or 4-digit NAICS levels, with equal quality across the three employment size groups, for a reported proportion of 50%;
- A response rate of 65%.
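Under simple random sampling, targets like these translate into a minimum per-domain sample size via the standard formula n = p(1 − p) / SE², inflated for expected non-response. The sketch below is illustrative only; the actual allocation also accounted for stratification and enterprise revenue.

```python
import math

def required_sample(se_target: float, p: float = 0.50, response_rate: float = 0.65) -> int:
    """Minimum number of sampled units needed so that, after non-response,
    a proportion p can be estimated with the target standard error
    under simple random sampling (ignoring finite population correction)."""
    n_respondents = p * (1 - p) / se_target ** 2   # respondents needed
    return math.ceil(n_respondents / response_rate)  # units to sample
```

For the 2-digit NAICS target (SE = 4.75%) this gives 171 units per domain; for the 5.85% target, 113.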

The probability of a particular enterprise being selected in the survey sample was determined based on the proportion of the survey sample allocated to its domain, the number of enterprises in its domain in the population and its revenue.

The final sample size was 12,158 enterprises.

Data sources

Data collection for this reference period: 2022-01-10 to 2022-03-31

Responding to this survey is mandatory.

Data are collected directly from survey respondents.

Data are collected through an electronic questionnaire. Businesses are initially contacted by telephone during a pre-contact phase to identify an appropriate contact within the enterprise to respond to the survey.

Follow-up because of non-response, inconsistent or missing data is done by phone on a priority basis.

View the Questionnaire(s) and reporting guide(s).

Error detection

Error detection is an integral part of both collection and data processing activities. Automated edits are applied to data records during collection to identify and correct reporting errors.

The processing phase of the survey was for the most part concerned with applying edits for consistency and validity to each record. Consistency edits ensure that data reported in one question do not contradict information reported in another question. Validity edits ensure that the data reported are valid (e.g., percentages do not exceed 100%).
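Validity edits of the kind described above can be sketched as simple range checks (the field names here are hypothetical, not the survey's actual variable names):

```python
def validity_errors(record: dict) -> list:
    """Return a list of validity-edit failures for one record.
    Illustrative only: checks a percentage field and a dollar field."""
    errors = []
    pct = record.get("pct_employees_trained")
    if pct is not None and not 0 <= pct <= 100:
        errors.append("pct_employees_trained: percentage outside 0-100")
    cost = record.get("recovery_cost")
    if cost is not None and cost < 0:
        errors.append("recovery_cost: negative dollar amount")
    return errors
```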

Outlier detection checks were also conducted for key variables during data processing. Some outliers that could not be validated were replaced by imputed values.


Imputation

The imputation of non-responses and erroneous records was performed using the nearest neighbour donor imputation procedure in the BANFF generalized system. This procedure uses a nearest neighbour approach to find, for each record requiring imputation, a valid record that is most similar to it and that will ensure that the imputed record does not violate any of the logical flows and input restrictions of the questionnaire.

Similar records are found by defining imputation classes which take into account other variables that are correlated with the missing or erroneous values. When a nearest neighbour cannot be found for some records in the most specific imputation classes, the definition of the imputation classes is expanded and the data are reprocessed. This imputation processing continues using a predetermined sequence until nearest neighbour donors are assigned to all records requiring imputation.
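A minimal sketch of nearest-neighbour donor imputation with class expansion might look like this (illustrative only; the BANFF system additionally enforces the questionnaire's edit rules when choosing donors):

```python
def nearest_neighbour_impute(records, target, aux, cls):
    """For each record with a missing target value, copy the value from the
    donor whose auxiliary variable is closest, searching the record's
    imputation class first and "expanding" to all donors if the class is empty.
    records: list of dicts; target: variable to impute (None = missing);
    aux: correlated auxiliary variable; cls: imputation-class variable."""
    donors_all = [r for r in records if r[target] is not None]
    for rec in records:
        if rec[target] is not None:
            continue
        # most specific class first, then fall back to the expanded donor pool
        donors = [d for d in donors_all if d[cls] == rec[cls]] or donors_all
        nearest = min(donors, key=lambda d: abs(d[aux] - rec[aux]))
        rec[target] = nearest[target]
    return records
```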


Estimation

The response values for sampled units were multiplied by a final weight in order to provide an estimate for the entire population. The final weight was calculated using several factors, including the probability for a unit to be selected in the sample and an adjustment to represent the units that could not be contacted or that refused to respond. Using a statistical technique called calibration, the final set of weights was adjusted in such a way that the sample represents as closely as possible the entire population.
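The weighting steps described above can be sketched as a non-response adjustment followed by a simple ratio calibration (illustrative only, not the production estimation system; real calibration typically uses several auxiliary totals):

```python
def calibrated_weights(design_weights, responded, x, known_total_x):
    """Compute final weights for respondents.
    design_weights: inverse selection probabilities for all sampled units;
    responded: parallel list of booleans; x: auxiliary variable with a
    known population total used for ratio calibration."""
    # 1. Non-response adjustment: respondents absorb the weight of non-respondents.
    total_design = sum(design_weights)
    resp = [(w, xi) for w, r, xi in zip(design_weights, responded, x) if r]
    adj = total_design / sum(w for w, _ in resp)
    adjusted = [(w * adj, xi) for w, xi in resp]
    # 2. Calibration: scale weights so the weighted total of x matches the known total.
    g = known_total_x / sum(w * xi for w, xi in adjusted)
    return [w * g for w, _ in adjusted]
```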

Sampling error was measured by the standard error (SE) for proportions and by the coefficient of variation (CV) for numerical responses. These measures quantify the variability in an estimate that is attributable to sampling. The SEs and CVs were calculated and are indicated in the data tables by quality flags.

Quality evaluation

Prior to the data release, combined survey results are analyzed for comparability. This analysis includes a detailed review of:

- individual responses (especially for the largest organizations);

- coherence with results from other surveys and studies related to cyber security and cybercrime, including previous iterations of the survey; and

- information from other external sources (e.g. annual reports, news articles).

Disclosure control

Statistics Canada is prohibited by law from releasing any information it collects that could identify any person, business, or organization, unless consent has been given by the respondent or as permitted by the Statistics Act. Various confidentiality rules are applied to all data that are released or published to prevent the publication or disclosure of any information deemed confidential. If necessary, data are suppressed to prevent direct or residual disclosure of identifiable data.

Revisions and seasonal adjustment

This methodology does not apply to this statistical program.

Data accuracy

Errors may occur for various reasons during the collection and processing of the data. For example, non-response is one possible source of error. Under- or over-coverage of the population, differences in the interpretation of questions and mistakes in recording and processing data are other possible sources of error. To the maximum extent possible, these errors are minimized through careful design of the survey questionnaire and verification of the survey data.

The data accuracy indicators used for the CSCSC are the standard error and the coefficient of variation. The standard error is a commonly used statistical measure indicating the error of an estimate associated with sampling. The coefficient of variation is the standard error expressed as a percentage of the estimate.

Data quality indicators for the survey are based on the standard error (SE) or the coefficient of variation (CV), and the imputation rates.

For estimates rated by SE:
- A is excellent (SE up to 2.5%);
- B is very good (SE 2.5% up to 5.0%);
- C is good (SE 5.0% up to 7.5%);
- D is acceptable (SE 7.5% up to 10.0%);
- E is use with caution (SE 10.0% up to 12.5%); and
- F is too unreliable to be published (SE 12.5% or higher).

For estimates rated by CV:
- A is excellent (CV up to 5%);
- B is very good (CV 5% up to 10%);
- C is good (CV 10% up to 15%);
- D is acceptable (CV 15% up to 20%);
- E is use with caution (CV 20% up to 25%); and
- F is too unreliable to be published (CV 25% or higher).
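The flag assignment can be sketched in code. Note that the treatment of exact boundary values (e.g., whether an SE of exactly 2.5% falls in A or B) is an assumption here, since "up to" is ambiguous:

```python
def cv_percent(se: float, estimate: float) -> float:
    """Coefficient of variation: the standard error expressed as a
    percentage of the estimate."""
    return 100.0 * se / estimate

def quality_flag(value: float, kind: str) -> str:
    """Map an SE or CV (in percent) to the letter quality indicator.
    Boundary values are assumed to fall in the better category."""
    cuts = {"SE": (2.5, 5.0, 7.5, 10.0, 12.5), "CV": (5, 10, 15, 20, 25)}
    for letter, cut in zip("ABCDE", cuts[kind]):
        if value <= cut:
            return letter
    return "F"
```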

Response rates:

The response rate at the estimation phase was 74%.

Non-response bias:

In addition to increasing variance, non-response can result in biased estimates if non-respondents have different characteristics from respondents. Non-response is addressed through survey design, respondent follow-up, reweighting, and verification and validation of microdata. Other indicators of quality such as the response rate are also provided.

Coverage error:

The Business Register was used as the frame.
