Canadian Survey of Cyber Security and Cybercrime (CSoCC)

Detailed information for 2017

Status:

Active

Frequency:

Occasional

Record number:

5244

The purpose of this survey is to collect data on the impact of cybercrime on Canadian businesses and the activities they undertake to mitigate its effects. The survey includes information on investment in cyber security measures, cyber security training, the volume of cyber security incidents, and the costs associated with responding to these incidents.

Data release - October 15, 2018

Description

The Canadian Survey of Cyber Security and Cybercrime is conducted on behalf of Public Safety Canada. The survey was launched because of the need to benchmark and monitor the rapidly evolving environment surrounding cyber security and cybercrime. The data collected serve a broad objective: to further understand the impact of cybercrime on Canadian businesses, including investment in cyber security measures, cyber security training, the volume of cyber security incidents, and the costs associated with responding to these incidents. Data from this survey are intended for:

a) Public Safety Canada and other Federal Departments to support the development of evidence-based policy in Canada;
b) Provincial governments, academic researchers, and the Canadian public to better understand the impact of cybercrime on Canadian businesses;
c) Industry associations and private businesses to study the characteristics of cyber security and cybercrime within their industry.

Because cyber security is an emerging issue, data of this type have not previously been collected by the Government of Canada on this scale.

Reference period: The 12-month calendar year

Collection period: Pre-contact period, mid-November to mid-December; collection period, January through March

Subjects

  • Business and government internet use
  • Individual and household internet use
  • Information and communications technology

Data sources and methodology

Target population

The target population consists of private firms in almost all industrial sectors. Enterprises that had under $100K or $250K in revenue, depending on the sector, were excluded from the population frame.

To reduce response burden on small businesses, only enterprises with at least 10 employees were considered for sample selection.

Instrument design

The survey data are collected using an electronic questionnaire.

The questionnaire content was developed in consultation with Public Safety Canada, subject matter experts, other federal departments, private businesses and external stakeholders.

Cognitive testing of the questionnaire content was carried out in two phases, in both official languages, in conjunction with Statistics Canada's Questionnaire Design Resource Centre. The first round of testing concentrated on validating respondents' understanding of concepts, questions and terminology, the appropriateness of response categories, and the availability of the requested information. The content was tested through one-on-one interviews with 23 potential respondents in Ottawa, Montreal and Vancouver. The comments and analysis arising from these interviews led to further revision of the questionnaire to make the questions more relevant to respondents and easier to answer.

For the second phase, seventeen one-on-one cognitive interviews took place in Montreal and Toronto to assess the updated questionnaire content (revised based on the results of the first round), using mock-up screenshots in a fillable PDF form to simulate an electronic questionnaire (EQ). This final round of testing confirmed that respondents could navigate the EQ application with ease while providing the requested information.

The primary challenge in developing the questionnaire was to balance the data needs of the users with the ability and willingness of potential respondents to provide the information.

Sampling

This is a sample survey with a cross-sectional design.

The sample design is a stratified random sample of enterprises classified to the North American Industry Classification System (NAICS) Canada 2012.

Sampling unit:
The sampling unit is the enterprise.

Stratification method:
Strata are defined by industry (at the 2- or 3-digit NAICS level) crossed with enterprise size (based on the number of employees):

- Small enterprises (10 to 49 employees)
- Medium-sized enterprises (50 to 249 employees)
- Large enterprises (250 or more employees)
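The industry-by-size stratification described above can be sketched as follows. This is an illustrative reconstruction, not Statistics Canada's actual implementation; the function names and stratum labels are hypothetical.

```python
def size_group(employees: int) -> str:
    """Classify an enterprise into the survey's employment size groups."""
    if employees < 10:
        raise ValueError("enterprises under 10 employees are out of scope")
    if employees <= 49:
        return "small"
    if employees <= 249:
        return "medium"
    return "large"

def stratum(naics: str, employees: int, digits: int = 2) -> str:
    """Build a stratum label from a NAICS code prefix (2- or 3-digit
    level) and the employment size group."""
    return f"{naics[:digits]}-{size_group(employees)}"

# Example: an enterprise in NAICS 5112 with 120 employees falls in
# stratum "51-medium" when stratifying at the 2-digit level.
```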

Sampling and sub-sampling:
The sample of approximately 12,000 units is selected from the Business Register, with an expected response rate of 60%. The target standard error (SE), with equal quality across the three employment size groups, is 0.0475 for the calculation of proportions at the 2-digit NAICS level, and 0.0585 for specific 3- or 4-digit NAICS groupings.
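To illustrate what such SE targets imply, under simple random sampling within a domain the standard error of an estimated proportion p is roughly sqrt(p(1-p)/n), so a target SE translates into a minimum number of responding units. The sketch below ignores the finite population correction and any design effects, and uses the conservative worst case p = 0.5; it is an illustration, not the survey's actual allocation method.

```python
import math

def required_n(target_se: float, p: float = 0.5) -> int:
    """Responding units needed so that the SE of a proportion p under
    simple random sampling does not exceed target_se (no finite
    population correction; p = 0.5 is the conservative worst case)."""
    return math.ceil(p * (1 - p) / target_se ** 2)

# The targets quoted above:
print(required_n(0.0475))  # 111 responding units per 2-digit domain
print(required_n(0.0585))  # 74 for specific 3- or 4-digit groupings
```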

Data sources

Data collection for this reference period: 2018-01-03 to 2018-03-26

Responding to this survey is mandatory.

Data are collected directly from survey respondents.

Electronic questionnaires (EQ) are used to collect data from respondents. Before questionnaires are sent out, all sampled enterprises are contacted to collect the name and email address of a respondent with sufficient knowledge of the enterprise and its computer and network security to complete the survey (e.g. an IT manager or senior member of staff). Invitations to complete the EQ are sent to respondents with email addresses; access codes are mailed for sampled units with no email address. Intensive non-response follow-up is conducted by email and telephone as appropriate.

View the Questionnaire(s) and reporting guide(s).

Error detection

Error detection is an integral part of both collection and data processing activities. Automated edits are applied to data records during collection to identify reporting errors.

The processing phase of the survey was mostly concerned with applying consistency edits and validity edits to the data reported at the micro level. Consistency edits ensure that data reported in one question do not contradict information reported in another question. Validity edits ensure that the reported data are valid (e.g. that skip patterns are followed).

Outlier detection edits were also applied for key variables during data processing, first at a macro level to find outliers, and then at a micro level to trigger imputation.
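The specific outlier edits used are not published, but a common quartile-based rule flags values far outside the interquartile range. The sketch below is illustrative only and is not the survey's actual edit.

```python
def iqr_outliers(values, k=3.0):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR], a common
    quartile-based outlier rule (k is a tuning constant)."""
    xs = sorted(values)
    n = len(xs)

    def quantile(q):
        # simple linear-interpolation quantile
        pos = q * (n - 1)
        lo = int(pos)
        frac = pos - lo
        return xs[lo] + frac * (xs[min(lo + 1, n - 1)] - xs[lo])

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < low or v > high]

# A reported cost wildly out of line with its stratum gets flagged
# and passed on to imputation:
print(iqr_outliers([1, 2, 3, 4, 5, 1000]))  # [1000]
```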

Imputation

The imputation of item non-response is performed using the nearest neighbour donor imputation procedure in the generalized system BANFF. This procedure uses a nearest neighbour approach to find, for each record requiring imputation, the valid record that is most similar to it and that will allow the imputed recipient record to pass the specified imputation edits and post edits.

These similar records are found by taking into account other variables that are correlated with the missing/incorrect values via the customized imputation classes and matching variables for each variable to be imputed. If nearest neighbour donors are not found for all recipients, then it is necessary to be less restrictive by extending the imputation classes and reprocessing the data. This imputation processing continues using a predetermined sequence until nearest neighbour donors are assigned to all records requiring imputation or until no nearest neighbour donors are available. During imputation, edits and post edits are applied to ensure that the resulting record does not violate any of the specified edits.
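In spirit, the donor search works as sketched below. This is a heavily simplified illustration of nearest neighbour donor imputation, not the BANFF procedure itself: imputation classes, edit checking and the fallback sequence are omitted, and the variable names are hypothetical.

```python
def nn_impute(recipients, donors, match_vars, target):
    """For each recipient record missing `target`, copy the value from
    the donor record minimizing squared Euclidean distance on the
    matching variables. Simplified: no imputation classes or edits."""
    def dist(a, b):
        return sum((a[v] - b[v]) ** 2 for v in match_vars)

    for rec in recipients:
        if rec.get(target) is None:
            donor = min(donors, key=lambda d: dist(rec, d))
            rec[target] = donor[target]
    return recipients

# Hypothetical records: a small firm's missing incident cost is taken
# from the most similar responding firm.
donors = [
    {"employees": 12, "revenue": 1.0, "cost": 5},
    {"employees": 200, "revenue": 40.0, "cost": 120},
]
recipients = [{"employees": 15, "revenue": 1.2, "cost": None}]
nn_impute(recipients, donors, ["employees", "revenue"], "cost")
print(recipients[0]["cost"])  # 5 (from the most similar donor)
```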

Estimation

The sample used for estimation comes from a one-phase sampling process. An initial sampling weight (the design weight) is calculated for each unit of the survey and is the inverse of the probability of selection. It is then adjusted to compensate for complete non-response.

The final weight calculated for each sampling unit indicates how many other units it represents. The final weights are usually either one or greater than one. Sampling units which are selected with certainty (must-take units) have sampling weights of one and only represent themselves.

The sampling unit, the enterprise, is also the estimation unit. The characteristics of the estimation units, including industrial classification, are used to define the domains of estimation. Estimates are produced by simple aggregation of the weighted values of all sampled enterprises found in the domain of estimation.
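The weighting and aggregation described above can be sketched as follows. This is a minimal illustration assuming a single non-response adjustment class; the survey adjusts within more detailed classes, and all names here are illustrative.

```python
def adjusted_weights(design_weights, responded):
    """Inflate the design weights (inverse selection probabilities) of
    respondents so they also represent the non-respondents in the same
    adjustment class; non-respondents get weight zero."""
    total = sum(design_weights)
    resp_total = sum(w for w, r in zip(design_weights, responded) if r)
    factor = total / resp_total
    return [w * factor if r else 0.0
            for w, r in zip(design_weights, responded)]

def estimate_total(weights, values):
    """Simple aggregation of weighted values over a domain."""
    return sum(w * y for w, y in zip(weights, values))

# Three sampled enterprises, each initially representing 10 others;
# the second did not respond, so the respondents' weights are inflated.
w = adjusted_weights([10.0, 10.0, 10.0], [True, False, True])
print(w)                                        # [15.0, 0.0, 15.0]
print(estimate_total(w, [20.0, 0.0, 30.0]))     # 750.0
```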

Quality evaluation

Prior to the data release, combined survey results are analyzed for comparability. This analysis includes a detailed review of:
- individual responses (especially for the largest organizations),
- coherence with results from other surveys and studies related to Cyber Security and Cybercrime, and
- information from other external sources (e.g. annual reports, news articles).

Disclosure control

Statistics Canada is prohibited by law from releasing any information it collects that could identify any person, business, or organization, unless consent has been given by the respondent or as permitted by the Statistics Act. Various confidentiality rules are applied to all data that are released or published to prevent the publication or disclosure of any information deemed confidential. If necessary, data are suppressed to prevent direct or residual disclosure of identifiable data.

Revisions and seasonal adjustment

This methodology does not apply to this statistical program.

Data accuracy

There are two types of errors that can affect the data: sampling errors and non-sampling errors. Non-sampling errors may occur for various reasons during the collection and processing of the data; non-response, for example, is an important source of non-sampling error. Under- or over-coverage of the population, differences in the interpretation of questions, and mistakes in recording and processing data are other examples. To the maximum extent possible, these errors are minimized through careful design of the survey questionnaire and verification of the survey data.

The data accuracy indicators used for the CSoCC are the standard error and the coefficient of variation. The standard error is a commonly used statistical measure indicating the error of an estimate associated with sampling. The coefficient of variation is the standard error expressed as a percentage of the estimate.

Data quality indicators for the survey are based on the standard error (SE) and the imputation rates. The quality ratings are:
- A, excellent (SE up to 2.5%)
- B, very good (SE 2.5% up to 5.0%)
- C, good (SE 5.0% up to 7.5%)
- D, acceptable (SE 7.5% up to 10.0%)
- E, use with caution (SE 10.0% up to 12.5%)
- F, too unreliable to be published (SE 12.5% or higher)
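The CV definition and the SE-based letter grades translate directly into code. A small sketch (the handling of values falling exactly on a boundary is an assumption, since the rating text does not specify it):

```python
def cv_percent(estimate: float, se: float) -> float:
    """Coefficient of variation: the standard error expressed as a
    percentage of the estimate."""
    return 100.0 * se / estimate

def quality_grade(se_percent: float) -> str:
    """Map a standard error (in percent) to the letter grades above.
    Boundary values are assigned to the higher (worse) grade, an
    assumption not stated in the published rating text."""
    for grade, upper in [("A", 2.5), ("B", 5.0), ("C", 7.5),
                         ("D", 10.0), ("E", 12.5)]:
        if se_percent < upper:
            return grade
    return "F"  # too unreliable to be published

# An estimate of 200 with SE 10 has CV 5.0%, rated "C" (good).
print(quality_grade(cv_percent(200.0, 10.0)))  # C
```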

Response rates:
The response rate at the estimation phase is 88.04%.

Non-response bias:
In addition to increasing variance, non-response can result in biased estimates if non-respondents have different characteristics from respondents. Non-response is addressed through survey design, respondent follow-up, reweighting, and verification and validation of microdata. Other indicators of quality such as the response rate are also provided.

Coverage error:
The Business Register was used as the frame.
