Survey on Sexual Misconduct in the Canadian Armed Forces (SSMCAF)
Detailed information for 2016
The purpose of the Survey on Sexual Misconduct in the Canadian Armed Forces (SSMCAF) is to collect information about the prevalence and nature of inappropriate sexual behaviour within the military, the reporting of inappropriate sexual behaviour to authorities and military members' perception of the Canadian Armed Forces' (CAF) response to this issue.
Data release - November 28, 2016
The overall objective of this survey is to gather data on the prevalence and nature of self-reported sexual misconduct within the military workplace and/or involving military members, Department of National Defence (DND) employees, or DND contractors within or outside the military workplace.
The Survey on Sexual Misconduct in the Canadian Armed Forces (SSMCAF) has the following specific objectives:
- Measure the prevalence and nature of inappropriate sexual behaviours and sexual assault within the Regular Force and the Primary Reserve;
- Measure the prevalence and nature of discriminatory behaviours based on sex, sexual orientation, or gender identity;
- Measure military members' perceptions of the CAF's response to the issue of sexual misconduct.
Statistics Canada conducted all phases of the survey on a cost-recovery basis for the Canadian Armed Forces.
Reference period: Previous 12-month period and since joining the CAF.
Data sources and methodology
The questionnaire content development involved a review of international survey instruments developed and implemented to measure the prevalence of sexual misconduct in the context of the military environment. Subject matter experts within Statistics Canada (STC) were also consulted. In addition, content was developed in view of the comparability of SSMCAF measures with that of other STC surveys, namely the General Social Survey on Victimization. This was particularly the case with the classification of sexual assault.
Once the questions were developed, they underwent qualitative testing by the STC Questionnaire Design Resource Center (QDRC), to ensure that respondents could comprehend the questions, and to ensure that they were meaningful and would yield valid results.
Qualitative testing, in the form of one-on-one interviews, was conducted in four sites: Ottawa, Trenton, Val-Cartier and Halifax. Interviewees included a mix of CAF members with different demographic and work-related characteristics, including sex, age, rank, environmental command and Regular Force/Reserve Force status.
The survey content was finalized in collaboration with the survey sponsors. An electronic questionnaire application was then developed, with the content built into individual survey blocks and, eventually, into an integrated application.
This survey is a census with a cross-sectional design.
The sample design is a census of the in-scope population, namely Regular Force and Primary Reserve members. The sampling units are the individual members, so the sample size and the population size are the same. The population is broken down by capability component (Army, Navy, Air Force, CMP, Other) for both the Regular Force and the Primary Reserve Force.
The census consisted of approximately 82,000 members of the Regular Force and Primary Reserve.
Data collection for this reference period: 2016-04-10 to 2016-06-24
Responding to this survey is voluntary.
Data are collected directly from survey respondents.
Data were collected using an electronic questionnaire (EQ).
The EQ application included a standard set of response codes to identify all possible outcomes and included soft edits to check the consistency of responses for some questions. The EQ application was tested prior to use to ensure that non-valid question responses would be flagged and that all question flows produced the correct result. These measures helped to limit the amount of response data that had to be "cleaned" at the end of the collection process.
The electronic application began with an introduction page that presented the agency, the name and purpose of the survey, the collaboration with the Canadian Armed Forces and how the survey results would be used. Persons were told that their participation in the survey was voluntary and that their information would remain strictly confidential.
Prior to collection, Statistics Canada sent an introductory letter to each person with a valid mailing address via Canada Post and an email invitation to each person who had an available work email address. Mail and email reminders were sent multiple times during collection.
Responses to survey questions were captured directly by the secure network when respondents submitted their questionnaires. Electronic questionnaires are often considered more appropriate for sensitive content, and can also reduce transcription and data-transmission errors. The response data were encrypted to ensure confidentiality and transferred over a secure network for further processing.
Respondents were asked permission for Statistics Canada to link their survey data with the Department of National Defence administrative data, specifically the Human Resources Management System, for the sole purpose of obtaining information on respondents' command organization.
Statistics Canada did not link the survey data with any other information.
View the Questionnaire(s) and reporting guide(s).
Electronic files containing the daily transmissions of completed respondent survey records were combined to create the "raw" survey file. Before further processing, verification was performed to identify and eliminate potential duplicate records and to drop non-response and out-of-scope records.
A number of out-of-scope respondents were identified during the data processing stage. A small percentage of the sample, identified through specific questions on current membership in the Regular Force and Primary Reserve Force, was determined to be out-of-scope at the time of electronic collection. In addition, some out-of-scope respondent records were found during the data clean-up stage. All respondent records that were determined to be out-of-scope and those records that contained no data were removed from the data file.
After the verification stage, editing was performed to identify errors and modify affected data at the individual variable level. The first editing step was to identify errors and determine which items from the survey output needed to be kept on the survey master file. Subsequent to this, invalid characters were deleted and the remaining data items were formatted appropriately.
The first type of error treated was errors in questionnaire flow, where questions that did not apply to the respondent (and should therefore have remained unanswered) were found to contain answers. In these cases, a computer edit automatically eliminated superfluous data by following the flow of the questionnaire implied by answers to previous, and in some cases, subsequent questions. For skips based on answered questions, all skipped questions were set to "Valid skip" (6, 96, 996, etc.). For skips based on "Don't know" or "Refusal", all skipped questions were set to "Not stated" (9, 99, 999, etc.). The remaining empty items were filled with a numeric value (9, 99, 999, etc., depending on variable length). These codes are reserved for processing purposes and mean that the item was "Not stated".
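The flow-edit logic above can be sketched as follows. The reserved codes follow the scheme in the text, but the function and field names are hypothetical; this is not the actual SSMCAF processing system.

```python
# Illustrative sketch of the questionnaire flow edit. The reserved
# codes (6/96/996 and 9/99/999) follow the text; everything else is
# a hypothetical example, not the real processing system.
VALID_SKIP = {1: "6", 2: "96", 3: "996"}   # code chosen by variable width
NOT_STATED = {1: "9", 2: "99", 3: "999"}

def apply_flow_edit(record, gate_question, skipped_questions, widths):
    """Overwrite answers to questions the respondent should have skipped.

    If the gating question was answered, skipped items become
    'Valid skip'; if it was 'Don't know' or 'Refusal', they become
    'Not stated'.
    """
    gate = record.get(gate_question)
    codes = NOT_STATED if gate in ("don't know", "refusal") else VALID_SKIP
    for q in skipped_questions:
        record[q] = codes[widths[q]]
    return record

rec = {"q1": "no", "q2": "stray answer"}    # q2 should have been skipped
apply_flow_edit(rec, "q1", ["q2"], {"q2": 2})
# rec["q2"] is now "96" (Valid skip)
```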
Consistency edits were also applied.
No imputation methods were employed.
The principle behind estimation in a probability sample is that each unit in the sample "represents", besides itself, several other units not in the sample. For example, in a simple random 2% sample of the population, each unit in the sample represents 50 units in the population.
The weighting phase is a step which calculates, for each record, how many units in the population each sample unit represents. This weight appears on the microdata file, and must be used to derive meaningful estimates from the survey.
The SSMCAF is a census rather than a sample. If the response rate had been 100%, the weight would have been one for each respondent; however, there was some non-response to the survey. The survey weights were inflated to account for this non-response, so each respondent to the SSMCAF represents not only themselves but possibly several non-respondents as well.
The following sub-section describes the method used to generate weights for the SSMCAF.
The weights were calculated in three steps:
1. Design Weight: The SSMCAF is a census; therefore, the design weights were 1 for all in-scope individuals on the survey frame.
2. Non-response Adjustments: The weights of the respondents were adjusted to represent the individuals who did not respond to the survey. The adjustment factors were calculated within non-response weighting classes formed using frame information. The frame variables age, sex and Regular Force/Reserve Force were used to define the weighting classes.
3. Calibration: The weights were calibrated so that the sum of the SSMCAF weights matched frame counts for age group, sex, Regular Force/Reserve Force, rank and command organization.
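The three steps above can be sketched on a toy frame. The tiny data set, the single non-response class variable and the single calibration variable are all hypothetical simplifications; the actual SSMCAF used age, sex and Regular/Reserve status for non-response classes, and calibrated to several dimensions at once.

```python
from collections import Counter

# Toy frame: one record per in-scope member (hypothetical data).
frame = [
    {"age": "<30", "sex": "F", "responded": True},
    {"age": "<30", "sex": "F", "responded": False},
    {"age": "30+", "sex": "F", "responded": True},
    {"age": "30+", "sex": "M", "responded": True},
    {"age": "30+", "sex": "F", "responded": False},
    {"age": "<30", "sex": "M", "responded": True},
]

# Step 1: design weight of 1 for every in-scope frame unit (census).
for unit in frame:
    unit["w"] = 1.0

# Step 2: inflate respondent weights within non-response classes
# (here age only, as a simplification).
size = Counter(u["age"] for u in frame)
resp_count = Counter(u["age"] for u in frame if u["responded"])
respondents = [u for u in frame if u["responded"]]
for u in respondents:
    u["w"] *= size[u["age"]] / resp_count[u["age"]]

# Step 3: calibrate so that the weights reproduce frame counts by sex
# (again a simplification of the real multi-dimensional calibration).
frame_counts = Counter(u["sex"] for u in frame)
weighted = {s: sum(u["w"] for u in respondents if u["sex"] == s)
            for s in frame_counts}
for u in respondents:
    u["w"] *= frame_counts[u["sex"]] / weighted[u["sex"]]

total = sum(u["w"] for u in respondents)   # equals the frame size, 6
```

After calibration, the four respondents carry the full frame total of 6, with the weighted counts matching the frame counts by sex.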
The SSMCAF uses weighting adjustments for which there is no simple formula that can be used to calculate variance estimates. Therefore, the bootstrap method, a resampling method, was used to calculate variance estimates.
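The bootstrap idea can be illustrated with a naive sketch: recompute a weighted estimate on many resamples and take the variance of the replicates. Note that production surveys typically ship precomputed bootstrap weights rather than resampling records directly, so the data and approach below are illustrative only.

```python
import random

# Naive bootstrap variance sketch for a weighted proportion.
# The weights and values are made-up illustrative numbers.
random.seed(1)
weights = [1.9, 2.1, 1.8, 2.2, 2.0]
values = [1, 0, 1, 1, 0]             # e.g. indicator for a behaviour

def weighted_prop(vals, wts):
    return sum(v * w for v, w in zip(vals, wts)) / sum(wts)

B = 1000                             # number of bootstrap replicates
replicates = []
for _ in range(B):
    idx = [random.randrange(len(values)) for _ in values]
    replicates.append(weighted_prop([values[i] for i in idx],
                                    [weights[i] for i in idx]))

est = weighted_prop(values, weights)           # point estimate: 0.59
mean_b = sum(replicates) / B
variance = sum((r - mean_b) ** 2 for r in replicates) / (B - 1)
se = variance ** 0.5                           # bootstrap standard error
```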
While rigorous quality assurance mechanisms are applied across all steps of the statistical process, validation and scrutiny of the data by statisticians are the ultimate quality checks prior to dissemination. Many validation measures were implemented, including:
a. Verification of estimates through comparison of similar measures within the survey
b. Verification of estimates through cross-tabulations
c. Confrontation with other similar sources of data
d. Consultation with stakeholders internal to Statistics Canada
e. Consultation with external stakeholders
f. Coherence analysis based on quality indicators
Statistics Canada is prohibited by law from releasing any data which would divulge information obtained under the Statistics Act that relates to any identifiable person, business or organization, unless consent has been given by the respondent or as permitted by the Statistics Act. Various confidentiality rules are applied to all data that are released or published to prevent the publication or disclosure of any information deemed confidential. If necessary, data are suppressed to prevent direct or residual disclosure of identifiable data.
The SSMCAF microdata file does not contain any personal identifiers. Individual responses and results for very small groups will not be published or shared with the Department of National Defence or the Canadian Armed Forces.
For aggregate or tabular data, confidentiality will be maintained by both cell collapsing and suppression of data where necessary.
Revisions and seasonal adjustment
This methodology does not apply.
Sampling error is defined as the error that results from estimating a population characteristic by measuring a portion of the population rather than the entire population. While the SSMCAF is a census, the estimates are not based on the entire population due to non-response. The methodology for this survey assumes that the respondents are a probability sample of the population.
One commonly used measure to quantify sampling error is the estimated standard error which is the square root of the estimated variance. Another commonly used measure is the coefficient of variation (CV) which is the estimated standard error divided by the estimate itself.
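The two measures above amount to one line of arithmetic each. The prevalence estimate and variance below are hypothetical numbers chosen only to show the calculation.

```python
# Hypothetical example: a prevalence estimate of 1.7% with an
# estimated variance of 0.000004 (not actual SSMCAF figures).
estimate = 0.017
variance = 4.0e-6

std_error = variance ** 0.5    # standard error: sqrt(variance) = 0.002
cv = std_error / estimate      # coefficient of variation: ~0.118 (11.8%)
```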
The overall response rate for the SSMCAF was 53%. The response rate for Regular Force was 61%, and the response rate for Primary Reserve was 36%.
Coverage errors consist of omissions, erroneous inclusions, duplications and misclassifications of units in the survey frame. Since they affect every estimate produced by the survey, they are one of the main sources of error. Coverage errors may cause a bias in the estimates and the effect can vary for different sub-groups of the population. This is a very difficult error to measure or quantify accurately.
For this survey, CAF members were able to call the STC survey help desk to become part of the survey, provided that they had not already been invited to participate and were in-scope. To determine if the member was eligible (in-scope) to complete the electronic questionnaire, help desk staff administered a short screening questionnaire. Over the course of collection, 0.3% of cases were added to the frame that had been identified at the start of the survey. This low percentage suggests that under-coverage of the population is low.
Measurement errors (sometimes referred to as response errors) occur when the response provided differs from the real value; such errors may be attributable to the respondent, the questionnaire, the collection method or the respondent's record-keeping system. Such errors may be random; however, if they are not random, they may result in a systematic bias.
In this survey, to understand how much coherence there was in the reported data, information on the initial file was compared to information obtained directly from respondents. Sex was the same 99.8% of the time, rank was the same 98.9% of the time and age, commonly known to be understated in many surveys, was the same 95.5% of the time.
Several measures were taken prior to collection to decrease the chance of response error. These measures included questionnaire review and testing using cognitive methods to detect questionnaire design problems or misinterpretation of instructions.
Processing errors are associated with the activities conducted once survey responses have been received. They include all data handling activities after collection and prior to estimation. Like all other sources of error, processing errors can be random in nature, and inflate the variance of the survey estimates, or systematic, and introduce bias. It is difficult to obtain direct measures of processing errors and their impact on data quality, especially since they are mixed in with other types of errors (non-response, measurement and coverage).
Data processing of the SSMCAF was completed in a number of steps, including data capture, verification, coding and editing. At each step, a picture of the output file was taken and verification was done by comparing files at the current and previous step. Completing these steps greatly reduced the chance that errors were introduced at the data processing stage.