Canadian Community Health Survey - Mental Health and Well-being - Canadian Forces (CCHS-CF)

Detailed information for 2002






The Canadian Community Health Survey is a national health survey that asks Canadians about their health and well-being, the factors that affect their health and their use of health care services. This additional component was administered to a representative sample of the Canadian Forces (CF) population.

Data release - September 5, 2003


In Cycle 1.2 of the CCHS, an additional component was administered to a representative sample of the Canadian Forces (CF) population. One of the exclusions of the core CCHS target population is full-time members of the regular Canadian Forces. Because the Department of National Defence (DND) wanted reliable, comparable information on the Canadian Forces, it contracted Statistics Canada to undertake a special survey component with a representative sample of both regular and reservist members.

Cycle 1.2 mainly measures aspects of the mental health of Canadians; the cycle was therefore named the "Canadian Community Health Survey - Mental Health and Well-being". The primary objectives of the CCHS Mental Health and Well-being are to:

- Provide timely, reliable, cross-sectional estimates of mental health determinants, mental health status and mental health system utilization across Canada;
- Determine prevalence rates of selected mental disorders to assess the impact and burden of illness;
- Juxtapose access and utilization of mental health services with respect to perceived needs; and
- Assess the disability that mental health problems impose on individuals and society.

As a key component of the Population Health Surveys Program of Statistics Canada, the CCHS helps fulfil broader requirements for information on health issues in Canada, namely to:

- Aid in the development of public policy;
- Provide data for analytic studies that will assist in understanding the determinants of health;
- Collect data on the economic, social, demographic, occupational and environmental correlates of health;
- Increase the understanding of the relationship between health status and health care utilization.

In Canada, the primary use of the data is for health surveillance, such as measuring the prevalence of disease, and for other forms of health research. The data are used extensively by the research community and other health professionals. Federal and provincial departments of health and human resources, social service agencies, and other government agencies use the information collected from respondents to plan, implement and evaluate programs to improve health and the efficiency of health services. Non-profit health organizations and academic researchers use the information to conduct research aimed at improving health. The media use the survey results to raise awareness about health, an issue of concern to all.

The CCHS-CF and CCHS Cycle 1.2 cover a number of the same disorders (major depressive episode, social phobia, panic disorder), but the CF component also includes generalized anxiety disorder (GAD) and post-traumatic stress disorder (PTSD), and excludes agoraphobia and mania.

The Mental Health and Well-being -- CF Component has objectives analogous to those of the CCHS: to estimate the prevalence of certain mental disorders in the Canadian Forces and to record members' use of mental health services. This information is intended to assist in determining mental health care needs in the CF and to give DND planners the crucial data they need to ensure adequate resources.

Reference period: May to December 2002


  • Health

Data sources and methodology

Target population

The target population was all full-time regular members of the Canadian Forces, plus reservists who had paraded at least once in the previous six months. As of May 2001, the Canadian Forces had approximately 57,000 full-time regular force members and 24,000 reserve force members.

Instrument design

The content for this study is partly based on a selection of mental disorders from the World Mental Health Survey (WMH2000). The other content areas come from existing sources such as the Canadian Community Health Survey (Cycle 1.1) and other special studies. An Expert Group of mental health professionals guided the content development and strategic direction of the study. As well, the survey program is supported by a standing Advisory Committee comprising representatives of provincial and territorial ministries of health, Health Canada and the Canadian Institute for Health Information (CIHI).

Qualitative testing for Cycle 1.2 took place in July and August 2001 to evaluate respondents' reactions to the sensitive subject matter, their ability to understand the questions and their willingness to respond.

Due to the complex nature of this survey content as well as to the rather probing nature of the questions, qualitative testing was undertaken for both the CCHS and separately for the CF Component. This qualitative testing consisted of one-on-one interviews and was organized by the Questionnaire Design Resource Centre at Statistics Canada. Its main purpose was to test acceptance of survey content and procedures and to test questionnaire wording and flow. The qualitative testing specific to the CF Component was principally testing acceptance of the PTSD survey content and procedures.

The CF Component also conducted focus group testing. The purpose of this testing was to: obtain information on respondents' willingness and ability to provide the information and determine what would encourage response; determine how CF members would react to Statistics Canada, an external agency, collecting the survey information; determine when and how best to contact members; and determine what measures members would like to see to ensure the confidentiality of the information, and how they wanted the results to be made available.

Pilot testing of both the CCHS and the CF Component was also conducted. The objectives of these tests were to: determine respondents' willingness and ability to provide the information; provide an indication of the questionnaire flow, the length of the interview and the response rates; and test the computer application and determine the effectiveness of the procedures designed to contact respondents (principally for the CF Component).

Data collection took place monthly between May and December 2002. This schedule spread the workload in the field and allowed more time to contact respondents who might be departing for, or returning from, field deployments or training courses. The vast majority of CF Component interviews were conducted face-to-face during working hours, in private on-base rooms reserved by DND for survey interviewing.


This is a sample survey with a cross-sectional design.

It was decided that 5,000 responding full-time members and 3,000 responding reservists would be required at survey completion to support the intended analyses at satisfactory precision levels. To ensure the survey ended with the targeted responding populations, the sample sizes were enlarged before data collection to account for out-of-scope persons and anticipated non-response. For this reason, initial samples of 8,000 full-time members and 4,800 reservists were selected.

The sample frame used for this component was an administrative list of all regular and reserve force members as of May 2001. This list was provided to Statistics Canada by the Department of National Defence (DND) and was taken from the Canadian Forces Peoplesoft Human Resource Database. This sample frame contained some information on each person such as gender, region, rank, military base, environment, mother tongue, etc.

For this survey, each target population (regular force members and reservists) was stratified by gender and rank. To avoid very small cells the rank characteristic was collapsed into three categories for the male group and two categories for the female group.

Once the stratification criteria were established, the total sample had to be allocated among the strata. For this survey it was decided that a good balance was required between the reliability of the estimates for each design stratum and the reliability for the entire target population.

The sample was selected from the frame using systematic sampling within each design stratum. Within each stratum, the units were sorted by region (Atlantic, Quebec, Ontario, Prairies) and CF environment (land, air, sea) before the systematic selection was applied. This approach guaranteed a proportional representation of units for each region and CF environment, even though these criteria were not explicitly included in the stratification.
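The sorted systematic selection described above can be sketched in a few lines. This is an illustrative sketch only, not the production procedure; the frame contents and function name are invented for the example.

```python
import random

def systematic_sample(frame, n):
    """Select n units from a sorted frame using systematic sampling.

    `frame` is assumed to be pre-sorted by region and CF environment,
    so taking every `step`-th unit yields a proportional spread over
    those characteristics without explicitly stratifying on them.
    """
    step = len(frame) / n            # sampling interval
    start = random.uniform(0, step)  # random start in [0, step)
    # Walk through the frame in equal steps from the random start.
    return [frame[int(start + i * step)] for i in range(n)]

# Example: a stratum of 1,000 frame records, sample of 50.
stratum = [f"member_{i:04d}" for i in range(1000)]
sample = systematic_sample(stratum, 50)
```

Because the frame is sorted before selection, each region and environment is represented roughly in proportion to its share of the stratum.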

Because of the volume of work for the interviewers and the limited availability of CF members to participate in the survey, the sample of CF regular force members was divided into two collection periods: May to August 2002 and September to November 2002. The allocation over the two collection periods was not done randomly, but according to the availability of each selected person. This was possible because, once the sample was selected, all selected persons were traced to determine their current contact information. During that tracing, each member's availability was recorded, and the sample was then assigned to one of the two collection periods using that information. For reserve members of the CF there was only one collection period (May to November 2002), so the entire reserve sample was sent to the field in May 2002.

Also, for both the regular and reserve samples, a random 10% of the initial sample was put aside and not sent into the field for collection. This was done to guard against a higher response rate than originally forecast, which would have incurred a collection budget deficit. If response rates had been low, the held-back sample was to be sent into the field in the second half of collection. In the end, as the response rates attained were very good, it was not necessary to send the held-back sample into the field for either the regular or the reserve sample.

Data sources

Data collection for this reference period: 2002-05-09 to 2002-12-31

Responding to this survey is voluntary.

Data are collected directly from survey respondents.


The computer-assisted interviewing method (CAI) was used and the questionnaire was programmed in BLAISE.

CAI offers a number of data quality advantages over other collection methods. First, question text, including reference periods and pronouns, is customised automatically based on factors such as the age and sex of the respondent, the date of the interview and answers to previous questions.

Second, edits to check for inconsistent answers or out-of-range responses are applied automatically and on-screen prompts are shown when an invalid entry is recorded. Immediate feedback is given to the respondent and the interviewer is able to correct any inconsistencies.

Third, questions that are not applicable to the respondent are skipped automatically.


Error detection

Most editing of the data was performed at the time of the interview by the computer-assisted interviewing (CAI) application. It was not possible for interviewers to enter out-of-range values and flow errors were controlled through programmed skip patterns. Also, CAI ensured that questions that did not apply to the respondent were not asked.

In response to some types of inconsistent or unusual reporting, warning messages were invoked but no corrective action was taken at the time of the interview. Where appropriate, edits were instead developed to be performed after data collection at Head Office. Inconsistencies were usually corrected by setting one or both of the variables in question to "not stated".


Due to some technical problems in certain skip patterns of the suicide module, some respondents were not asked the questions required for the calculation of the derived variables '12-month suicidal thought' and '12-month suicidal attempt'. Consequently, important information was missing for those individuals (5.02% of all respondents for the '12-month suicidal thought' and 0.60% for the '12-month suicidal attempt'). Moreover, because of their profiles, those individuals were more likely to have had a 12-month suicidal thought and/or attempt, so leaving their values missing would have resulted in an underestimation of the prevalence. To fill in these missing responses, values were imputed using the approach described below.

Two methods of imputation were used: a deterministic method and one based on a logistic regression model. As it was possible to derive the missing value directly from other responses for some respondents, a deterministic imputation method was used first. This was the case for all missing values for the 12-month suicidal attempt and for about one fourth of the missing values for the 12-month suicidal thought. For the remaining missing values of the 12-month suicidal thought, a logistic regression imputation method was used. The method consisted of fitting a logistic regression model between the variable to impute (the 12-month suicidal thought) and correlated characteristics, using respondents without missing values who were similar to those to impute. Using the fitted model, a probability of response (yes or no) was calculated for each respondent who needed imputation; a response was then imputed based on that probability.
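The final model-based step can be illustrated with a minimal sketch. The function name, coefficients and predictor values below are purely hypothetical stand-ins for a logistic model fitted on similar respondents without missing values; the real imputation used the survey's own fitted model.

```python
import math
import random

def impute_missing_response(predictors, coefs, intercept,
                            rng=random.Random(42)):
    """Impute a missing yes/no response from a fitted logistic model.

    `coefs` and `intercept` stand in for a model estimated on similar
    respondents with complete data; the values used below are purely
    illustrative.
    """
    # Linear predictor, then logistic transform to a probability of "yes".
    z = intercept + sum(b * x for b, x in zip(coefs, predictors))
    p_yes = 1.0 / (1.0 + math.exp(-z))
    # Impute by drawing from a Bernoulli(p_yes) distribution.
    return "yes" if rng.random() < p_yes else "no"

# Hypothetical respondent with two correlated characteristics.
imputed = impute_missing_response(predictors=[1.0, 0.3],
                                  coefs=[0.8, -1.2], intercept=-0.5)
```

Drawing from the modelled probability, rather than always imputing the more likely category, preserves the variability of the imputed variable.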


The principle behind estimation in a probability sample such as Cycle 1.2 is that each person in the sample "represents", besides himself or herself, several other persons not in the sample. For example, in a simple random 2% sample of the population, each person in the sample represents 50 persons in the population; each person is then said to have a weight of 50.

The weighting phase calculates each person's sampling weight. This weight appears on the microdata file and must be used to derive meaningful estimates from the survey. For example, to estimate the number of individuals who had a major depressive episode, select the records of sampled individuals with that characteristic and sum their weights. In order for estimates produced from survey data to be representative of the covered population, and not just of the sample itself, users must incorporate the survey weights into their calculations. A survey weight is attached to each person in the final sample, that is, each person who answered the survey, and corresponds to the number of persons in the population that the respondent represents.
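The weighted-count example can be sketched directly; the record layout and field names below are illustrative, not the actual microdata variable names.

```python
# Each respondent record carries a survey weight: the number of persons
# in the population that this respondent represents.
# (Field names and values are illustrative, not the real variables.)
records = [
    {"weight": 48.2, "major_depressive_episode": True},
    {"weight": 51.7, "major_depressive_episode": False},
    {"weight": 49.5, "major_depressive_episode": True},
]

# Estimated number of persons with the characteristic: the sum of the
# weights of the records that have it (here 48.2 + 49.5).
estimate = sum(r["weight"] for r in records
               if r["major_depressive_episode"])
```

Counting records instead of summing weights would estimate only the sample, not the population.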

List of adjustments in the weighting (refer to User Guide for more information):

0 Initial weight
1 Sample increase or decrease
2 Stabilization
3 Removal of out-of-scope units
4 Household non-response
5 Creation of person level weight
6 Person non-response
7 Post-stratification
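How these adjustments combine into a final weight can be illustrated with a small sketch. The step names follow the list above, but every factor value is invented for the example; the actual derivation of each adjustment is documented in the User Guide.

```python
# A respondent's final weight is the initial design weight carried
# through a chain of multiplicative adjustments (step names from the
# list above; all factor values are purely illustrative).
adjustments = {
    "sample_increase_or_decrease": 1.00,
    "stabilization": 1.11,          # e.g. compensating the held-back 10%
    "out_of_scope_removal": 1.00,
    "household_nonresponse": 1.15,
    "person_level_weight": 1.00,
    "person_nonresponse": 1.08,
    "post_stratification": 0.97,
}

weight = 150.0  # initial design weight (illustrative)
for factor in adjustments.values():
    weight *= factor  # apply each adjustment in sequence
```

Each step rescales the weight so that the respondents who remain continue to represent the full target population.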

In order to determine the quality of an estimate and to calculate its coefficient of variation (CV), the standard deviation must be calculated. Confidence intervals also require the standard deviation of the estimate.

The CCHS uses a multi-stage survey design, which means that there is no simple formula for calculating variance estimates; an approximate method is needed. The bootstrap method is used because it takes the sample design information into account when calculating variance estimates and, with the Bootvar program, remains fairly easy for users to apply.

The bootstrap re-sampling method used in the CCHS involves the selection of simple random samples known as replicates, and the calculation of the variation in the estimates from replicate to replicate. In each stratum, a simple random sample of (n-1) of the n clusters is selected with replacement to form a replicate. Note that since the selection is with replacement, a cluster may be chosen more than once. In each replicate, the survey weight for each record in the (n-1) selected clusters is recalculated. These weights are then post-stratified according to demographic information in the same way as the sampling design weights in order to obtain the final bootstrap weights.

The entire process (selecting simple random samples, recalculating and post-stratifying weights for each stratum) is repeated B times, where B is large. The CCHS typically uses B=500, to produce 500 bootstrap weights. To obtain the bootstrap variance estimator, the point estimate for each of the B samples must be calculated. The standard deviation of these estimates is the bootstrap variance estimator. Statistics Canada has developed a program that can perform all of these calculations for the user: the Bootvar program.
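A simplified version of this replicate-and-recompute idea, for a single stratum and a single weighted total, might look like the sketch below. The real procedure recalculates and post-stratifies full record-level weights for every replicate, which this sketch omits; the cluster totals are invented for the example.

```python
import random
import statistics

def bootstrap_variance(cluster_totals, B=500, seed=1):
    """Approximate bootstrap variance of a stratum's weighted total.

    `cluster_totals` holds one weighted total per cluster in a single
    stratum. Each replicate draws (n - 1) clusters with replacement and
    rescales by n/(n - 1), mimicking the reweighting described above.
    """
    rng = random.Random(seed)
    n = len(cluster_totals)
    replicate_totals = []
    for _ in range(B):
        # A cluster may be drawn more than once (with replacement).
        draw = rng.choices(cluster_totals, k=n - 1)
        replicate_totals.append(sum(draw) * n / (n - 1))
    # The spread of the replicate estimates approximates the sampling
    # variance of the full-sample estimate.
    return statistics.variance(replicate_totals)

var = bootstrap_variance([120.5, 98.0, 143.2, 110.9, 131.6])
```

In practice users do not implement this themselves: the Bootvar program applies the 500 bootstrap weights supplied with the microdata file.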

Quality evaluation

Survey design has a profound effect on the ability to meet the objectives of the survey, which are listed under "Survey Description". To meet these objectives, a Steering Committee and an Advisory Board, comprised of authorities from the provincial and territorial ministries of health, the Canadian Institute for Health Information and Health Canada, determined the concepts and focus. An expert group was convened to advise on the measures needed to obtain the results envisioned by the Steering Committee and Advisory Board, and to recommend proven collection vehicles and indices. The resulting data are recognized as valid measures of contemporary concepts.

Actions have been taken to reduce non-sampling errors to a minimum.

High response rates are essential for quality data. To reduce the number of non-response cases, the interviewers are all extensively trained by Statistics Canada, provided with detailed Interviewer Manuals, and are under the direction of interviewer supervisors. A maximum recommended assignment size by interviewer was calculated based on test results. Interviewers make every effort to contact respondents.

Refusals were followed up by senior interviewers, project supervisors or by other interviewers to encourage respondents to participate in the survey. In addition, to maximize the response rate, non-response cases were also followed up in subsequent collection periods.

For Cycle 1.2, the total workload imposed by the lengthy interview, complex content and, in some cases, heavy respondent burden was anticipated to pose a potential challenge for the data collection infrastructure in place. To ensure the success of collection operations, a number of strategies were put into place. In addition to customized interviewer training and special support regarding mental health issues, a monitoring system was put in place to ensure that data quality was maintained during collection. Various aspects of the interview process were monitored at the interviewer level, such as average interview time and item non-response. Regular weekly feedback from Head Office to the Regional Offices helped maintain good data quality and correct problems as they occurred. A validation process was also put in place in the field to monitor the quality of the work performed by the interviewers.

The questionnaires were developed in consultation with Statistics Canada's Questionnaire Design Resource Centre and were reviewed and tested in the field in pre-tests and focus groups in both official languages.

Data were collected using the computer-assisted interviewing (CAI) system, which has built-in edits and skip patterns that were extensively tested. The resulting data have been determined to provide accurate measures of the concepts being surveyed.

A user guide that includes Data Dictionaries is provided to users to help explain validity and reliability concepts, variance, and to aid in the analysis of the data. To account for survey design effects, standard errors and coefficients of variation may be estimated with the bootstrap technique.

Disclosure control

Statistics Canada is prohibited by law from releasing any data which would divulge information obtained under the Statistics Act that relates to any identifiable person, business or organization without the prior knowledge or the consent in writing of that person, business or organization. Various confidentiality rules are applied to all data that are released or published to prevent the publication or disclosure of any information deemed confidential. If necessary, data are suppressed to prevent direct or residual disclosure of identifiable data.

Before releasing and/or publishing any estimate from these files, users should first determine the number of sampled respondents who contribute to the calculation of the estimate. If this number is less than 30, the weighted estimate should not be released regardless of the value of the coefficient of variation for this estimate. For weighted estimates based on sample sizes of 30 or more, users should determine the coefficient of variation of the rounded estimate and follow the guidelines below.
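The suppression rule can be encoded directly. The n >= 30 rule comes from the text above; the CV cut-offs used here (16.5% and 33.3%) are the thresholds commonly cited in Statistics Canada release guidelines and are an assumption that should be confirmed against this survey's User Guide.

```python
def release_category(n_respondents, cv_percent):
    """Classify a weighted estimate under typical release rules.

    The hard n >= 30 suppression rule is stated in the documentation;
    the CV thresholds below are assumed, illustrative cut-offs.
    """
    if n_respondents < 30:
        return "suppress"                  # too few contributing respondents
    if cv_percent <= 16.5:
        return "release"                   # acceptable sampling variability
    if cv_percent <= 33.3:
        return "release with caution (E)"  # marginal: flag the estimate
    return "suppress (F)"                  # unacceptable sampling variability
```

Note that the respondent count is checked first: even an estimate with a very small CV is suppressed if fewer than 30 sampled respondents contribute to it.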

Estimates in the main body of a statistical table are rounded to the nearest hundred units using normal rounding: if the first (or only) digit dropped is 0 to 4, the last digit retained is unchanged; if it is 5 to 9, the last digit retained is raised by one. Marginal sub-totals and totals in statistical tables are derived from their corresponding unrounded components and then rounded themselves to the nearest 100 units in the same way. Averages, proportions, rates and percentages are computed from unrounded components (for example, numerators and/or denominators) and then rounded to one decimal using normal rounding. Sums and differences of aggregates (or ratios) are likewise derived from their unrounded components and then rounded to the nearest 100 units (or the nearest one decimal). Under no circumstances are unrounded estimates released, published or otherwise: unrounded estimates imply greater precision than actually exists.
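Note that "normal rounding" (a dropped leading digit of 5 to 9 rounds up) differs from the round-half-to-even behaviour of many library rounding functions, including Python's built-in round(). A minimal sketch that follows the rule as described:

```python
import math

def normal_round(value, base=100):
    """Round a non-negative estimate to the nearest `base` units using
    normal rounding: a dropped leading digit of 5-9 rounds up.

    Python's built-in round() uses round-half-to-even (150 -> 200 but
    250 -> 200), so it is deliberately avoided here.
    """
    return int(math.floor(value / base + 0.5)) * base

def round_one_decimal(x):
    """Normal rounding to one decimal, for rates and proportions."""
    return math.floor(x * 10 + 0.5) / 10

# Estimates round to the nearest hundred units:
assert normal_round(1249) == 1200
assert normal_round(1250) == 1300
```

Rounding only the final figure, from unrounded components, matches the rule above that sub-totals and ratios are computed before any rounding is applied.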

Revisions and seasonal adjustment

This methodology does not apply to this survey.

Data accuracy

A total of 5,155 Regular Force members were interviewed, yielding a response rate of 79.5%. For the Reserve Force the analogous numbers were 3,286 members interviewed and a response rate of 83.5%.

Most editing for the CCHS is conducted at the time of the interview by the Computer Assisted Interview (CAI) application. Some types of inconsistent or unusual reporting were edited after data collection at Head Office. Inconsistencies were usually corrected by setting answers to questions to 'not stated'.

Most of the reported information for Cycle 1.2 on mental health was left as reported. Because of the potential sensitivity of the content, it was felt inappropriate to question respondents about inconsistent answers during the interview, and the information was left on the file as collected, allowing researchers to deal with it as they see fit, given that the answers could be subject to different interpretations.

A detailed User Guide was developed to provide all the relevant information on the survey (background, methodology, data quality, data dictionary, derived variable specifications, etc.).

