Canadian Armed Forces Transition and Well-being Survey (CAFTWS)

Detailed information for 2016

Status: Active

Frequency: One time

Record number: 5242

The purpose of this survey is to better understand the transition to civilian life, its impact on the health of released Canadian Armed Forces members, as well as to provide information that may help to improve Department of National Defence and Veterans Affairs Canada programs and services offered to transitioning Canadian Armed Forces members and their families.

Data release - March 14, 2018

Description

The Canadian Armed Forces Transition and Well-being Survey (CAFTWS) is conducted by Statistics Canada in collaboration with the Department of National Defence. The objectives of the CAFTWS are to collect information on:

- Adjustment, experiences and challenges faced during the transition from military to civilian life by recently released (in 2016) CAF Regular Force members with two or more years of service, as well as the experiences of their spouses or partners;

- General health and well-being and the types of help received by recently released members from their spouse or partner;

- The types of challenges experienced by recently released members and their spouse or partner;

- The types of programs and services used by recently released CAF members and their families.

CAFTWS determinants of health include respondent-reported general health and well-being, social support, education, employment and general levels of income.

Subjects

  • Health

Data sources and methodology

Target population

The data for the sampling frame were provided by the Department of National Defence (DND). The frame included Regular Force members with at least 2 years of service (730 or more days) who were released in 2016, and excluded those who were released for misconduct or unsatisfactory service. The CAFTWS included two sample population components. The first target population, sampled from the DND frame, consisted of released members contacted for a computer-assisted interview (CAI). The second target population consisted of the spouses or partners of the released CAF members, who completed a self-complete, paper questionnaire; this population was identified during the initial CAI interview with released members.

Instrument design

This is the first time that the Canadian Armed Forces Transition and Well-being Survey (CAFTWS) was conducted by Statistics Canada. The questionnaire content development process relied heavily on pretested, previously developed questions included in a pilot study, drawn from the Life After Service Survey (LASS) 2016 content, the Canadian Community Health Survey (CCHS) 2016 content and Statistics Canada harmonized content, along with some new content on the use of existing programs and services available to released CAF members and their families.

Previously developed harmonized content was used or modified for the military context, to reduce development and testing time for generic blocks such as income, labour force participation and education. Questions from the Canadian Community Health Survey were used for blocks such as chronic conditions, depression, pain and discomfort, limitations in daily activities, and social support.

The questionnaire underwent qualitative testing by the Questionnaire Design Resource Centre (QDRC) to determine if any changes were necessary to ensure that respondents could understand and provide accurate responses to the questions being asked. Qualitative testing was also performed to test new and revised content with released CAF members.

A total of 24 face-to-face cognitive interviews were conducted with released CAF members while their spouses or partners were invited to fill out a paper questionnaire. The objective of these test interviews was to determine if the content was understood and if participants were willing and able to answer the questions. Another objective of the qualitative testing was to determine the tolerance level of response burden, given the length and topic areas of the CAFTWS.

Sampling

CAFTWS is a cross-sectional survey. The CAI component features a stratified systematic random sample of Canadian Armed Forces Regular Force personnel with 2 or more years of service who were released in 2016 and resided in one of the ten Canadian provinces.

Upon contact, each sampled CAI unit was asked whether they had a spouse or partner. If so, the spouse or partner was asked to participate in the CAFTWS by filling out a self-complete, paper questionnaire.

Data sources

Data collection for this reference period: 2017-04-01 to 2017-06-30

Responding to this survey is voluntary.

Data are collected directly from survey respondents.

The collection period was from April 1 to June 30, 2017 for the CAI survey component. Collection was managed by the regional offices in Montreal, Toronto, Edmonton and Halifax. The collection mode with released members was Computer Assisted Personal Interview (CAPI), with an option to complete the interview by telephone (CATI) if a CAPI visit was not possible.

The paper questionnaire was given to the spouse or partner at the time of the interview, so it could be completed while the interview with the released member was being conducted. If the spouse was not available at that time, the spousal package was left for the spouse to complete and mail back in a pre-addressed, postage-paid envelope. If the released member was contacted only by telephone, the paper questionnaire was mailed to their spouse. Paper questionnaires received by October 11, 2017 were counted as spouse/partner responses.

Prior to collection, interviewers underwent training to introduce pertinent content covered in the CAI and paper questionnaires and to familiarize them with the questions using interview scenarios. CAI help screens provided information to assist them in answering respondents' questions. For respondents located on an Indian Reserve, the regional offices determined the procedures for contacting them. When a valid mailing address was available, an introductory letter was mailed from the appropriate regional office. Up to two alternate addresses were provided to the regional offices to assist with tracing activities.

Interviewers followed a standard approach to introduce the agency, the name and purpose of the survey, the collaboration with the Department of National Defence, how the survey results would be used and when the results would become available. Respondents were told that their participation in the survey was voluntary and that their information would remain confidential. Proxy responses on behalf of persons selected into the sample were not accepted. Partial interviews were not accepted. On average, a complete interview lasted approximately 42 minutes.

Released members were considered in scope if their age on the day of the interview was within +/- 1 year of their age based on the date of birth on the frame. Released members were considered out of scope (OOS) if they had rejoined the CAF, if they were living outside of the ten provinces, or if they were deceased.

CAI responses were captured directly by an interviewer at the time of the interview using a computerized application. This reduced processing time, costs associated with data entry, transcription errors and data transmission. The data were encrypted to ensure confidentiality and transferred over a secure network for further processing.

For the spousal paper component, the spouse or partner of the released member was considered in scope if married to or living common law with the released member at the same address at the time of the interview (or separated but able and willing to participate). The completed spousal questionnaires were scanned for image capture and processing. The scanning, data processing and record layout for data capture were tested to ensure that question flows and data were captured correctly and consistently. The data were transferred using protocols for ensuring confidentiality over a secure network for further processing.


Error detection

Common tools based on BLAISE and SAS were used to collect and process this survey. Some editing was done directly in the CAI application at the time of the interview. Where the information entered was outside the range of expected values (too large or too small), or inconsistent with previous entries, the interviewer was prompted, through message screens on the computer, to modify the information. However, for some questions interviewers had the option of bypassing the edits, and of skipping questions if the respondent did not know the answer or refused to answer. Therefore, the response data were subjected to further edit and imputation processes once they arrived at head office.

Electronic text files containing the daily transmissions of completed cases were combined to create the "raw" survey file. At the end of collection, this file contained one record for each sampled individual. Before further processing, verification was performed to identify and eliminate potential duplicate records and to identify non-response and out-of-scope records.

A very small percentage of the sample was defined as out of scope at the time of the interview due to death, moving to an institution, moving outside of the country, or because the respondent had rejoined and was currently serving in the Canadian Armed Forces. A few other (non-responding) records were identified as out of scope after collection, based on new information provided by DND.

A criterion was defined for dropping non-response records: a respondent must have answered approximately 80% of the questionnaire for the record to be considered complete and usable.

Editing consisted of modifying the data at the individual variable level. The first step was to determine which items from the survey output needed to be kept on the survey master file. Subsequently, invalid characters were deleted and the data items were formatted appropriately. Text fields were stripped from the main files and written to a separate file.

The first type of error treated concerned the flow of the questionnaire, for questions that did not apply to the respondent (and should therefore not have been answered). In this case, a CAI computer edit automatically eliminated superfluous data by following the flow of the questionnaire implied by answers to previous and, in some cases, subsequent questions. For skips based on answered questions, all skipped questions were set to "Valid skip" (6, 96, 996, etc.). For skips based on "Don't know" or "Refusal", all skipped questions were set to "Not stated" (9, 99, 999, etc.). The remaining empty items were filled with a numeric value (9, 99, 999, etc., depending on variable length). These codes were reserved for processing purposes and meant that the item was "Not stated". In the case of the spousal paper questionnaire, the scanned data were edited using the same editing principles as the CAI responses, where applicable. Flow edits using a bottom-up approach were also applied to spousal data to ensure that no meaningful data would be lost.
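The reserved codes above follow a simple pattern driven by the variable's length: "Valid skip" is 6, 96, 996, … and "Not stated" is 9, 99, 999, …. As a minimal sketch of that convention (the function name is illustrative, not part of the production system):

```python
def reserved_code(var_length: int, kind: str) -> int:
    """Return the reserved processing code for a variable of the given length.

    'valid_skip' -> 6, 96, 996, ... (question legitimately skipped by the flow)
    'not_stated' -> 9, 99, 999, ... (don't know, refusal, or left empty)
    """
    base = 10 ** var_length
    if kind == "valid_skip":
        return base - 4      # 10 - 4 = 6, 100 - 4 = 96, 1000 - 4 = 996, ...
    if kind == "not_stated":
        return base - 1      # 10 - 1 = 9, 100 - 1 = 99, 1000 - 1 = 999, ...
    raise ValueError(f"unknown code kind: {kind}")
```

For example, a three-digit item skipped after a "Refusal" would be set to `reserved_code(3, "not_stated")`, i.e. 999.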

Imputation

No imputation methods were employed to complete missing blocks of survey data.

Estimation

Weights were produced for each responding unit in the CAI sample in order to allow estimates to be inferred to the population of interest. The initial weights were determined by the probabilities of being selected into the sample, which vary according to the population size and sample allocation in each stratum.

The initial weights of the non-responding units were redistributed to the responding units, within response-homogeneous groups (or RHGs). First, characteristics from the frame and from the data collection itself were studied to see which were relevant in predicting the propensity to respond. This was done using chi-squared tests of independence, and then by fitting various logistic regression models and cluster analyses. The weights of the responding units were then calibrated to match the original stratum totals, and the out of scope (OOS) units were then removed from the file. More details are provided in the "Non-response bias" section of this document.

The frame of Spousal units was identified during the CAI process, and each Spousal unit was initially assigned the final weight of its CAI counterpart. As in the previous step, a non-response weighting adjustment was performed so that the weights of the Spousal non-respondents were transferred to the Spousal respondents. As before, RHGs were formed by using the same series of statistical methods (chi-squared tests, logistic regressions, and cluster analyses), and weights adjustments were done within these groups. There were no OOS Spousal units. The adjusted Spousal weights were calibrated to match the original number of units on the Spousal frame.

For each set of weights produced (CAPI and Spousal), bootstrap weights were also produced by resampling the sampled units and applying the same weight adjustments to the bootstrap weights. Due to the high sampling fractions and to other design considerations, the traditional methods used to create the bootstrap weights were modified to ensure that they produced appropriate variance estimates. The bootstrap weights were used to produce estimates of the precision such as standard errors, variances and coefficients of variation (CVs).
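As a rough illustration of how replicate (bootstrap) weights yield precision measures such as standard errors and CVs, consider the following sketch. All data here are mock values, and the replicate weights are generated naively; in practice they come with the survey file and already carry the same non-response and calibration adjustments as the final weights:

```python
import numpy as np

rng = np.random.default_rng(0)
n, B = 200, 500                      # respondents, bootstrap replicates (mock sizes)
y = rng.binomial(1, 0.38, size=n)    # 1 = "found adjustment difficult" (mock data)
w = np.full(n, 5.0)                  # final survey weights (mock values)

# Replicate weights: here a simple Poisson-resampling stand-in for the
# survey's actual (design-adjusted) bootstrap weights.
bw = w * rng.poisson(1.0, size=(B, n))

est = np.average(y, weights=w)                  # weighted point estimate
reps = (bw * y).sum(axis=1) / bw.sum(axis=1)    # one estimate per replicate
se = reps.std(ddof=1)                           # bootstrap standard error
cv = 100 * se / est                             # coefficient of variation, in %
```

The spread of the replicate estimates around the point estimate is what drives the published variances and CVs.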

Quality evaluation

While rigorous quality assurance mechanisms are applied across all steps of the statistical process, validation and scrutiny of the data by statisticians are the ultimate quality checks prior to dissemination. Many validation measures were implemented, including:

a. Verification of estimates through cross-tabulations
b. Confrontation with other similar sources of data
c. Consultation with stakeholders internal to Statistics Canada
d. Consultation with external stakeholders
e. Coherence analysis based on Quality Indicators

Although the survey was designed to be carried out via personal interviews, telephone interviews were permitted under certain circumstances (remote geographical locations, for example). Analyses of the collection process revealed that the number of telephone interviews exceeded expectations. A post hoc Mode Effect analysis was performed, to evaluate whether this could have introduced some bias into the results. Only one variable (Regional office) was related to the mode; weight adjustments were performed to mitigate the impact of any potential bias.

Disclosure control

Statistics Canada is prohibited by law from releasing any information it collects which could identify any person, business, or organization, unless consent has been given by the respondent or as permitted by the Statistics Act. Various confidentiality rules are applied to all data that are released or published to prevent the publication or disclosure of any information deemed confidential. If necessary, data are suppressed to prevent direct or residual disclosure of identifiable data. Personal identifiers are removed from the file.

Revisions and seasonal adjustment

Not applicable.

Data accuracy

One of the CAFTWS's key results is how the released CAF member adjusted to civilian life. The data suggest that 38.0% of released CAF members found it "very difficult" or "moderately difficult" to adjust to civilian life. The associated Coefficient of Variation (CV) for this estimate was 3.0%. A 95% confidence interval for this estimate could be given as [35.9%, 40.2%].
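For a proportion with a known CV, the standard error and a normal-approximation 95% interval follow directly. A quick check of the published figures (small differences against the published interval come from rounding and the exact bootstrap variance estimator used):

```python
est = 0.380          # 38.0% found adjusting to civilian life difficult
cv = 0.030           # coefficient of variation (3.0%)

se = est * cv        # standard error = 0.0114
half = 1.96 * se     # half-width of a normal-approximation 95% interval
lo, hi = est - half, est + half
print(f"95% CI: [{100*lo:.1f}%, {100*hi:.1f}%]")   # close to the published [35.9%, 40.2%]
```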

Beyond estimated proportions at the national level, agreement with the frame can also be assessed: the classification of medical versus non-medical release self-reported by respondents matched what was identified in the DND survey frame 99% of the time.

A total of 1,414 CAF released members completed the CAI interview. The collection response rate for CAF released members (excluding the out of scope cases) was 75%. A total of 595 paper questionnaires were completed by spouses or partners of CAF released members. The collection completion rate for spouses and partners of CAF released members was 70%.

Non-sampling errors: Measurement errors (sometimes referred to as response errors) occur when the response provided differs from the true value; such errors may be attributable to the respondent, the interviewer, the questionnaire, the collection method or the respondent's record-keeping system. They may be random, or they may result in a systematic bias if they are not random.

It is very costly to accurately measure the level of response error and very few surveys conduct a post-survey evaluation. However, interviewer feedback and observation reports usually provide clues as to which questions may be problematic (poorly worded question, inadequate interviewer training, poor translation, technical jargon, no help text available, etc.).

Several measures were taken to reduce the level of response error. These measures included questionnaire review and testing using cognitive interviewing methods; use of highly skilled interviewers; extensive training of interviewers on survey procedures and content; observation and monitoring of interviewers to detect questionnaire design problems or misunderstanding of instructions; regular feedback and follow-up with interviewers during collection; and post-collection follow-up discussions.

In order to reduce bias resulting from non-response, unit non-response was treated through adjustments that transfer weights of non-respondents to respondents with similar modeled response propensities.

During collection, some of the sampled respondents were found to be out of scope (OOS), while others did not respond to the survey. The weights of the OOS units were used to adjust population totals for further processing. Non-response usually occurs when a respondent refuses to participate in the survey, provides unusable data, or cannot be reached for an interview. Weights of the non-responding units were redistributed to responding units with similar characteristics within response homogeneity groups (RHGs). In order to create the response homogeneity groups, a scoring method based on logistic regression models was used to estimate the propensity to respond. These response probabilities were then used to divide the sample into groups with similar response propensities.

The information for non-respondents was limited, but information was available from the frame, and limited information was available from the collection process itself.

The following variables were kept in the final logistic regression model by the stepwise selection method, i.e. they were significant in predicting the propensity to respond: Years of experience in the Armed Forces, the Regional Office responsible for collection, and the environment where the released member spent the majority of their career (land, sea or air).
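Using those three variables, the scoring-and-grouping step could be sketched as follows. The data and logistic coefficients below are purely illustrative stand-ins, not the fitted model; the point is how estimated propensities are cut into groups:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
# Frame variables retained by the stepwise selection (mock values):
years = rng.uniform(2, 35, n)            # years of service in the Armed Forces
office = rng.integers(0, 4, n)           # regional office responsible for collection
env = rng.integers(0, 3, n)              # career environment: 0=land, 1=sea, 2=air

# Propensity to respond from an illustrative (not the actual) logistic model.
logit = -0.5 + 0.04 * years + 0.2 * (office == 2) - 0.1 * (env == 1)
p_hat = 1 / (1 + np.exp(-logit))

# Cut the scores into quintiles to form response-homogeneous groups (RHGs).
edges = np.quantile(p_hat, [0.2, 0.4, 0.6, 0.8])
rhg = np.digitize(p_hat, edges)          # group label 0..4 for each sampled unit
```

Units in the same group are treated as having a similar chance of responding, so redistributing weight within a group limits non-response bias.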

An adjustment factor was calculated within each response group as follows:

    Adjustment factor = (sum of weights of all units entered in the model, respondents and non-respondents)
                        / (sum of weights of all responding units found during collection)

The initial weight is multiplied by this factor to produce an adjusted weight for the responding units. These weights were then calibrated, so that they add up to the stratum totals on the frame (after having adjusted these for stratum jumpers and OOS units). This gave the final master weight for all responding units.
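In code, the redistribution and calibration steps within one RHG could look like the following sketch (all numbers are mock values, not survey figures):

```python
import numpy as np

# Mock data for one response-homogeneous group.
w_init = np.array([10., 10., 12., 12., 14.])     # initial design weights
responded = np.array([True, False, True, True, False])

# Adjustment factor: all units in the group over the responding units.
factor = w_init.sum() / w_init[responded].sum()
w_adj = np.where(responded, w_init * factor, 0.0)   # non-respondents drop to 0

# Calibration: scale the responding weights so they add up to the stratum
# total on the frame (after adjusting it for stratum jumpers and OOS units).
stratum_total = 60.0                              # mock frame count
w_final = w_adj * stratum_total / w_adj.sum()
```

After the adjustment the responding weights carry the full weight of the group (here 58), and after calibration they sum to the frame's stratum total (here 60).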

Coverage errors consist of omissions, erroneous inclusions, duplications and misclassifications of units in the survey frame. Since they affect every estimate produced by the survey, they are one of the most important types of error; in the case of a census they may be the main source of error. Coverage errors may cause a bias in the estimates and the effect can vary for different sub-groups of the population. This is a very difficult error to measure or quantify accurately.

Efforts were made to reduce the risk of coverage error. The frame provided by DND was already judged to be of high quality, but some cleaning and processing was done after it was received at Statistics Canada. Information gathered during data collection (for example on out of scope cases) was also used to adjust the frame counts during data processing.

Processing errors are those associated with activities conducted once survey responses have been received. They include all data handling activities after collection and prior to estimation. Like other sources of error, processing errors can be random, inflating the variance of the survey's estimates, or systematic, introducing bias. It is difficult to obtain direct measures of processing errors and their impact on data quality, especially since they are mixed in with other types of errors (non-response, measurement and coverage).

Data processing of the CAFTWS was done in a number of steps, including data capture, verification, coding and editing. At each step, a snapshot of the output files was taken and a quick verification was made by comparing the files at the current step with those from the previous step. This greatly improved the data processing stage.
