Survey of Approaches to Educational Planning (SAEP)

Detailed information for 2013

Status:

Active

Frequency:

Occasional

Record number:

4442

The primary objective of the Survey of Approaches to Educational Planning (SAEP) is to improve our understanding of the processes by which the parents/guardians of children aged 0-17 marshal the monetary and non-monetary resources needed to successfully pursue post-secondary education.

Data release - October 29, 2014

Description

Statistics Canada was approached by Employment and Social Development Canada (ESDC) to conduct a cross-sectional survey which would examine how Canadians are preparing their children for post-secondary education.

Parents/guardians can participate in several ways. They can pro-actively plan for the financing of their children's post-secondary education by putting aside savings for that purpose and by actively participating in government sponsored mechanisms that facilitate savings for post-secondary education (i.e., Registered Education Savings Plans (RESPs), Canada Education Savings Grants (CESGs)). They can also prepare in a non-monetary fashion by encouraging, guiding and supporting their children through their early education, thereby laying the groundwork for successful participation in post-secondary education.

The primary objective of the Survey of Approaches to Educational Planning (SAEP) is to improve our understanding of the processes by which the parents/guardians of children aged 0-17 marshal the monetary and non-monetary resources needed to successfully pursue post-secondary education. These include financial saving strategies, parents/guardians' attitudes and values with respect to post-secondary education, and the child's demonstration of commitment to education through academic performance and extra-curricular involvement.

Subjects

  • Education finance
  • Education, training and learning

Data sources and methodology

Target population

The target population consists of children between the ages of 0 and 17 in the ten provinces.

Instrument design

SAEP was conducted in October and November 2013 as a supplement to the Labour Force Survey. The SAEP questionnaire consists of a subset of questions used in SAEP 2002. The implementation of the 2002 questionnaire followed an extensive reassessment of data requirements, consultations with academic experts and the client (Employment and Social Development Canada), questionnaire development, and questionnaire testing by the Questionnaire Development Resource Centre (QDRC). Testing by the QDRC was done via focus groups in both official languages.

Sampling

This is a sample survey with a cross-sectional design.

The SAEP was administered to a sub-sample of the dwellings in the October and November 2013 Labour Force Survey (LFS) sample, and therefore its sample design is closely tied to that of the LFS. Because the SAEP was a supplement to the LFS, the same frame was used.

The LFS is a monthly household survey whose sample of individuals is representative of the civilian, non-institutionalized population 15 years of age or older in Canada's ten provinces. Specifically excluded from the survey's coverage are residents of the Yukon, Northwest Territories and Nunavut, persons living on Indian Reserves, full-time members of the Canadian Armed Forces and inmates of institutions. These groups together represent an exclusion of approximately 2% of the population aged 15 or over.

The LFS follows a rotating panel sample design, in which households remain in the sample for six consecutive months. The total sample consists of six representative sub-samples or panels, and each month a panel is replaced after completing its six month stay in the survey.

The SAEP used three of the six rotation groups in the October LFS sample and two of the six rotation groups in the November LFS sample. For the SAEP, the coverage of the LFS was modified to include only those households with at least one child aged 17 and under and, within those households, only one randomly selected child.

Data sources

Data collection for this reference period: 2013-10-20 to 2013-12-07

Responding to this survey is voluntary.

Data are collected directly from survey respondents and derived from other Statistics Canada surveys.

The SAEP was collected by way of Computer Assisted Telephone Interviewing (CATI) and Computer Assisted Personal Interviewing (CAPI). The SAEP collected information from the person most knowledgeable (PMK) about the selected child's education. If the PMK was not available, the interviewer arranged for a convenient time to phone back. Proxy response was not allowed.

View the Questionnaire(s) and reporting guide(s).

Error detection

Some editing was done directly at the time of the interview using the computer-assisted program. Where the information entered was outside the range of expected values (too large or too small), or inconsistent with previous entries, the interviewer was prompted, through message screens on the computer, to modify the information. However, interviewers had the option of bypassing the edits, and of skipping questions if the respondent did not know the answer or refused to answer. Therefore, the response data were subject to further edit processes once they arrived at head office.

At head office, the first type of error searched for was questionnaire flow errors. These occur when questions that did not apply to the respondent were nonetheless answered; such responses were removed and replaced with a valid skip code. Questionnaire flow errors can also occur when the respondent was not asked questions that she/he should have been asked; in that case, a "not stated" code was assigned to the unanswered questions.

Further editing phases of processing involved identifying logically inconsistent items and modifying such conditions. Since the true value of each entry on the questionnaire was not known, errors could be identified only through recognition of obvious inconsistencies. If a value was suspicious but reasonable, the erroneous value may still have found its way into the survey statistics.

Where errors were detected, the erroneous items were either replaced by logically consistent values or assigned a "not stated" value. These changes were based on pre-specified criteria and involved the internal logic of the questionnaire. To make the changes consistently and automatically, logic tables were developed, programmed and run on all the survey data.
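
As an illustration of how such edit rules can be applied consistently and automatically, the sketch below shows one flow edit in Python. The field names (has_savings, resp_value) and the skip/not-stated codes are assumptions made for the example; the actual SAEP logic tables and code conventions are not reproduced here.

# Minimal sketch of a questionnaire-flow edit, with assumed field names and codes.
VALID_SKIP = 6   # assumed code for "question did not apply"
NOT_STATED = 9   # assumed code for "question should have been asked but was not"

def apply_flow_edits(record: dict) -> dict:
    """Apply a simple flow/consistency edit to one response record."""
    edited = dict(record)

    # If the lead-in says there are no savings, the RESP value question did not
    # apply: remove any answer and replace it with a valid skip code.
    if edited.get("has_savings") == "no" and edited.get("resp_value") is not None:
        edited["resp_value"] = VALID_SKIP

    # If the lead-in says there are savings but the RESP value question was
    # never asked, flag it as "not stated".
    if edited.get("has_savings") == "yes" and edited.get("resp_value") is None:
        edited["resp_value"] = NOT_STATED

    return edited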

Imputation

Total non-response was handled by adjusting the weights of households that responded to the survey to compensate for those that did not respond.

In most cases, partial non-response to the survey occurred when the respondent did not understand or misinterpreted a question, refused to answer a question, or could not recall the requested information.

For the 2013 SAEP, donor imputation was used to fill in missing household income and Registered Education Savings Plan (RESP) values. This was done in order to provide complete data, thereby allowing totals to be estimated (e.g., total group RESP contributions in Ontario).

Because the RESP value depends on previous questions (lead-ins), missing values in the lead-ins were imputed first. The lead-ins ask whether there are (or will be) savings and, if so, whether these savings are for the post-secondary education of children aged 0-17.

Imputation involved filling the missing values in household income, the RESP value and the lead-ins on a given record (the "recipient" record) using another record whose values were all known and whose characteristics were the "closest" (the "donor" record). The characteristics of each recipient were compared to those of each donor in a pool of donors. When a characteristic of the recipient matched that of a donor, the weight (value) of that characteristic was added to a "score" for that donor. In the end, the donor with the highest score was deemed the closest and was therefore chosen to fill the missing value(s) in the recipient. If more than one donor had the highest score, one was selected at random. The pool of donors was constructed so that the imputed value assigned to the recipient, in conjunction with the other non-imputed items from the recipient, would still pass the edits.
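
A minimal sketch of this score-based donor selection is shown below. The characteristic names, their weights and the imputed fields are illustrative assumptions; the production matching variables, weights and edit checks are not reproduced here.

# Minimal sketch of score-based donor imputation, with assumed characteristics and weights.
import random

CHAR_WEIGHTS = {"province": 3.0, "income_class": 2.0, "child_age_group": 1.0}  # assumed

def donor_score(recipient: dict, donor: dict) -> float:
    """Sum the weights of the characteristics on which recipient and donor agree."""
    return sum(w for c, w in CHAR_WEIGHTS.items() if recipient.get(c) == donor.get(c))

def impute_from_donor(recipient: dict, donor_pool: list,
                      fields=("hh_income", "resp_value")) -> dict:
    """Fill the recipient's missing fields from the closest donor in the pool."""
    best = max(donor_score(recipient, d) for d in donor_pool)
    closest = [d for d in donor_pool if donor_score(recipient, d) == best]
    donor = random.choice(closest)          # ties broken at random, as described above
    filled = dict(recipient)
    for f in fields:
        if filled.get(f) is None:
            filled[f] = donor[f]
    return filled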

Donor imputation was done in steps. First, household income was imputed, partly because household income is an important factor in the donor score when imputing key items. Then the RESP value and the corresponding lead-ins were imputed; these variables were imputed simultaneously for consistency and coherence.

Estimation

The principle behind estimation in a probability sample such as the LFS is that each person in the sample "represents", besides himself or herself, several other persons not in the sample. For example, in a simple random 2% sample of a population of 2,500 persons, each person in the sample represents 50 persons in the population.

The weighting phase is the step that calculates this number for each record. This weight appears on the microdata file and must be used to derive meaningful estimates from the survey. For example, to estimate the number of children whose parents/guardians have set aside savings for post-secondary education, the records for sampled individuals with that characteristic are selected and their weights are summed.
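
The sketch below illustrates this kind of weighted estimation, assuming a hypothetical record layout with a final weight (wtpm) and a savings indicator; the actual master-file variable names may differ.

# Minimal sketch of weighted estimation from microdata, with assumed variable names.
def estimate_children_with_savings(records: list) -> float:
    """Estimate the number of children with savings set aside by summing the
    weights of the in-sample records that have the characteristic."""
    return sum(r["wtpm"] for r in records if r.get("has_savings") == "yes")

# Example: three sampled children, each representing roughly 50 children,
# two of whom have savings set aside.
sample = [
    {"wtpm": 50.0, "has_savings": "yes"},
    {"wtpm": 49.5, "has_savings": "no"},
    {"wtpm": 50.5, "has_savings": "yes"},
]
print(estimate_children_with_savings(sample))  # 100.5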

The principles behind the calculation of the weights for the SAEP are identical to those for the LFS. However, five adjustments are made to the LFS sub-weights in order to derive a final weight for the individual records on the SAEP microdata file:

1) An adjustment to account for the use of a five-sixths sub-sample, instead of the full LFS sample.

2) An adjustment to account for the additional non-response to the supplementary survey (i.e., non-response to the SAEP among households with at least one child aged 0 to 17 years that did respond to the LFS, or for which the previous month's LFS data were brought forward). The procedure is similar to the LFS non-response weight adjustment, but the groupings are based on province, employment insurance economic region, rotation group, design type, urban versus rural area, census metropolitan area versus non-census metropolitan area, type of dwelling, economic family type, household size, and parent/guardian characteristics such as education, labour force status and occupation. Since households without children are out of scope (and therefore not selected into the SAEP), their weights are not affected by this step.

3) An adjustment for the total number of households (i.e., those with or without at least one child aged 0 to 17 years) by household size (1, 2 and 3+ people) and by province, according to independent estimates.

4) An adjustment to account for the random selection of one child from the selected household. In particular, the weight of the selected child is multiplied by the number of children in the household, up to a maximum (cap) of four children.

5) An adjustment for the number of children by province, sex, and age group (i.e., 0 to 5, 6 to 12, 13 to 15 and 16 to 18 years), according to independent estimates.

The resulting weight, WTPM, is the final weight which appears on the SAEP master file.
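
A rough sketch of how these adjustments could be composed into a final weight is given below. The multiplicative chaining, the function names and the parameters are assumptions made for illustration; the actual non-response and calibration factors are produced by the survey's own estimation system.

# Illustrative sketch of composing the five adjustments into a final weight (assumed form).
def child_selection_factor(n_children_in_household: int, cap: int = 4) -> int:
    """Step 4: multiply the selected child's weight by the number of children
    in the household, up to a maximum (cap) of four."""
    return min(n_children_in_household, cap)

def final_weight(lfs_subweight: float,
                 subsample_factor: float,        # step 1: five-sixths sub-sample
                 nonresponse_factor: float,      # step 2: SAEP non-response
                 household_calibration: float,   # step 3: household totals by size and province
                 n_children: int,                # step 4: random selection of one child
                 child_calibration: float) -> float:  # step 5: child totals by province, sex, age group
    """Compose the five adjustments into the final SAEP weight (WTPM)."""
    return (lfs_subweight * subsample_factor * nonresponse_factor
            * household_calibration * child_selection_factor(n_children)
            * child_calibration)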

Quality evaluation

Data were compared with other published sources, such as the Access and Support to Education and Training Survey (record number 5151) and the 2002 Survey of Approaches to Educational Planning.

For the CANSIM tables, the data were released according to the coefficient of variation release guidelines. The quality level of an estimate was determined only on the basis of sampling error, as reflected by the coefficient of variation.

Disclosure control

Statistics Canada is prohibited by law from releasing any information it collects that could identify any person, business, or organization, unless consent has been given by the respondent or as permitted by the Statistics Act. Various confidentiality rules are applied to all data that are released or published to prevent the publication or disclosure of any information deemed confidential. If necessary, data are suppressed to prevent direct or residual disclosure of identifiable data.

In order to prevent any data disclosure, confidentiality analysis is done using the Statistics Canada Generalized Disclosure Control System (G-Confid). G-Confid is used for primary suppression (direct disclosure) as well as for secondary suppression (residual disclosure). Direct disclosure occurs when the value in a tabulation cell is composed of or dominated by a few enterprises, while residual disclosure occurs when confidential information can be derived indirectly by piecing together information from different sources or data series.
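
For illustration only, the sketch below applies a generic minimum-contributor rule for primary suppression; it is not G-Confid's actual rules or parameters, and the threshold is an assumption.

# Generic sketch of primary suppression by a minimum-contributor rule (assumed threshold).
def suppress_cells(table: dict, min_contributors: int = 5) -> dict:
    """Suppress any cell whose value is based on fewer than min_contributors
    respondents (primary suppression); secondary suppression of further cells
    would then be needed to prevent residual disclosure."""
    return {cell: (value if count >= min_contributors else None)
            for cell, (value, count) in table.items()}

# Example: the "Region B" cell is suppressed because only 3 respondents contribute to it.
table = {"Region A": (1250.0, 42), "Region B": (310.0, 3)}
print(suppress_cells(table))  # {'Region A': 1250.0, 'Region B': None}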

Revisions and seasonal adjustment

This methodology does not apply to this survey.

Data accuracy

RESPONSE RATE:
The response rate for the 2013 SAEP was 79.7%.

SAMPLING ERRORS:
Sampling error occurs because population estimates are calculated from a sample of the population rather than the entire population. Sampling error depends on factors such as sample size, sampling design, and the method of estimation.

The basis for measuring the potential size of sampling errors is the standard error of the estimates calculated from survey data. Because of the large variety of estimates that can be produced from a survey, the standard error of an estimate is usually expressed relative to the estimate to which it pertains. This resulting measure, known as the coefficient of variation (CV) of an estimate, is obtained by dividing the standard error of the estimate by the estimate itself and is expressed as a percentage of the estimate.

Users should determine the coefficient of variation of an estimate in order to get an indication of its quality. For instance, if the coefficient of variation is less than 16%, the estimate can be used without restriction; if it is between 16% and 33%, the estimate should be used with caution; and if it is 33% or more, the estimate cannot be released in any form.
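
The sketch below computes a coefficient of variation and classifies an estimate according to the thresholds quoted above; the function names and category labels are illustrative.

# Minimal sketch of the CV calculation and the release categories described above.
def coefficient_of_variation(standard_error: float, estimate: float) -> float:
    """Standard error divided by the estimate, expressed as a percentage."""
    return 100.0 * standard_error / estimate

def release_category(cv_percent: float) -> str:
    """Classify an estimate according to the release guidelines quoted above."""
    if cv_percent < 16.0:
        return "acceptable: can be used without restriction"
    elif cv_percent < 33.0:
        return "marginal: use with caution"
    else:
        return "unacceptable: not releasable"

print(release_category(coefficient_of_variation(1200, 10000)))  # CV = 12% -> acceptable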

In addition, non-sampling errors may occur at almost every phase of the survey operation. Considerable time and effort were devoted to reducing non-sampling errors in the survey, and quality assurance measures were implemented at each step of the data collection and processing cycle to monitor the quality of the data.
