Aboriginal Peoples Survey (APS)

Detailed information for 2012




Every 5 years

Record number:


The purpose of the Aboriginal Peoples Survey (APS) is to provide data on the social and economic conditions of Aboriginal people in Canada. More specifically, its purpose is to identify the needs of Aboriginal people and focus on issues such as education, employment, health, language, income, housing and mobility.

Data release - November 25, 2013


The 2012 Aboriginal Peoples Survey (APS) is a national survey of First Nations people living off reserve, Métis and Inuit aged six years and over. The 2012 APS represents the fourth cycle of the survey and focuses on the topics of education, employment and health. It also collects information on language, income, housing and mobility.

The 2012 APS collects unique and detailed data on education, employment and health, data which are not available from any other source. For example, although the 2011 National Household Survey collected data on level of education and on major field of study, the 2012 APS addresses additional topics such as number of schools attended, exposure to Aboriginal languages, school climate and support, frequency of reading, participation in extra-curricular activities, peer influences and plans for further schooling.

The APS provides key statistics to inform policy and programming activities aimed at improving the well-being of Aboriginal Peoples. It is a valuable source of information for a variety of stakeholders including Aboriginal organizations, communities, service providers, researchers, governments and the general public.

The survey was carried out by Statistics Canada with funding provided by three federal departments: Aboriginal Affairs and Northern Development Canada, Health Canada and Employment and Social Development Canada (formerly called Human Resources and Skills Development Canada).


  • Aboriginal peoples
  • Aboriginal society and community
  • Health and well-being
  • Work, income and spending

Data sources and methodology

Target population

The target population of the 2012 Aboriginal Peoples Survey (APS) was composed of the Aboriginal identity population of Canada, 6 years of age and over as of February 1, 2012, living in private dwellings excluding people living on Indian reserves and settlements and in certain First Nations communities in Yukon and the Northwest Territories (NWT). The concept of "Aboriginal identity" refers to those persons who reported identifying with at least one Aboriginal group, namely, First Nations (North American Indian), Métis or Inuit, those who reported being a Status Indian (Registered Indian or Treaty Indian, as defined by the Indian Act of Canada), or those who reported being a member of a First Nation or Indian band.

The APS selected its sample from reported answers to the 2011 National Household Survey (NHS) questionnaire. More precisely, the APS sample was selected from individuals who answered "Yes" to any one of the three NHS questions defining the identity population (questions 18, 20 and 21) or who reported Aboriginal ancestry in question 17. Individuals with Aboriginal ancestry who did not report Aboriginal identity are defined as the "Aboriginal ancestry-only population". Although the ancestry-only population was not part of the 2012 APS target population (in contrast to the 2006 APS), it was still sampled, because slightly less than one-third of the ancestry-only population based on the 2006 Census long form had reported an identity on the 2006 APS. Therefore, unlike the target population, the sampled population (or survey population) was composed of both the identity population and the Aboriginal ancestry-only population, which together form the "total Aboriginal population".

Instrument design

Although the 2012 APS was designed to be thematic, it is based on previous cycles of the APS which were developed in collaboration with the national Aboriginal organizations. Following the release of data from the 2006 APS, a content review was conducted to ensure the future relevance of existing APS questions to key stakeholders and to identify any potential data gaps. The review brought together expertise from a diverse group of researchers and subject matter experts, both from within and outside of Statistics Canada. New survey questions were developed and added to the 2012 APS questionnaire in order to place greater emphasis on the themes of education and employment, in particular.

Prior to 2012, the APS used a paper questionnaire format. The questions in the 2012 APS were designed for use in a Computer Assisted Interviewing (CAI) environment which incorporates many features that serve to maximize the efficiency and quality of data collection. CAI allows for more complex questionnaire flows as well as on-line edits which identify any logical inconsistencies so that interviewers can correct these with the assistance of respondents at the time of the interview. Two computer assisted interview questionnaires were developed for this survey: a Computer Assisted Telephone Interview (CATI) and a Computer Assisted Personal Interview (CAPI).

Qualitative testing of the survey questionnaire was carried out by Statistics Canada's Questionnaire Design Resource Centre (QDRC) with the help of First Nations people, Métis and Inuit across Canada. Adjustments were made to question wording and flows based on those results. Question wording adheres as closely as possible to questions established by the Harmonized Content Committee at Statistics Canada. This increases opportunities to compare responses between Statistics Canada surveys.


This is a sample survey with a cross-sectional design.


The APS is a sample survey with a cross-sectional design. The APS sample was selected from NHS respondents who reported an Aboriginal identity or ancestry (see target population). These NHS respondents make up the APS frame. The sampling unit is the individual.


An important part of stratification uses the survey's domains of estimation, which are groups of units for which estimates are targeted. These domains of estimation correspond to geographical regions for which estimates with an "acceptable" level of precision are targeted for a particular Aboriginal group (i.e., First Nations, Métis or Inuit) and a particular education group. An example of a domain of estimation would be Métis in Alberta attending elementary school. For each domain of estimation, the goal is to estimate a characteristic held by a given minimum proportion of the population with a specified degree of precision.

Stratification will produce more precise estimates if units are homogeneous within strata and heterogeneous between strata. Having reported Aboriginal identity or Aboriginal ancestry only on the NHS is a very important stratification factor. It is also desirable for the weights of the NHS respondents (NHS weights) to vary as little as possible within an APS stratum. To this end, part of the APS stratification comes from the NHS stratification. Consequently, the stratification variables that produce the greatest variation possible in the NHS weights between strata were used. In remote areas and on reserves, the N2 form of the NHS was distributed to all households (N2 regions). In other parts of Canada, the N1 form of the NHS was distributed to about one in three households (N1 regions). In addition, in N1 regions, a non-response follow-up (NRFU) operation was carried out for a subsample of about one-third of the households that had not yet responded on July 14, 2011; roughly 40% of the households in the subsample responded (NRFU respondents). Thus, the NRFU respondents have NHS weights that are on average 7.5 times higher than those of pre-July 14 respondents (initial respondents). A more detailed description of the NHS design is available in Chapter 3 of the "National Household Survey User Guide" (Catalogue no. 99-001-x2011001). Hence, region type (N1 or N2) and respondent type (initial respondent or NRFU respondent) were used as additional stratification variables.
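The "7.5 times higher" figure can be reproduced from the two numbers given in the text: the one-in-three NRFU subsampling rate and the roughly 40% response rate within that subsample. A minimal sketch of the arithmetic, with illustrative variable names:

```python
# Illustrative arithmetic behind the "7.5 times higher" figure.
# One-third of non-responding households were subsampled for NRFU,
# so each NRFU respondent carries a subsampling factor of 3 ...
nrfu_subsample_factor = 3.0
# ... and roughly 40% of the NRFU subsample responded.
nrfu_response_rate = 0.4

# Weight of an NRFU respondent relative to an initial
# (pre-July 14) respondent:
relative_factor = nrfu_subsample_factor / nrfu_response_rate  # 3 / 0.4 = 7.5
```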

The APS design can be considered a three-phase design in which the first two phases correspond to the selection of the NHS sample and the third phase corresponds to the selection of the APS sample.


A method for optimal allocation between the substrata of a particular domain was used, taking into account different types of sample size loss, such as expected non-response and the probability of each unit belonging to the target population.
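The sample-size losses mentioned above can be illustrated with a simple inflation calculation. This is not the actual APS allocation algorithm; the function name and the rates below are assumptions for illustration only.

```python
def inflate_allocation(target_in_scope_respondents, expected_response_rate,
                       prob_in_target_population):
    """Number of units to select so that, in expectation, the substratum
    yields the targeted count of responding units that actually belong
    to the target population."""
    return target_in_scope_respondents / (expected_response_rate
                                          * prob_in_target_population)

# e.g. 300 in-scope respondents wanted, 75% expected response,
# 80% chance a sampled unit is in the target population:
n_select = inflate_allocation(300, 0.75, 0.80)  # about 500 units to select
```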


More than 50,000 individuals with Aboriginal identity or Aboriginal ancestry in the NHS were sampled.

Data sources

Data collection for this reference period: 2012-02-06 to 2012-07-31

Responding to this survey is voluntary.

Data are collected directly from survey respondents and derived from other Statistics Canada surveys.

Questions were administered in Computer Assisted Telephone Interviews (CATI) and Computer Assisted Personal Interviews (CAPI). In most regions, CATI was used for individuals for whom there was a telephone number on the sample file. CAPI was used for individuals who did not have a telephone number or who could not be contacted by telephone even when a number was available. In the territories, the northern parts of many provinces and some Inuit communities, there were often very few telephone numbers available. In these cases, personal interviews were conducted.

Respondents were interviewed in the official language of their choice. For Inuit regions, the questionnaire was translated as a paper copy into Inuktitut (Baffin dialect) and an Inuktitut audio recording of the questionnaire was made to assist interviewers with potential language barriers in the field.

The time required to complete the survey varied from person to person; on average, however, it took about 40 minutes.

Proxy reporting was used for most children aged 6 to 14 years, for nearly half of youth aged 15 to 17 years, and for adults in certain specific situations (for example, when the selected adult was not able to answer for health-related reasons, due to a language barrier, or because the selected respondent was going to be away from home for the duration of the survey).

More than 50,000 individuals were selected to participate in the 2012 APS. Of those, approximately 38,150 completed the APS questionnaire, for a response rate of 76%. Excluding approximately 9,740 non-Aboriginal respondents, the total number of Aboriginal respondents included in the 2012 APS database is about 28,410.

The 2012 APS sample was drawn from respondents who reported either Aboriginal identity or Aboriginal ancestry in the 2011 NHS. APS respondents were told that Statistics Canada planned to combine their APS and NHS responses. Accordingly, the final edited Aboriginal Peoples Survey master microdata file was linked with the 2011 National Household Survey Dissemination Database. In the end, more than 100 NHS variables were added to the final APS file for 2012.

The specific benefits of an APS-NHS record linkage are reduced response burden for the target population of the APS, the derivation of survey weights which are crucial to providing valid estimates, and the creation of a comprehensive microdata file which can be used by data analysts to extend their learning and to inform policy and program development for Aboriginal peoples in Canada.

All products containing linked data are disseminated in accordance with Statistics Canada's policies, guidelines and standards. Only aggregate statistical estimates that conform to the confidentiality provisions of the Statistics Act are released.

View the Questionnaire(s) and reporting guide(s).

Error detection

Responses to the 2012 Aboriginal Peoples Survey (APS) were captured directly by the interviewer at the time of the interview using a computerized questionnaire. In many cases when a particular response appeared to be inconsistent with previous answers or outside of expected values, the interviewer was prompted, through message screens on the computer, to confirm answers with the respondent, and, if needed, to modify the information directly at the time of interview. This editing, however, was conducted only with errors that were fairly simple and straightforward to detect and fix. These edits were applied at the micro level.

Data were then subjected to further editing processes once they arrived in head office, in order to correct errors that required more complex edit rules. Customized edits consisted of validity checks within and across variables to identify gaps, inconsistencies, and other problems in the data and corrections were performed based on logical edit rules. Editing at this stage was also applied at the micro level, using SAS (Statistical Analysis System).
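The APS edits themselves were implemented in SAS and are not published here. As a language-neutral illustration of the pattern described above (a validity check plus a cross-variable consistency check), here is a hypothetical micro-level edit rule in Python; the field names are invented for the example.

```python
def edit_check(record):
    """Return a list of edit-rule failures for one respondent record."""
    failures = []
    # Validity check: respondents are in scope only at age 6 and over.
    if record.get("age") is None or record["age"] < 6:
        failures.append("age missing or out of range")
    # Consistency check across variables: reported school attendance
    # should be accompanied by a valid years-of-schooling value.
    if (record.get("attends_school") == "yes"
            and record.get("years_of_schooling") is None):
        failures.append("attends school but years of schooling missing")
    return failures

# A consistent record produces no failures:
clean = edit_check({"age": 12, "attends_school": "yes",
                    "years_of_schooling": 6})
```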

For more information on data processing in the 2012 APS, please refer to the "Aboriginal Peoples Survey, 2012: Concepts and Methods Guide", Catalogue number 89-653-XWE2013002.


In specific situations, deterministic imputation was used to correct data which were missing or invalid in the 2012 APS. Usually, data were imputed based on the respondent's answers to other related questions on the APS. However, if that method of imputation was not possible, corresponding data were imputed instead from the respondent's answers as provided in the NHS. All imputations were done at the micro level, using SAS.

A series of important imputations was conducted in relation to the Aboriginal identity questions in the 2012 APS. Persons with missing data for questions ID_Q02 on Aboriginal identity group, ID_Q03 on registered Indian status, or ID_Q05 on membership in a First Nation or Indian band had values imputed based on, where possible, other Aboriginal identity questions on the APS, or on their responses to the NHS. For instance, those who self-reported as Aboriginal in APS question ID_Q01 but who did not report any specific Aboriginal group in ID_Q02 were imputed to First Nations (North American Indian) if they had a positive response to any later question indicating that they were (1) Status Indian, (2) registered as a Status Indian under Bill C-31 or Bill C-3, and/or (3) a member of a First Nation or Indian band. However, if the respondent had not reported any of those responses in the APS, then data for ID_Q02 were imputed for them based on their answers in the NHS.

In addition, all in-scope Aboriginal respondents who did not report being Aboriginal in APS question ID_Q01 but indicated to later questions that they were (1) Status Indian, and/or (2) registered as a Status Indian under Bill C-31 or Bill C-3, and/or (3) a member of a First Nation or Indian band were imputed as being Aboriginal in ID_Q01 and imputed as being First Nations in ID_Q02.
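The deterministic rules in the two paragraphs above can be sketched as a small decision function. The question identifiers follow the text (ID_Q01, ID_Q02); the record layout and the NHS fallback field are illustrative assumptions, not the actual processing code.

```python
FIRST_NATIONS = "First Nations (North American Indian)"

def impute_id_q02(aps_record, nhs_record):
    """Deterministically fill a missing ID_Q02 (Aboriginal group) value."""
    if aps_record.get("ID_Q02") is not None:
        return aps_record["ID_Q02"]        # nothing to impute
    # A positive response to any later status/membership question
    # implies First Nations (North American Indian).
    if any(aps_record.get(q) == "yes"
           for q in ("status_indian", "registered_c31_c3", "band_member")):
        return FIRST_NATIONS
    # Otherwise fall back on the answer given in the NHS.
    return nhs_record.get("aboriginal_group")
```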


The initial weight of a unit in a given APS stratum corresponds to the product of two components: the inverse of the stratum sampling fraction and the NHS weight corrected for non-response for the unit in question. The stratum sampling fraction is calculated as the number of people selected for the APS in each stratum divided by the total number of available NHS respondents for that stratum. The weights were then adjusted for non-response.
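The product described above can be sketched as follows; the stratum counts and NHS weight are illustrative numbers, not actual APS figures.

```python
def initial_aps_weight(n_selected_in_stratum, n_nhs_respondents_in_stratum,
                       nhs_weight):
    """Initial APS weight: the inverse of the stratum sampling fraction
    times the unit's non-response-corrected NHS weight."""
    sampling_fraction = n_selected_in_stratum / n_nhs_respondents_in_stratum
    return nhs_weight / sampling_fraction

# 50 of 200 available NHS respondents selected, NHS weight 3.0:
w0 = initial_aps_weight(50, 200, 3.0)  # 3.0 / 0.25 = 12.0
```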

Two adjustments were made for two types of non-response: non-contact and non-response with contact (mainly refusals). Separate adjustments were also made for NHS respondents under the age of 15 (children) and NHS respondents aged 15 and over (adults). For children, the characteristics of the adult member of the household most likely to respond for the child were used. In the case of children, the response is provided by the parent or guardian, not by the child. First, a logistic regression model was constructed for each adjustment to predict the probabilities of being contacted or of responding when contacted on the basis of NHS variables and collection variables known as "paradata" (number of attempts, for example). Second, respondents and non-respondents with similar predicted response probabilities were assigned to adjustment classes using cluster analysis. Third, the inverse of the weighted response rate in a class was used as the adjustment factor for that class, and the weights of the respondent units within the class were adjusted accordingly.
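The third step above reduces to scaling respondent weights by the inverse of the weighted response rate within each adjustment class. A minimal sketch, taking class membership as given (the logistic-regression and clustering steps are omitted):

```python
def adjust_for_nonresponse(weights, responded):
    """weights: initial weights of all units in one adjustment class;
    responded: parallel booleans. Returns the adjusted respondent weights."""
    total_mass = sum(weights)
    responding_mass = sum(w for w, r in zip(weights, responded) if r)
    factor = total_mass / responding_mass  # inverse of weighted response rate
    return [w * factor for w, r in zip(weights, responded) if r]

# Responding mass 6.0 out of 8.0 -> factor 8/6;
# the adjusted respondent weights sum back to 8.0.
adjusted = adjust_for_nonresponse([2.0, 2.0, 4.0], [True, False, True])
```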

Next, two post-stratification adjustments were made. The first post-stratification ensures that the sample does not under-represent or over-represent certain combinations of Aboriginal groups, regions and age groups of the NHS. The second post-stratification ensures that the Aboriginal identity population estimated from the APS screening questions corresponds to the population defined from the NHS screening questions within each post-stratum defined by the cross-tabulation of region, Aboriginal identity group and age group.
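Either post-stratification step amounts to a ratio adjustment within each post-stratum: weights are scaled so that the weighted count matches the NHS control total. A hedged sketch with invented labels and totals:

```python
def post_stratify(weights, strata, control_totals):
    """weights: unit weights; strata: parallel post-stratum labels;
    control_totals: {stratum label: NHS control count}."""
    stratum_sums = {}
    for w, s in zip(weights, strata):
        stratum_sums[s] = stratum_sums.get(s, 0.0) + w
    return [w * control_totals[s] / stratum_sums[s]
            for w, s in zip(weights, strata)]

# Stratum "A" is scaled from a weighted sum of 5.0 up to its control
# total of 10.0; stratum "B" already matches its total of 5.0.
new_weights = post_stratify([2.0, 3.0, 5.0], ["A", "A", "B"],
                            {"A": 10.0, "B": 5.0})
```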

Lastly, the Sigma-gap method was used to detect and reduce excessively large weights within each post-stratum. After the weights were sorted in descending order, excessively large weights were reduced to the value of the first non-outlier weight. The mass of the reduced weights was then redistributed proportionally within the post-strata.
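The trimming-and-redistribution step can be sketched for a single post-stratum as follows. Detecting which weights are outliers (the sigma-gap detection itself) is taken as given via a cutoff argument, and the numbers are illustrative.

```python
def trim_and_redistribute(weights, cutoff):
    """Cut weights above `cutoff` down to it, then spread the trimmed
    mass proportionally over all units so total weight is preserved."""
    trimmed = [min(w, cutoff) for w in weights]
    excess = sum(weights) - sum(trimmed)
    factor = 1.0 + excess / sum(trimmed)
    return [w * factor for w in trimmed]

# One outlier weight of 100.0 is reduced to the largest non-outlier
# value (4.0); total mass stays at 110.0 after redistribution.
trimmed_weights = trim_and_redistribute([100.0, 4.0, 3.0, 3.0], cutoff=4.0)
```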

The bootstrap method was used to calculate the variance. No existing method was appropriate for the 2012 APS sampling design. For the sole purpose of calculating the variance, the NHS design in N1 regions was treated as a three-phase design. First, a sample of one-third of the households was selected in the N1 regions. The subsampling of non-respondents for non-response follow-up (NRFU) in the N1 regions formed the second phase of sampling. The NRFU respondents were then treated as a third phase of sampling. To simplify the problem, the various phases of the NHS were combined into a single phase. The APS sampling was treated as a second phase, and then the general bootstrap method for two-phase sampling developed for the 2006 APS was used (see Langlet, É., Beaumont, J.-F., and Lavallée, P. 2008. "Bootstrap Methods for Two-Phase Sampling Applicable to Postcensal Surveys". Paper submitted to Statistics Canada's Advisory Committee on Statistical Methods, May 2008, Ottawa).
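Once bootstrap replicate weights exist, the variance estimate itself is straightforward: the point estimate is recomputed under each set of replicate weights, and the variance is the average squared deviation from the full-sample estimate. Generating valid replicate weights for the combined NHS/APS phases is the hard part and is taken as given in this sketch.

```python
def bootstrap_variance(values, full_weights, replicate_weights):
    """Bootstrap variance of a weighted mean from replicate weights."""
    def wmean(ws):
        return sum(v * w for v, w in zip(values, ws)) / sum(ws)
    theta = wmean(full_weights)
    replicates = [wmean(ws) for ws in replicate_weights]
    return sum((t - theta) ** 2 for t in replicates) / len(replicates)

# Two toy replicates around a full-sample weighted mean of 20.0:
v = bootstrap_variance([10.0, 20.0, 30.0], [1, 1, 1],
                       [[2, 1, 1], [1, 1, 2]])
```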

Quality evaluation


Between the 2006 APS and the 2012 APS, there were a number of major changes relating not only to survey content, but also to methodology. Because of these changes, comparing population estimates between the two surveys is not recommended. However, proportions can be compared between the two survey cycles (for example, the proportion of high school graduates in a specific age group, or the proportion of high school leavers).

The most significant methodological difference between the 2012 APS and the 2006 APS is that in 2012, the Aboriginal ancestry-only population was no longer part of the survey's target population, as it was in 2006. As a result, two different post-stratification strategies were used in 2006 and 2012, making it inadvisable to compare the Aboriginal identity population estimates between the two survey cycles.

Another important difference in methodology is the fact that the 2006 APS sample was selected from respondents to the 2006 Census, while the 2012 APS sample was selected from respondents to the 2011 NHS. The characteristics of respondents to the NHS may differ from those of respondents to a census. The fact that non-respondents have different characteristics than respondents creates what is called non-response bias. Although the NHS used follow-up strategies and non-response adjustments at weighting to reduce this bias, it is possible that some non-response bias remains.

One difference in content is that the question on Aboriginal self-reporting was divided in two in the 2012 APS questionnaire. Also, this question was not preceded by the three questions on Aboriginal ancestry (three questions in one), as it was in 2006. This may have resulted in differences in terms of how individuals responded to the Aboriginal self-reporting question.


In general, the Aboriginal identity population counts on the 2012 APS for certain subpopulations may differ from those obtained from the NHS, even if the population universe for the NHS is restricted to that of the APS.

The second APS post-stratification ensured that the number of individuals with Aboriginal identity was the same in the NHS and the APS, for certain combinations of Aboriginal group, region and age group. However, the Aboriginal identity population counts may differ for other subpopulations which were not controlled for during post-stratification. Moreover, for a given individual, the Aboriginal identity reported may differ in some cases between the NHS and the APS.

There are a number of reasons why Aboriginal identity may differ between the two surveys: 1) different interview methods and the impact of proxy reporting (information for the same respondent was not necessarily given by the same person in the two surveys); 2) different questionnaires; 3) different contexts; 4) effect of time; 5) different data processing procedures. For more information, please see Chapter 8 of "Aboriginal Peoples Survey, 2012: Concepts and Methods Guide" (Catalogue no. 89-653-XWE2013002).

Disclosure control

Statistics Canada is prohibited by law from releasing any information it collects which could identify any person, business, or organization, unless consent has been given by the respondent or as permitted by the Statistics Act. Various confidentiality rules are applied to all data that are released or published to prevent the publication or disclosure of any information deemed confidential. If necessary, data are suppressed to prevent direct or residual disclosure of identifiable data.

Data based on a count of fewer than 10 respondents are suppressed to ensure confidentiality of respondents. To further reduce risk of disclosures, all estimates are rounded to the nearest 10 units.
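The two rules above combine into a simple release filter; a minimal sketch (the helper name is invented):

```python
def release_estimate(estimate, n_respondents):
    """Suppress estimates based on fewer than 10 respondents;
    otherwise round the estimate to the nearest 10 units."""
    if n_respondents < 10:
        return None                       # suppressed for confidentiality
    return int(round(estimate / 10.0)) * 10

released = release_estimate(1234.0, 50)   # 1230
withheld = release_estimate(87.0, 9)      # None (too few respondents)
```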

Revisions and seasonal adjustment

This methodology type does not apply to this survey.

Data accuracy

Two types of errors occur in surveys: sampling errors and non-sampling errors.


The sampling error measure used for the Aboriginal Peoples Survey (APS) is the coefficient of variation (CV) of the estimate, which is the standard error of the estimate divided by the estimate itself. In this survey, when the CV of an estimate is less than or equal to 16.6%, the estimate can be used without restriction. When the CV is greater than 16.6% but less than or equal to 33.3%, the estimate will be accompanied by the letter "E" to indicate that the data should be used with caution. When the CV of an estimate is greater than 33.3%, the cell estimate will be replaced by the letter "F" to indicate that the estimate was suppressed for reliability reasons.
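The release rules above can be expressed as a small lookup; the thresholds are the ones stated in the text (16.6% and 33.3%).

```python
def cv_flag(cv_percent):
    """Release flag for an estimate with the given CV (in percent)."""
    if cv_percent <= 16.6:
        return ""     # releasable without restriction
    if cv_percent <= 33.3:
        return "E"    # use with caution
    return "F"        # suppressed for reliability reasons

flags = [cv_flag(cv) for cv in (10.0, 25.0, 40.0)]  # ["", "E", "F"]
```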


Non-sampling errors arise primarily from the following sources: non-response, coverage, measurement and processing.

Non-response will produce a bias if non-respondents have different characteristics from respondents and if non-response is not corrected for properly. Generally, the extent of item non-response was relatively small in the APS. Extensive qualitative reviews and testing of the questionnaire before the survey helped reduce item non-response. Furthermore, non-response weighting adjustments, combined with a relatively high response rate (76%), helped to substantially reduce the risk of bias.

Because the APS sample was selected from those who had participated in the NHS, individuals who did not participate in the NHS could not be sampled for the APS (the NHS had an unweighted response rate of 68.6%). As such, non-response bias in the NHS could translate to coverage bias in the APS (although technically, this could also be considered as a non-response bias for APS). Statistics Canada conducted several studies, before and after NHS data collection, to assess the risk and extent of the potential non-response bias in the NHS. A number of measures were taken to mitigate its effects. Namely, particular non-response follow-up procedures were used to reduce the potential bias for populations at risk such as the Aboriginal population. Particular weighting strategies were also used to reduce this bias. For a full discussion of data quality for the NHS, please refer to the National Household Survey User Guide (Catalogue no. 99-001-x2011001).

Measurement errors occur when the response provided differs from the real value. Such errors may be attributable to the respondent, the interviewer, the questionnaire or the collection method, for example. For the 2012 APS, every effort was made to develop questions that would be understandable, relevant and appropriate for respondents. Other measures were also taken, including the use of skilled interviewers, extensive training of interviewers, and observation and monitoring of interviewers.

Processing errors may occur at various stages, including data capture, coding and editing. Quality control procedures were applied at every stage of data processing to reduce this type of error. Compared with the 2006 APS, processing errors in 2012 were reduced substantially, as the pencil-and-paper collection method used in 2006 was replaced by computer-assisted interviewing.

