Programme for the International Assessment of Adult Competencies (PIAAC)

Detailed information for 2022

Status: Active

Frequency: Occasional

Record number: 4406

The Programme for the International Assessment of Adult Competencies (PIAAC) is a multi-cycle international program and an initiative of the Organisation for Economic Co-operation and Development (OECD). It aims to collect information on the competencies of adults from several countries, including Canada.

Data release - December 9, 2024

Description

The Programme for the International Assessment of Adult Competencies (PIAAC) assesses the proficiency of adults aged 16 to 65 in information-processing skills essential for full participation in the economy and society, namely literacy, numeracy and adaptive problem solving. It has been designed to gain insight into the abilities of adults, such as reading, finding information, and using computers and technology, and to learn about their education and work experience. Results from the study will be used to plan programs and to compare Canada with other countries that are conducting the same study.

In Canada, PIAAC was conducted by Statistics Canada in partnership with Employment and Social Development Canada (ESDC) and the Council of Ministers of Education, Canada (CMEC). The last survey took place in 2012.

This cycle of PIAAC evolved from Cycle 1, which was held in 2012, as well as from two previous international literacy surveys: the International Adult Literacy Survey, conducted between 1994 and 1998, and the Adult Literacy and Life Skills Survey, conducted between 2003 and 2008. PIAAC seeks to ensure continuity with these previous surveys, to provide information on how the distribution of skills has changed over the years, and to expand the range of skills measured in the context of today's society. Users of the data include federal and provincial governments, international partners, academics, literacy and skills development professionals, the media and interested members of the public. The data are used to inform policy decisions, help allocate resources effectively where needed, and inform decisions on the content of skills development courses and adult education programs.

Reference period: Calendar year

Subjects

  • Adult education and training
  • Education, training and learning
  • Job training and educational attainment
  • Labour
  • Literacy
  • Wages, salaries and other earnings

Data sources and methodology

Target population

The target population consists of all Canadian residents aged 16 to 65 living in the ten provinces. Persons living on reserves and in other Indigenous settlements, full-time members of the Canadian Forces living on military bases and the institutionalized population are excluded from the survey's coverage. Residents of sparsely populated regions were also excluded from the population covered by the survey. All of these exclusions combined represent 3% of the total population of Canadian adults aged 16 to 65.

Instrument design

The survey questionnaire and psychometric items were designed by a group of international experts led by the PIAAC international consortium. Each task item was originally designed in English and constitutes the master international item that was to be adapted by each participating country into its own language or languages (English and French in Canada). The content of the questionnaire and the exercises from Cycle 1, held in 2012, were taken into account in the Cycle 2 instrument design to ensure comparability of the data collected. All of the collection instruments were tested thoroughly. The final psychometric instrument was created using the items that provided the most reliable and stable parameters in all three domains (literacy, numeracy and adaptive problem solving). In addition, many of the background questions and a selection of literacy and numeracy tasks used in PIAAC trace their origins to the 2003 Adult Literacy and Life Skills Survey or the 1994 International Adult Literacy Survey. This was done to provide a psychometric link that allows comparisons of skill distributions over time.

The survey instrument included an Entry component, followed by the background questionnaire, and an exercise on a tablet. The survey was administered in the respondent's home by a Statistics Canada interviewer.

The Entry component was designed to gather demographic information for each member of the household. Once this information was collected, a respondent was selected from the eligible members of the household.

The background questionnaire was administered to all respondents by computer-assisted personal interview (CAPI). It aimed to collect information such as age, gender, immigrant status, formal and informal education and training, linguistic information, parental education and occupation, current work status and history, current occupation, industry and earnings, personal traits and attributes (social and emotional skills), and literacy, numeracy and technology skills used at work and at home.

The skills assessment component was completed by the respondent using a tablet and assessed literacy, numeracy and adaptive problem solving. The exercise also included the administration of reading and numeracy components.

Sampling

This is a sample survey with a cross-sectional design.

Frame
The response database of the 2021 Census of Population long-form questionnaire was used as the sampling frame to construct the PIAAC sample.

Sample Design
Sample selection occurred in up to three stages.

In the first stage, geographical clusters were selected. These clusters were previously stratified into urban and rural strata. Subsequently, households were selected within each selected cluster. Then within each selected household, one individual was chosen to participate in this survey.

Sampling unit
In rural or small urban areas, small contiguous geographical areas, called clusters, were the sampling units at the first stage. The sampling unit at the second stage was the dwelling, and at the third stage, the sampling unit was the person. In major urban areas, a two-stage design was used, where dwellings were the sampling units at the first stage and persons from the household roster were the sampling units at the second stage.

Stratification method
In all provinces except Alberta, the frame was stratified into urban and rural or small urban areas. In Alberta, strata were formed based on urban-rural classification and boundaries of the economic regions in the province.

Sampling
The selection of clusters and households was done by systematic probability-proportional-to-size (PPS) sampling. Within a household, one individual was selected at random among the eligible members.
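As an illustration of the selection method described above, the sketch below implements generic systematic probability-proportional-to-size sampling with a random start, followed by the random selection of one person within a household. The cluster sizes, sample size and household roster are invented for illustration only and do not reflect the actual PIAAC sample or Statistics Canada's production systems.

```python
import random

def systematic_pps_sample(sizes, n):
    """Select n units with probability proportional to size,
    using systematic sampling with a random start.

    sizes: list of size measures (e.g., number of dwellings per cluster).
    Returns the indices of the selected units.
    """
    total = sum(sizes)
    step = total / n                      # sampling interval on the cumulative size scale
    start = random.uniform(0, step)       # random start within the first interval
    targets = [start + k * step for k in range(n)]

    selected, cum = [], 0.0
    it = iter(targets)
    target = next(it)
    for i, size in enumerate(sizes):
        cum += size
        while target is not None and target <= cum:
            selected.append(i)
            target = next(it, None)
    return selected

# Hypothetical example: 10 clusters with varying dwelling counts; select 3 clusters.
cluster_sizes = [120, 45, 300, 80, 60, 150, 95, 210, 40, 75]
print(systematic_pps_sample(cluster_sizes, 3))

# Within a responding household, one eligible person is then chosen at random.
eligible_members = ["person_1", "person_2", "person_3"]
print(random.choice(eligible_members))
```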

Data sources

Data collection for this reference period: 2022-09-02 to 2023-07-31

Responding to this survey is voluntary.

Data are collected directly from survey respondents.

In Canada, the name "International Study of Adults (ISA)" was used during collection.

The interview is composed of four main components:
- The Entry component
- The background questionnaire component
- The skills assessment component
- The post-interview questions

The Entry component and the background questionnaire were administered by an interviewer in the respondent's home using a computer-assisted interview (CAI). The skills assessment component was self-administered. The post-interview questions were a series of questions answered by the interviewer about the interview.

The Entry component could be answered by any member of the household. First, the address was confirmed to ensure that the household was indeed the one selected. A list of household members was collected, and all eligible members were identified. Then, one of the eligible household members was randomly selected to complete the rest of the interview. Proxy interviews were not accepted.

Doorstep interview: To minimize literacy-related non-response, a doorstep interview was introduced. If the selected respondent could speak neither English nor French, and there was no interpreter available to translate the questions and answers in the background questionnaire, the interviewer used a short questionnaire in a third language. This questionnaire included six questions and was available in 38 languages.

View the Questionnaire(s) and reporting guide(s).

Error detection

PIAAC data were collected using a computer-assisted survey application. As such, much of the error detection and editing took place during collection. Values outside specified ranges were flagged by the electronic questionnaire and validated by the interviewer, and the application automatically directed the flow of the questionnaire based on pre-arranged logic and the respondent's previous answers.

Once the data were collected and transmitted to head office, three phases of error detection were initiated. The first was a general clean-up of the data to accomplish the following goals: 1) remove duplicate records from the file, 2) verify the background questionnaire against the sample file, 3) verify the integrity of the status code, 4) identify missing records, 5) verify that the components are complete and correspond to the status code, and 6) create a response file.

The editing phase of the data processing done by the international consortium included a series of edit steps. First, a top-down flow edit cleaned up any paths that may have been mistakenly followed during the interview. This step was followed by consistency edits for certain key variables, which ensured consistency between variables such as the respondent's age, age at immigration, number of years of formal education, and age when the respondent completed their highest level of education. In addition, the data were validated to identify outlying values.
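As an illustration of the kind of consistency edit described above, the following sketch flags records where key variables contradict one another. The variable names, thresholds and example record are hypothetical and are not the consortium's actual edit rules.

```python
def consistency_flags(record):
    """Return a list of consistency-edit flags for one hypothetical respondent record.

    Expected keys (hypothetical names): age, age_at_immigration,
    years_of_schooling, age_completed_highest_education.
    """
    flags = []
    if record["age_at_immigration"] is not None and record["age_at_immigration"] > record["age"]:
        flags.append("age at immigration exceeds current age")
    if record["age_completed_highest_education"] is not None and \
            record["age_completed_highest_education"] > record["age"]:
        flags.append("age at highest level of education exceeds current age")
    # Years of schooling cannot plausibly exceed the years lived since early childhood.
    if record["years_of_schooling"] > record["age"] - 3:
        flags.append("years of schooling inconsistent with age")
    return flags

# Hypothetical record that fails two edits.
print(consistency_flags({
    "age": 25,
    "age_at_immigration": 30,
    "years_of_schooling": 24,
    "age_completed_highest_education": 22,
}))
```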

Imputation

Imputation was minimal. For weighting purposes, age was imputed from a donor record for four records. A household approximation method was used to impute education level and immigration status for 6.5% of records and household composition for 2.5% of records.
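The household approximation method itself is not described here, but donor imputation in general can be sketched as follows: a record with a missing value borrows the value from the most similar responding record (a nearest-neighbour hot deck). The variables, records and matching rule below are invented for illustration and do not represent the method actually applied to the PIAAC data.

```python
def hot_deck_impute(recipient, donors, match_vars, target_var):
    """Impute recipient[target_var] from the closest donor record.

    Generic nearest-neighbour hot-deck sketch: the donor that agrees with the
    recipient on the most matching variables supplies the missing value.
    """
    def score(donor):
        return sum(donor[v] == recipient[v] for v in match_vars)

    best = max(donors, key=score)
    recipient[target_var] = best[target_var]
    return recipient

# Hypothetical example: impute age group from donors with similar characteristics.
donors = [
    {"province": "ON", "sex": "F", "urban": True,  "age_group": "25-34"},
    {"province": "ON", "sex": "M", "urban": False, "age_group": "45-54"},
]
recipient = {"province": "ON", "sex": "F", "urban": True, "age_group": None}
print(hot_deck_impute(recipient, donors, ["province", "sex", "urban"], "age_group"))
```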

Estimation

Estimates are produced using weights attached to each sampled unit. The weight of a sampled unit indicates the number of units in the population that the unit represents. The weights were calculated in several steps:

1) An initial weight was calculated as the inverse of the probability of selecting a unit in the sample. The overall probability of selecting a given unit was equal to the product of its probabilities of being selected at each phase and at each stage of the selection process.

2) If additional dwellings were identified during collection at a selected household, then all dwellings were contacted. These multiple units were given the same weights as the original dwelling.

3) Subsamples for the Indigenous and youth populations were initially selected. However, due to challenges during collection, the main sample was prioritized and collection for the additional samples was halted. At that time, 424 cases from the subsamples had completed the background questionnaire. A method similar to sample matching was used to match respondents from the subsamples to non-respondents in the general sample. For estimation purposes, the weights were calculated based on the selection probabilities of the units in the general sample.

4) The weights were adjusted to account for non-response at the household and person levels. This process consisted of distributing the weights of the non-responding units to the weights of the responding units. It was conducted in four steps that took into account the information available about the eligibility status of the non-responding households, whether non-response was related to literacy, and the presence of a disability preventing participation in the survey.

5) Finally, the weights were post-stratified based on population counts of rural and urban areas by province and of Indigenous status by province, and then calibrated to province, age group, sex, immigration status and highest level of education so that the estimated totals for these variables would match population control totals (a simplified sketch of the main weighting steps is given below).
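As a rough illustration of steps 1, 4 and 5, the following sketch computes a design weight as the inverse of the product of the stage selection probabilities, redistributes the weight of non-respondents to respondents within a single response class, and post-stratifies so that weighted counts match control totals. The probabilities, response classes and control totals are invented for illustration; the actual PIAAC adjustments involved several response classes and calibration variables.

```python
# Step 1: initial weight = inverse of the overall selection probability,
# where the overall probability is the product of the stage probabilities
# (cluster, dwelling, person).
def initial_weight(p_cluster, p_dwelling, p_person):
    return 1.0 / (p_cluster * p_dwelling * p_person)

# Step 4: redistribute the weight of non-respondents to respondents
# (here within a single hypothetical response class).
def nonresponse_adjust(weights, responded):
    total = sum(weights)
    responding_total = sum(w for w, r in zip(weights, responded) if r)
    factor = total / responding_total
    return [w * factor if r else 0.0 for w, r in zip(weights, responded)]

# Step 5: post-stratify so weighted counts match a known control total
# within each post-stratum (e.g., province).
def post_stratify(weights, strata, control_totals):
    sums = {}
    for w, s in zip(weights, strata):
        sums[s] = sums.get(s, 0.0) + w
    return [w * control_totals[s] / sums[s] if w > 0 else 0.0
            for w, s in zip(weights, strata)]

# Hypothetical example with three sampled persons.
w = [initial_weight(0.01, 0.05, 0.5),      # = 4000
     initial_weight(0.01, 0.05, 1.0),      # = 2000
     initial_weight(0.02, 0.05, 0.5)]      # = 2000
w = nonresponse_adjust(w, responded=[True, False, True])
w = post_stratify(w, strata=["ON", "ON", "QC"], control_totals={"ON": 6000, "QC": 1500})
print(w)
```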

The quality of the estimates was assessed using estimates of their coefficient of variation (CV). Given the complexity of the PIAAC design, CVs could not be calculated using a simple formula. Balanced repeated replicate (BRR) weights were used to establish the CVs of the estimates.
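As a minimal sketch of how replicate weights can be used, the code below computes the CV of a weighted mean from a full-sample weight and a set of replicate weights, using the basic BRR variance formula. The data, number of replicates and weights are invented; in practice, the PIAAC replicate weights and any Fay adjustment prescribed in the international technical documentation should be used, and estimates of proficiency also combine the ten plausible values.

```python
def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

def brr_cv(values, full_weights, replicate_weights):
    """Estimate the coefficient of variation (in %) of a weighted mean
    using balanced repeated replication (BRR).

    replicate_weights: list of R weight vectors, one per replicate.
    This is the basic BRR variance formula; Fay-adjusted BRR would divide
    by R * (1 - k)**2 instead of R.
    """
    theta = weighted_mean(values, full_weights)
    replicates = [weighted_mean(values, rw) for rw in replicate_weights]
    variance = sum((t - theta) ** 2 for t in replicates) / len(replicates)
    return (variance ** 0.5) / theta * 100

# Hypothetical toy data: 4 respondents, 2 replicate weight vectors.
values = [250.0, 280.0, 300.0, 270.0]          # e.g., proficiency scores
full_w = [1000.0, 1200.0, 900.0, 1100.0]
rep_w = [[2000.0, 0.0, 1800.0, 0.0],
         [0.0, 2400.0, 0.0, 2200.0]]
print(round(brr_cv(values, full_w, rep_w), 2))
```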

Quality evaluation

In order to obtain data of high quality, international guidelines and standards for the administration of surveys were followed, complemented by strict adherence to Statistics Canada's internal policies and procedures.

The interviews were conducted in homes in a neutral, non-pressured manner. Interviewer training and supervision were provided, emphasizing the importance of precautions against non-response bias. Interviewers were specifically instructed to return several times to non-responding households to obtain as many responses as possible. Their work was supervised using frequent quality checks, especially at the outset of data collection. About 5% of each interviewer's interviews were validated by a senior interviewer to ensure data quality. Key indicators from the PIAAC collection dashboard were reviewed frequently during collection, in conjunction with the validation of the interviews.

As a condition of participation in the international study, the data files had to be processed using methods and procedures that followed the consortium's guidelines and that ensured logical consistency and acceptable levels of data error.

Scoring of the psychometric items in the Computer-Based Assessment (CBA) was done automatically. Automatic scoring ensured higher quality and was subject to quality control during the adaptation of the survey instruments. In addition, it allowed information to be captured and readjusted after the psychometric analysis of each survey item.

Coding
Beyond the standard quality control practices performed by Statistics Canada's Operations and Integration Division for all coding, a number of additional quality control measures mandated by the international PIAAC consortium were carried out. This was to ensure that the coding was of acceptable quality and that it was done consistently within and across countries.

These procedures included verification by a second coder of 50% of the manually coded data for the international classifications of occupations and industries, and of 20% of the manually coded data for the national classifications. In addition, the distribution of the coded data was validated against other recent and similar data sources and found to be comparable.

Disclosure control

Statistics Canada is prohibited by law from releasing any information it collects which could identify any person, business, or organization, unless consent has been given by the respondent or as permitted by the Statistics Act. Various confidentiality rules are applied to all data that are released or published to prevent the publication or disclosure of any information deemed confidential. If necessary, data are suppressed to prevent direct or residual disclosure of identifiable data.

Revisions and seasonal adjustment

This methodology does not apply to this survey.

Data accuracy

While considerable efforts were made to ensure high standards throughout all stages of collection and processing, the resulting estimates are inevitably subject to a certain degree of error. These errors can be broken down into two major types: sampling and non-sampling.

RESPONSE RATE:
The response rate is 28% at the national level, and varies between 24% and 41% at the provincial level.

Coefficient of variation (CV):
The quality of the estimates is measured using estimates of their coefficient of variation (CV). Balanced repeated replicate (BRR) weights are used to calculate the CVs of the estimates, in combination with the ten plausible values for estimates of proficiency scores.

NON-SAMPLING ERROR
Over a large number of observations, randomly occurring non-sampling errors will have little effect on estimates derived from the survey. However, errors occurring systematically will contribute to biases in the survey estimates. Non-sampling errors in PIAAC can come from a variety of sources, and considerable time and effort were devoted to reducing them. Quality assurance measures were implemented at each step of the data collection and processing cycle to monitor the quality of the data. These measures included the use of skilled interviewers, extensive training of interviewers on the survey procedures and questionnaire, observation of interviewers to detect problems with questionnaire design or misunderstanding of instructions, validation of interviews, procedures to ensure that data capture errors were minimized, and coding and edit quality checks to verify the processing logic.

NON-RESPONSE BIAS
A source of non-sampling error in surveys is the effect of non-response on the survey results. The extent of non-response varies from partial non-response (failure to answer one or some questions) to total non-response. Total non-response occurred when the interviewer was unable to contact the respondent, no member of the household was able to provide the information, or the respondent refused to participate in the survey. The national non-response rate for PIAAC was around 72%. Non-response weighting adjustments were performed at the household and person levels. These adjustments were designed to reduce non-response bias as much as possible, using, among other things, variables linked to the probability of response, and benefited greatly from the use of the 2021 Census as the survey frame. This was complemented by calibrating the final estimates to known demographic totals. An extended analysis that relied on strong sources such as the 2021 Census, demographic counts, the Labour Force Survey and the Canadian Community Health Survey concluded that non-response bias is not expected to significantly affect the survey estimates. Partial non-response occurred, in most cases, when the respondent did not understand or misinterpreted a question, refused to answer a question, could not recall the requested information, or could not complete the skills assessment component. Generally, the extent of partial non-response was small in PIAAC.

COVERAGE ERROR
The use of the 2021 Census ensured that the PIAAC frame was as inclusive as possible and that any exclusions could be effectively taken into consideration in the overall survey design. Coverage of the survey's target population by the 2021 Census of Population was determined to be about 96% at the national level and between 94% and almost 100% at the provincial level.

OTHER NON-SAMPLING ERRORS
A number of other potential sources of non-sampling error that are unique to PIAAC deserve comment. First, some respondents may have found the test portion of the study intimidating, and this may have had a negative effect on their performance. Unlike "usual" surveys, the PIAAC test items have "right" and "wrong" answers, and for many respondents this would have been their first exposure to a "test" environment in a considerable number of years. Further, although interviewers did not enforce a time limit for answering questions, the reality of having someone watching and waiting may have imposed an unintentional time pressure. It is recognized, therefore, that even though items were chosen to closely reflect everyday tasks, the test responses might not fully reveal the literacy capabilities of respondents because of the testing environment. Finally, the test nature of the study called for respondents to perform the activities completely independently of others, and interviewers were trained to make sure this guideline was followed. It is therefore possible that the skills measured by the survey do not reflect the full range of some respondents' abilities in a more natural setting.
