Programme for the International Assessment of Adult Competencies (PIAAC)
Detailed information for 2012
Status:
Active
Frequency:
Occasional
Record number:
4406
Data release - October 8, 2013
Description
The Programme for the International Assessment of Adult Competencies (PIAAC) is a multi-cycle international program of assessment of adult skills and competencies initiated by the Organisation for Economic Co-operation and Development (OECD). It aims to collect information on the skills and competencies of adult residents of several participating countries, including Canada.
PIAAC evolved from two previous international literacy surveys: the International Adult Literacy Survey (IALS), conducted between 1994 and 1998, and the Adult Literacy and Life Skills Survey (ALL), conducted between 2002 and 2006. With the first round of data collection, PIAAC seeks to ensure continuity with these previous surveys, to provide information on changes in the distribution of skills over the years, to extend the skills being measured by including problem solving in technology-rich environments, and to provide more information about individuals with low levels of competency by assessing reading component skills.
Users of the data include federal and provincial governments, academics, literacy and skills development professionals, the media and interested members of the public. The data are used to inform policy decisions, to help allocate resources effectively where needed, and to inform decisions on the composition and content of remedial skill development courses and adult education.
Reference period: Calendar year
Subjects
- Adult education and training
- Education, training and learning
- Literacy
Data sources and methodology
Target population
The target population consists of Canadian adults aged 16 to 65 not residing in institutions or on Aboriginal reserves. It also excludes families of Armed Forces members living on military bases, as well as residents of some sparsely populated areas. Combined, these exclusions represent less than 2% of the population of Canadian adults aged 16 to 65, which satisfies the survey's international coverage requirements.
Instrument design
The survey questionnaire and psychometric items were designed by a group of international experts led by the PIAAC international consortium. Each task item was designed in English; together they constitute the master international set of items that each participating country adapted into its own language. All of the instruments were tested in a pilot survey conducted in 2010. The final psychometric instrument was created using the items providing the most reliable and stable parameters in all three domains (literacy, numeracy and problem solving in technology-rich environments). Many of the background questions and a selection of literacy and numeracy tasks asked in PIAAC trace their origins to the 2003 Adult Literacy and Life Skills Survey (ALL) and the 1994 International Adult Literacy Survey (IALS). This was done to provide a psychometric link that allows comparisons of skill distributions over time.
The survey instruments included an Entry component, followed by the Background Questionnaire (BQ), and ended with a competencies assessment, delivered either as a paper-based assessment (PBA) or a computer-based assessment (CBA). The survey was administered in the respondent's home by a Statistics Canada interviewer.
The Entry was designed to gather demographic information for each member of the household. Once this information was collected, a respondent was selected from the eligible members of the household.
The BQ was administered to all respondents by Computer Assisted Personal Interview (CAPI). It collected information on ethnicity, immigrant status, age and sex, formal and informal education and training, linguistic information, self-assessment of reading and writing in mother tongue, parental education and occupation, current work status and history, current occupation, industry and earnings, literacy, numeracy and technology skills used at work and at home.
For most respondents, the assessment component was completed as a CBA. However, the PBA version was provided to respondents who had never used a computer or who had failed a series of basic tasks used to assess their computer skills.
The computer-based assessment measured literacy, numeracy and problem solving in technology-rich environments. The paper-based assessment also measured literacy and numeracy, and included an additional reading component. The Reading Component included three short sections: word meaning (print vocabulary), sentence processing and basic passage comprehension.
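A minimal sketch of the routing between the two assessment modes; the predicate names are hypothetical, standing in for the series of basic computer tasks described above:

```python
def assessment_mode(ever_used_computer: bool, passed_basic_computer_tasks: bool) -> str:
    """Illustrative routing only; inputs are hypothetical stand-ins.

    Respondents who had never used a computer, or who failed the basic
    tasks used to assess their computer skills, received the paper-based
    assessment; all others received the computer-based assessment.
    """
    if not ever_used_computer or not passed_basic_computer_tasks:
        return "PBA"  # literacy, numeracy, plus the reading components
    return "CBA"      # literacy, numeracy, plus problem solving in
                      # technology-rich environments
```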
Sampling
This is a sample survey with a cross-sectional design.
The frame was the 2011 Census and the National Household Survey (NHS). The 2011 Census was used for the general sample of adults aged 16 to 65, while the NHS was used for the Aboriginal and Immigrant supplementary samples. When the Census was used as a frame, only households that were not also selected for the NHS were eligible to be selected. However, some exceptions to this occurred in the territories where all households in a Census collection unit may have been selected for the NHS. In total, approximately 49,000 individuals were selected.
Sample selection occurred in up to three stages. In the first stage, geographical clusters were selected. These clusters were previously stratified into urban and rural strata. Subsequently, households were selected from the Census or NHS within each selected cluster. Then within each selected household, one individual was chosen to participate in this survey.
The selection of clusters and households was done by systematic probability proportional to size sampling. Within a household, one individual was selected at random.
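As an illustration of how clusters and households can be drawn, a minimal sketch of systematic probability-proportional-to-size selection, assuming a stratified, sorted frame and no unit so large that it would be selected with certainty:

```python
import numpy as np

def systematic_pps(sizes, n, rng=None):
    """Select n units by systematic PPS sampling (illustrative sketch).

    sizes : measure of size for each unit on the frame, shape (N,)
    n     : number of units to select
    Assumes no single size exceeds the sampling interval, so no unit
    can be selected more than once.
    """
    rng = rng or np.random.default_rng()
    cum = np.cumsum(np.asarray(sizes, dtype=float))
    interval = cum[-1] / n                # sampling interval
    start = rng.uniform(0.0, interval)    # single random start
    targets = start + interval * np.arange(n)
    return np.searchsorted(cum, targets)  # indices of the selected units
```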
Data sources
Data collection for this reference period: 2011-11-01 to 2012-06-30
Responding to this survey is voluntary.
Data are collected directly from survey respondents.
The interview was composed of four main components:
- The entry was used for the validation of the household address and the selection of the respondent
- The background questionnaire component
- The skills assessment component
- The exit component, which assigned the final outcome code.
The entry, the background questionnaire and the exit were administered by an interviewer in the respondent's household using a computer-assisted interview (CAI). The skills assessment component was self-administered, most often as a computer-based assessment (CBA) but, under certain circumstances, as a paper-based assessment (PBA).
The entry component could be answered by any member of the household. First, the address was confirmed to ensure that the household was indeed the one selected. A list of household members was collected and all eligible members were identified. Then one of the eligible household members was randomly selected to complete the rest of the interview. Proxy interviews were not accepted.
Error detection
PIAAC data were collected using a computer-assisted survey application. As such, much of the error detection and editing took place during collection. Values outside specified ranges were flagged by the Blaise application and validated by the interviewer, and the application automatically directed the flow of the questionnaire based on pre-arranged logic and the respondent's previous answers.
Once the data were collected and transmitted to head office, three phases of error detection were initiated. The first was a general clean-up of the data to accomplish the following goals: 1) remove duplicate records from the file, 2) verify the Background Questionnaire against the sample file, 3) verify the integrity of the status code, 4) identify missing records, and 5) create a response file.
The editing phase of the data processing done by the international consortium included a series of edit steps. First, a top-down flow edit cleaned up any paths that may have been mistakenly followed during the interview. This step was followed by consistency edits for certain key variables, ensuring concordance between variables such as age, year of immigration, number of years of formal education, age when the respondent took a given type of training, and age when the respondent completed his or her highest level of education.
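As an illustration of such a consistency edit, a minimal sketch using hypothetical field names and rules, not the consortium's actual edit specification:

```python
def consistency_flags(record: dict, collection_year: int = 2012) -> list:
    """Flag implausible combinations of key variables (illustrative rules)."""
    flags = []
    # Highest education cannot have been completed after the current age.
    if record["age_completed_highest_educ"] > record["age"]:
        flags.append("education_completed_after_current_age")
    # Year of immigration must fall between birth year and collection year.
    year_of_immigration = record.get("year_of_immigration")
    if year_of_immigration is not None:
        birth_year = collection_year - record["age"]
        if not birth_year <= year_of_immigration <= collection_year:
            flags.append("implausible_year_of_immigration")
    return flags
```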
Imputation
Imputation was minimal. Only the task language was imputed for respondents who did not complete the skill assessment component.
Estimation
Estimates are produced using weights attached to each sampled unit. The weight of a sampled unit indicates the number of units in the population that the unit represents. The weights were calculated in several steps (the first two steps are sketched in the formulas after this list):
1) An initial weight was calculated as the inverse of the probability of selecting a unit in the sample. The overall probability of selecting a given unit was equal to the product of its probabilities of being selected at each phase and at each stage of the selection process.
2) The weights were adjusted to account for non-response. This process consisted of redistributing the weights of the non-responding units to the responding units. It was conducted in four steps that took into account the information available about the eligibility status of non-responding households, whether the non-response was related to literacy, and the presence of a disability preventing participation in the survey.
3) Because of the overlap between the populations targeted by each selected sample, weights of the general sample and the various supplementary samples were integrated using a multiple-frame method.
4) Finally, the weights were calibrated so that selected totals produced using the survey data matched population totals from other sources.
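In notation, a minimal sketch of the first two steps, assuming the three-stage design described under Sampling (the subscripts are illustrative):

```latex
% Step 1: design weight as the inverse of the overall selection probability
w_i^{(0)} = \frac{1}{\pi_i}, \qquad
\pi_i = \pi_{\mathrm{cluster}(i)} \times \pi_{\mathrm{household}(i)\mid\mathrm{cluster}} \times \pi_{\mathrm{person}(i)\mid\mathrm{household}}

% Step 2: non-response adjustment within an adjustment class g
w_i^{(1)} = w_i^{(0)} \times \frac{\sum_{j \in s_g} w_j^{(0)}}{\sum_{j \in r_g} w_j^{(0)}}
```

where s_g and r_g denote, respectively, all selected units and all responding units in the non-response adjustment class containing unit i.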
The quality of the estimates was assessed using estimates of their coefficient of variation (CV). Given the complexity of the PIAAC survey design, CVs could not be calculated using a simple formula. Jackknife replicate weights were used to establish the CVs of the estimates.
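As an illustration, a minimal sketch of a jackknife CV calculation for a weighted mean, assuming a JK1-style variance factor of (R - 1)/R; the actual factor depends on the replication scheme documented for the survey:

```python
import numpy as np

def jackknife_cv(y, full_wt, rep_wts, factor=None):
    """Estimate the CV of a weighted mean from jackknife replicate weights.

    y        : respondent values, shape (n,)
    full_wt  : final survey weights, shape (n,)
    rep_wts  : replicate weights, shape (n, R)
    factor   : variance scaling constant; (R - 1) / R (a JK1-style
               assumption) is used when none is supplied.
    """
    rep_wts = np.asarray(rep_wts, dtype=float)
    n_reps = rep_wts.shape[1]
    if factor is None:
        factor = (n_reps - 1) / n_reps  # assumption: JK1-style factor
    full_est = np.average(y, weights=full_wt)
    rep_ests = np.array(
        [np.average(y, weights=rep_wts[:, r]) for r in range(n_reps)]
    )
    variance = factor * np.sum((rep_ests - full_est) ** 2)
    return np.sqrt(variance) / full_est  # CV expressed as a proportion
```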
Quality evaluation
The international study's guidelines were followed and supplemented by adherence to Statistics Canada's own internal policies and procedures.
The interviews were conducted in homes in a neutral, non-pressured manner. Interviewer training and supervision were provided, emphasizing the importance of precautions against non-response bias. Interviewers were specifically instructed to return to non-respondent households several times to obtain as many responses as possible. Their work was supervised using frequent quality checks, especially at the outset of data collection. About 10% of each interviewer's interviews were validated to ensure data quality.
As a condition of participation in the international study, Canada was required to capture and process files using procedures that ensured logical consistency and acceptable levels of data capture error.
Scoring of the psychometric assessment in the computer-based assessment (CBA) was done automatically by the computer. The quality and consistency of the scoring across countries were validated during the pilot study. Persons charged with scoring the paper assessment received intensive training on scoring open-ended responses using the PIAAC scoring manual. To help maintain scoring accuracy and comparability between countries, the PIAAC survey used an electronic bulletin board where countries could post their scoring questions and receive scoring decisions from the domain experts. This information could be seen by all participating countries, which could then adjust their scoring. To further ensure quality, monitoring of the scoring was done in two ways.
First, over 40% of the paper assessments were double scored. The goal in PIAAC scoring was to reach a within-country inter-rater reliability of 0.95 (95% agreement) across all items, with at least 85% agreement for each item. In fact, most of the within-country scoring reliabilities were above 95%. Second, the Consortium developed a cross-country reliability study in which a set of anchor booklets was used to check the consistency of scorers across countries and to ensure they were applying the same criteria when scoring the items. The anchor booklets consisted of a set of 180 "completed" English booklets that were scored and rescored by every country. Canada had a within-country agreement above 97% across items.
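The agreement statistic behind these targets is simple exact-match agreement; a minimal sketch (the function name is illustrative):

```python
def exact_agreement(scores_a, scores_b):
    """Proportion of responses scored identically by two scorers --
    the quantity PIAAC targets at 0.95 overall and 0.85 per item."""
    if len(scores_a) != len(scores_b):
        raise ValueError("score lists must be the same length")
    matches = sum(a == b for a, b in zip(scores_a, scores_b))
    return matches / len(scores_a)
```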
Coding
In addition to the standard quality control practices performed for all coding by Statistics Canada's Operations and Integration Division, a number of additional quality control measures mandated by the International PIAAC Consortium were carried out. This was to ensure that the coding was performed uniformly within and across countries and to an acceptable quality.
These procedures included the following: 50% of manually coded occupation and industry data was verified by a second coder, and the average error rate for manually coded data did not exceed 10% for codes at the four-digit level. Statistics Canada also checked the quality of the PIAAC coding of the respondent's highest educational level, occupation and industry against the distributions in the most recent Labour Force Survey and National Household Survey, and they were comparable.
Disclosure control
Statistics Canada is prohibited by law from releasing any information it collects which could identify any person, business, or organization, unless consent has been given by the respondent or as permitted by the Statistics Act. Various confidentiality rules are applied to all data that are released or published to prevent the publication or disclosure of any information deemed confidential. If necessary, data are suppressed to prevent direct or residual disclosure of identifiable data.
Revisions and seasonal adjustment
This methodology does not apply to this survey.
Data accuracy
The coverage of the Canadian population aged 16 to 65 by the 2011 Census, whose data were used as the survey frame in conjunction with National Household Survey data, is evaluated at 96.4% at the national level, ranging from 94.9% to 98.6% across the provinces and from 91.5% to 94.6% in the territories.
RESPONSE RATE:
The response rate is 58.3% at the national level, and varies between 50.7% and 63.9% at the provincial or territorial level.
Coefficient of variation (CV):
The quality of the estimates is measured using estimates of their coefficient of variation (CV). Jackknife replicate weights are used to calculate the CVs of the estimates. The following guidelines are suggested (a sketch applying them follows the list):
- If the CV is less than 16%, the estimate can be used without restriction.
- If the CV is between 16% and 33%, the estimate should be used with caution.
- If the CV is 33% or more, the estimate should not be released.
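A minimal sketch applying these guidelines, with the CV expressed as a proportion (the function name and labels are illustrative):

```python
def release_guideline(cv: float) -> str:
    """Map an estimate's CV to the suggested release guideline."""
    if cv < 0.16:
        return "use without restriction"
    if cv < 0.33:
        return "use with caution"
    return "should not be released"
```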
NON-SAMPLING ERROR
Over a large number of observations, randomly occurring non-sampling errors will have little effect on estimates derived from the survey. However, errors occurring systematically will contribute to biases in the survey estimates. Considerable time and effort were devoted to reducing non-sampling errors in the survey. Quality assurance measures were implemented at each step of the data collection and processing cycle to monitor the quality of the data. These measures included the use of highly skilled interviewers, extensive training of interviewers with respect to the survey procedures and questionnaire, observation of interviewers to detect problems of questionnaire design or misunderstanding of instructions, procedures to ensure that data capture errors were minimized, and coding and edit quality checks to verify the processing logic.
NON-RESPONSE BIAS
A major source of non-sampling error in surveys is the effect of non-response on the survey results. The extent of non-response varies from partial non-response (failure to answer one or some questions) to total non-response. Total non-response occurred when the interviewer was unable to contact the respondent, no member of the household was able to provide the information, or the respondent refused to participate in the survey. The national non-response rate for the PIAAC was around 38%. Analysis of the characteristics of PIAAC non-respondents suggests that they are concentrated in certain groups, meaning that the non-response does not appear to have been random. Non-response weighting adjustments were performed to compensate for total non-response. These adjustments were designed to reduce the non-response bias as much as possible, using, among other things, variables that were linked to the response probability. Partial non-response occurred, in most cases, when the respondent did not understand or misinterpreted a question, refused to answer a question, or could not recall the requested information. Generally, the extent of partial non-response in the PIAAC was small.
COVERAGE ERROR
The use of the 2011 Census ensured that the PIAAC frame was as inclusive as possible and that any exclusions could be effectively taken into account in the overall survey design.
OTHER NON-SAMPLING ERRORS
A number of other potential sources of non-sampling error that are unique to the PIAAC deserve comment. First, some respondents may have found the test portion of the study intimidating, and this may have had a negative effect on their performance. Unlike "usual" surveys, the PIAAC test items have "right" and "wrong" answers. Also, for many respondents this would have been their first exposure to a "test" environment in a considerable number of years. Further, although interviewers did not enforce a time limit for answering questions, the reality of having someone watching and waiting may have imposed an unintentional time pressure. It is recognized, therefore, that even though items were chosen to closely reflect everyday tasks, the test responses might not fully reveal the literacy capabilities of respondents due to the testing environment. Further, although the test nature of the study called for respondents to perform the activities completely independently of others, situations in the real world often enable persons to sort through printed materials with family, friends and associates. It could be, therefore, that the skills measured by the survey do not reflect the full range of some respondents' abilities in a more natural setting.
Another potential source of non-sampling error for the PIAAC relates to the scoring of the test items, particularly those that were scored on a scale (e.g., items that required respondents to write). Special efforts, such as centralizing the scoring and sample verification, were made to minimize the extent of scoring errors. As mentioned previously, a large proportion of the scoring was done by computer, which substantially improved the quality of the scoring.