International Survey of Reading Skills (ISRS)
Detailed information for October 2004 to February 2005
The International Survey of Reading Skills (ISRS) was designed to characterize the reading profiles and learning needs of demographically different groups of low-skilled Canadian adults by administering a battery of clinical reading tests to a sample of adults who had previously participated in the 2003 International Adult Literacy and Skills Survey (IALSS, record number 4406). The survey aims to help educators tailor the content and modalities of their instruction to the needs of low-skilled adults.
Data release - January 9, 2008
The International Survey of Reading Skills (ISRS) was a follow-up to the International Adult Literacy and Skills Survey (IALSS, record number 4406) conducted by Statistics Canada in March 2003. The ISRS was part of an international survey undertaken by Statistics Canada in partnership with Human Resources and Skills Development Canada (HRSDC), the National Center for Education Statistics (NCES) and the Educational Testing Service (ETS).
The survey was designed to measure competency in particular skills thought to be important to better understanding the reading ability of adults with low literacy skills. More precisely, the goal of the survey was to better understand the relationship between underlying reading skills, such as word recognition, vocabulary and spelling, and performance on the IALSS prose and document literacy scales. Other objectives were to:
- Measure the underlying reading skills described above and show how the results are distributed over the population
- Group low-literacy adults according to their underlying reading skills in order to identify:
  - The size of each group
  - The common socio-economic characteristics of each group
  - An appropriate curriculum for each group based upon their socio-economic characteristics and their underlying reading skills
- Compare the underlying reading skills results of low literacy and high literacy adults
- Identify the relationship between these underlying reading skills and other characteristics of adults with low literacy
Users of the data include federal and provincial governments, academics, literacy and skills development professionals, the media and interested members of the public. The data are used to inform policy decisions, to help allocate resources effectively where needed, and to inform decisions on the composition and content of remedial skill development courses and adult education.
- Education, training and learning
Data sources and methodology
This survey was a follow-up to the IALSS study. The target population was adults between 16 and 65 years of age residing in the ten Canadian provinces at the time of IALSS data collection (June 2003), excluding institutional residents, members of the armed forces, and individuals living on Indian Reserves or in remote regions.
Residents of sparsely populated regions were also excluded from the survey population for operational reasons. Even when combined with the first exclusions listed above, this represented no more than 2% of the total population in the ten provinces.
The survey instruments included a background questionnaire and a battery of tests that measured two aspects of adult literacy (prose and document literacy) and five reading-related skills: word recognition, vocabulary, listening comprehension, general reading processing skills and spelling.
The background questionnaire consisted of several information modules required to relate the tested skills to the respondents' economic and social situations. More precisely, it contained questions on education, the language used by respondents in various situations, respondents' labour force status, and health and disabilities. The design of the background questionnaire involved several US and Canadian research teams. Most of the questions used in the survey were existing questions from US, Canadian or other international surveys.
Prose literacy skills and document literacy skills were assessed with a set of tasks designed to simulate day-to-day reading and writing activities, such as signing a library card, reading an advertisement, and following a recipe. The tasks were arranged in two booklets: a Core Task Booklet containing nine simple tasks, and a Main Task Booklet containing 31 tasks divided into two blocks. Respondents were asked to complete the Core Task Booklet first, and if they had at least three correct answers, they were asked to complete the Main Task Booklet. Using a 2-PL IRT model, respondents on the final data file were assigned plausible proficiency values ranging from 0 to 500 in each of the measured domains.
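As an illustration of the scoring model mentioned above, the two-parameter logistic (2-PL) IRT model expresses the probability of a correct response as a logistic function of the gap between a respondent's latent proficiency and an item's difficulty, scaled by the item's discrimination. This is a minimal sketch of the general 2-PL form; the function and parameter names are illustrative and not taken from the survey documentation, and the mapping of latent proficiency onto the 0-500 reporting scale is a separate linear transformation not shown here.

```python
import math

def irt_2pl(theta, a, b):
    """Probability of a correct response under a 2-PL IRT model.

    theta -- respondent proficiency on the latent (logit) scale
    a     -- item discrimination (steepness of the response curve)
    b     -- item difficulty (proficiency at which P(correct) = 0.5)
    """
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

When proficiency equals the item difficulty, the probability of a correct response is exactly one half; higher discrimination makes the curve steeper around that point.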
The task booklets were designed by the Educational Testing Service (ETS), and the task items included in them were chosen from the IALSS 2003 and IALS 1994 surveys; the same booklets as the ISRS ones had been used in a similar survey implemented in the United States in 2003. These task items were then adapted for the Canadian survey. The task items in IALSS and IALS were created by a group of experts from around the world. The tasks were first tested in different countries in pilot surveys, and the final items were then selected based on the stability of the parameters. Additional information on the prose and document scales, and examples of the types of tasks found in these booklets, are included in Annex B of the free publication "Building on our Competencies: Canadian Results of the International Adult Literacy and Skills Survey" (available through the online catalogue number 89-617-XIE or through the link "Publications" in the side bar menu above).
This is a sample survey with a cross-sectional design.
A stratified, multi-phase and multi-stage sample design was used to select the ISRS sample. As ISRS is a follow-up of IALSS, its first-phase sample design is the same as that used for IALSS.
The IALSS used a cross-sectional multi-stage sample design. The sampling unit was the household. The sampling frame was the 2001 Census of Population (reference date: May 15th). A stratified multi-stage probability sample design was used to select the sample from the census frame.
Each province was divided into two strata: an urban stratum consisting of larger urban centres, and a rural stratum consisting of the remaining smaller urban centres and rural areas.
Within the urban strata, two stages of sampling were used. In the first stage of sampling, dwellings were selected systematically with probability proportional to household size. In the second stage, one individual was randomly selected from the list of eligible household members residing in the selected dwelling at the time of data collection.
In order to reduce collection costs, three stages of sampling were used in the rural strata. The rural stratum of each province was partitioned into smaller geographic areas called clusters. The first stage of sampling consisted of selecting clusters with probability proportional to population size. The second stage consisted of selecting dwellings within each selected cluster. As in the urban strata, the final stage consisted of the random selection of one eligible individual from each selected dwelling.
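Both the urban and rural designs above rely on probability-proportional-to-size (PPS) selection at their first stage. A common way to implement PPS is systematic selection along the cumulative size measure. The sketch below shows that general technique under simplifying assumptions (it is not Statistics Canada's production sampling system; function and variable names are illustrative, and a unit larger than the sampling step can be selected more than once):

```python
import random

def pps_systematic(units, sizes, n):
    """Systematic PPS selection: pick n units with selection
    probability proportional to each unit's size measure."""
    total = sum(sizes)
    step = total / n                      # sampling interval
    start = random.uniform(0, step)       # random start in [0, step)
    targets = [start + i * step for i in range(n)]
    selected, cum, i = [], 0.0, 0
    for unit, size in zip(units, sizes):
        cum += size                       # cumulative size boundary
        while i < n and targets[i] < cum: # every target in this unit's
            selected.append(unit)         # interval selects the unit
            i += 1
    return selected
```

In a real design, a very large cluster would typically be placed in a take-all stratum rather than allowed to be hit multiple times.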
A base sample of 16,000 dwellings was selected to cover the general population. An additional 24,000 dwellings were selected in supplementary samples targeting the following subpopulations: Francophones in New Brunswick, Ontario and Manitoba; Anglophones in Québec; immigrants in Québec, Ontario, Alberta and British Columbia; youth (16-24) in Québec, youth (16-29) in British Columbia; Aboriginal persons in urban Manitoba and Saskatchewan; Aboriginal persons and non-Aboriginal persons in Yukon, the Northwest Territories and Nunavut.
The ISRS was designed to produce Canada-level estimates for the two official languages, English and French. The sample was drawn from IALSS respondents who, during IALSS collection, agreed to participate in a follow-up survey. Since the focus of the study is the adult population with low literacy, the ISRS sample mostly targets individuals at IALSS prose literacy Level 1. Units were then selected randomly within strata defined by the cross-classification of language (English, French), province, subpopulation, literacy level estimated from IALSS, and age group. This was done to respect the IALSS design as much as possible. Afterwards, the sample was reduced by 8 percent to cut field expenses; notably, a number of remote and expensive interviewer assignments were eliminated. Ideally, this reduction would have been incorporated into the drawing of the sample, but this was not possible due to time and operational constraints.
Data collection for this reference period: 2004-10-01 to 2005-02-28
Responding to this survey is voluntary.
Data are collected directly from survey respondents.
The ISRS sample was drawn from IALSS respondents who, during IALSS collection, agreed to participate in a follow-up survey. Fifty-seven percent of IALSS respondents agreed to be re-contacted.
Data were collected in the home of each respondent. Whenever possible, contact was first made by phone to confirm the respondent's address and make an appointment to visit them at their convenience. Proxy responses were not permitted.
First, respondents were asked to complete a background questionnaire, which consisted of several information modules required to relate the tested skills to the respondents' economic and social situations. Specifically, interviewers asked respondents a series of oral questions about their education, the language they used in various situations, their labour force status, and their health and disabilities. The median time required to administer the background questionnaire was about 30 minutes.
Once the background questionnaire was completed, respondents were given a short booklet of nine relatively simple reading tasks (Core Task Booklet). Respondents who answered at least three of the nine core tasks correctly were given the much larger, more difficult Main Task Booklet containing 31 tasks. The booklet tests were not timed, and respondents were encouraged to attempt every item. Respondents were given every opportunity to demonstrate their skills, even if they were minimal.
Next, all respondents, regardless of their score on the Core Tasks, were asked to complete a series of additional exercises. Designed to measure reading-related skills, the tests were administered as follows. The first exercise was the abridged Peabody Picture Vocabulary Test (PPVT), which required respondents to identify which of four different images corresponded to a word spoken by the interviewer. Next came the RAN test, in which respondents were asked to read a series of random letters as quickly as possible. The third exercise involved reading a list of real words (TOWRE-A), followed by a list of pseudo-words (TOWRE-B), as quickly as possible. The time limit for each word list was 60 seconds. The fourth exercise was PhonePass, which contained three different tasks: repetition of simple sentences, a set of short-answer questions, and reading of simple sentences. The fifth test involved repeating a series of digits in order and another series of digits in the opposite sequence to how they were read. The final exercise was a spelling test.
The ISRS was a paper-and-pencil survey, but the RAN, TOWRE, PhonePass and Digit Span tests were recorded over the telephone. Ordinate Corp. developed the telephone recording system. The tests were administered in such a way that the interviewer, using a special phone, called the recording system and the respondent completed the test on the telephone while the interviewer listened.
The background questionnaires were electronically scanned and the answers were captured by an automated system. Twenty percent of the questionnaires were verified to ensure that this data capture method was performed correctly. In addition, all respondent identifiers were checked to ensure that they were captured correctly. Ordinate Corp. sent Statistics Canada electronic sound files, which were then used by staff to score the tests. All the scoring sheets for the remaining tests were electronically scanned, and all of the captured data were verified to ensure data quality.
View the Questionnaire(s) and reporting guide(s).
No imputation is done for this statistical program.
ISRS population weights were derived in three steps: calculation of the design weights, weighting adjustments for non-response, and calibration. The design weights were defined as the inverse of the probabilities of selection. As the ISRS sample was selected from IALSS respondents, the IALSS weights before calibration were taken as the initial ISRS weights. These weights had already been adjusted for IALSS sample selection, IALSS non-response and the overlapping of the samples. Also, as the ISRS sample was selected specifically from IALSS respondents who agreed to participate in a follow-up survey, the initial ISRS weights were adjusted to compensate for refusals and non-responses to the follow-up question. They were then inflated to take the ISRS sample selection into account, by a factor equal to the inverse of the probabilities of selection. Finally, they were adjusted for the sample cuts made after ISRS sample selection. The adjusted weights are the design weights.
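The chaining of adjustments in the design-weight step can be sketched as a simple composition of factors. This is an illustrative simplification, not the production weighting system: the function and parameter names are assumptions, and the follow-up refusal adjustment is collapsed into a single response-rate factor.

```python
def design_weight(initial_weight, follow_up_response_rate, selection_prob):
    """Chain the design-weight adjustments: start from the initial
    (IALSS pre-calibration) weight, compensate for refusals of the
    follow-up question, then inflate by the inverse of the ISRS
    selection probability."""
    w = initial_weight / follow_up_response_rate   # follow-up refusal adjustment
    return w / selection_prob                      # inverse probability of selection
```

For example, an initial weight of 100, a 50% follow-up acceptance rate and a 1-in-4 ISRS selection probability combine to a design weight of 800.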
The weighting adjustments for non-response were calculated by first categorizing the sample units as either respondents or non-respondents. A logistic regression was then used to identify variables that influence response, obtain predicted probabilities of responding, and form weighting classes that are as homogeneous as possible by grouping similar estimated response probabilities together. Finally, the design weight of each respondent was adjusted by the inverse of the weighted response rate of the weighting class to which the respondent belongs, so that respondents represent all individuals.
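The final step of the non-response adjustment can be sketched as follows. The logistic-regression step that forms the classes is omitted; the classes are taken as given, and the names are illustrative assumptions rather than the actual system.

```python
def nonresponse_adjust(design_weights, responded, classes):
    """Within each weighting class, multiply each respondent's design
    weight by the inverse of the class's weighted response rate, so
    respondents also represent the class's non-respondents."""
    tot, resp = {}, {}
    for w, r, c in zip(design_weights, responded, classes):
        tot[c] = tot.get(c, 0.0) + w          # total weight in class
        if r:
            resp[c] = resp.get(c, 0.0) + w    # responding weight in class
    adjusted = []
    for w, r, c in zip(design_weights, responded, classes):
        if r:
            adjusted.append(w / (resp[c] / tot[c]))
        else:
            adjusted.append(0.0)              # non-respondents drop out
    return adjusted
```

Note that within each class the adjusted respondent weights sum to the class's original total weight, which is the property the adjustment is designed to preserve.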
Finally, the weights were calibrated to age-sex group population totals, where the age groups were defined as 16-25, 26-35, 36-45, 46-55 and 56-65 years. These totals were obtained from the 2001 Census of Population and Housing, inflated according to the growth measured between the provincial age and sex totals from the Census and the corresponding official demographic counts as of June 21, 2003 (the midpoint of IALSS collection). The benchmarks used for ISRS include only the 10 provinces. It was not possible for ISRS to use more benchmarks because the sample size was too small.
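In its simplest form, calibration to group totals is post-stratification: within each age-sex group, weights are scaled so the weighted count matches the benchmark population total. This sketch shows that basic form only (names and structure are illustrative assumptions; production calibration systems handle multiple overlapping benchmark sets):

```python
def calibrate(weights, groups, benchmarks):
    """Post-stratification: scale weights within each group so that
    the weighted total equals that group's benchmark population count."""
    totals = {}
    for w, g in zip(weights, groups):
        totals[g] = totals.get(g, 0.0) + w    # current weighted total per group
    return [w * benchmarks[g] / totals[g] for w, g in zip(weights, groups)]
```

After calibration, summing the weights within any group reproduces the benchmark exactly.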
To ensure high quality data, International Survey Administration Guidelines were followed and supplemented by adherence to Statistics Canada's own internal policies and procedures.
The interviews were conducted in homes in a neutral, non-pressured manner. Interviewer training and supervision were provided, emphasizing the importance of precautions against non-response bias. Interviewers were specifically instructed to return several times to non-respondent households to obtain as many responses as possible. Their work was supervised through frequent quality checks, especially at the outset of data collection.
As a condition of participation in the international study, it was required to capture and process files using procedures that ensured logical consistency and acceptable levels of data capture error.
Persons charged with scoring received intensive training in scoring responses to the open-ended items. To ensure quality, monitoring was done in two ways. First, at least 20% of the tasks were re-scored. The two sets of scores needed to match with at least 95% accuracy before the next step of processing could begin. Second, an inter-language re-score was performed: francophone scorers re-scored 20% of the English questionnaires and vice versa in order to ensure consistency. Strict accuracy was demanded: a 90% correspondence was required before the scores were deemed acceptable.
The quality of the data was also evaluated by comparing the results with those of the International Adult Literacy and Skills Survey (IALSS) and by running analyses on key variables to ensure that observed correlations were well founded.
Statistics Canada is prohibited by law from releasing any information it collects that could identify any person, business, or organization, unless consent has been given by the respondent or as permitted by the Statistics Act. Various confidentiality rules are applied to all data that are released or published to prevent the publication or disclosure of any information deemed confidential. If necessary, data are suppressed to prevent direct or residual disclosure of identifiable data.
Revisions and seasonal adjustment
This methodology does not apply to this survey.
It is estimated that the coverage for the survey was 98.5% nationally.
The response rate was 64%.
Coefficient of Variation (CV):
The quality of the estimates is assessed using estimates of their CVs. Jackknife replicate weights are used to establish the CVs of the estimates. The following guidelines are recommended:
- If the CV is less than 16%, the estimate can be used without restriction;
- If the CV is between 16% and 33%, the estimate should be used with caution;
- If the CV is 33% or more, or if the estimate is based on fewer than 30 observations, then the estimate should not be released.
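The jackknife CV estimation and the release guidelines above can be sketched together. The variance formula shown is one common delete-one jackknife form; the exact scaling factor used for the ISRS replicate weights may differ, and all names here are illustrative assumptions.

```python
def jackknife_cv(estimate, replicate_estimates):
    """CV (in percent) from jackknife replicates: variance is the
    scaled sum of squared deviations of the replicate estimates from
    the full-sample estimate; CV = 100 * standard error / estimate."""
    r = len(replicate_estimates)
    var = (r - 1) / r * sum((e - estimate) ** 2 for e in replicate_estimates)
    return 100 * var ** 0.5 / estimate

def release_rule(cv_percent, n_obs):
    """Apply the CV release guidelines listed above."""
    if n_obs < 30 or cv_percent >= 33:
        return "do not release"
    if cv_percent >= 16:
        return "use with caution"
    return "unrestricted"
```

For instance, an estimate with a CV of 20% based on 100 observations would be flagged for cautious use, while any estimate based on fewer than 30 observations would be suppressed regardless of its CV.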
The table below shows the CVs for the percent of the population at each prose proficiency level: