Biannual Livestock Survey
Detailed information for July 1, 2019
Frequency: 2 times per year
The purpose of this survey is to collect up-to-date information on the number of livestock on agricultural operations in Canada. The data are used by agricultural industry analysts and producers to make production and marketing decisions, and by government analysts to monitor the livestock industry and develop agricultural policies in Canada.
Data release - August 22, 2019
The Livestock Survey is administered as part of the Integrated Business Statistics Program (IBSP). The IBSP has been designed to integrate approximately 200 separate business surveys into a single master survey program. It aims to collect industry and product detail at the provincial level while minimizing overlap between different survey questionnaires. The redesigned business survey questionnaires have a consistent look, structure and content. The integrated approach makes reporting easier for firms operating in different industries because they can provide similar information for each branch operation; they avoid having to respond to questionnaires that differ for each industry in format, wording and even concepts. The combined results produce more coherent and accurate statistics on the economy.
The Livestock Survey consists of two survey occasions designed to provide inventories of major livestock species on Canadian farms on two specific dates. The January 1 and July 1 surveys collect data related to cattle, hogs and sheep.
The principal data releases include inventories and summarized supply-disposition tables. These data are used by agricultural industry analysts and producers as they make production and marketing decisions and by government analysts to monitor the livestock industry or develop agricultural policies in Canada. The data are used in the calculation of farm income estimates and flow to the Canadian System of National Accounts. Further, the data are used in the calculation of net farm income projections, produced by Agriculture and Agri-Food Canada in co-operation with Statistics Canada and the provinces.
Reference period: January 1, July 1
Collection period: June, December
- Livestock and aquaculture
Data sources and methodology
The target population for the survey consists of all Canadian agriculture operations that have a livestock inventory during the reference year.
The observed population consists of those establishments in the target population for which business information is available on Statistics Canada's Business Register excluding specific farms such as institutional farms, community pastures and farms on First Nations reserves.
The questionnaire was developed by subject matter experts through consultation with industry experts. The Agriculture Division, the Collection, Planning and Research Division, the Operations and Integration Division and the Enterprise Statistics Division of Statistics Canada conduct in-house testing for flow and consistency.
Subject matter experts may change, add or remove questions. This typically happens because of changes in market trends or because of information in debriefing reports from field staff.
New questions were pre-tested in the field in 2016. This included testing the cognitive process of respondents in answering questions and other tests to obtain feedback for the design of the questionnaire.
This is a sample survey with a cross-sectional design.
The stratification and allocation are multivariate, by type and size of livestock operation. The sample is selected using a stratified simple random sampling method.
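As a sketch, the stratified simple random sampling described above can be illustrated as follows. The frame, stratum labels and allocations are entirely hypothetical, not the survey's actual design:

```python
import random

def stratified_srs(frame, strata_key, allocations, seed=0):
    """Select a stratified simple random sample from a survey frame.

    frame: list of unit dicts; strata_key: function mapping a unit to its
    stratum label; allocations: dict of stratum label -> sample size n_h.
    All names here are illustrative, not the survey's actual system.
    """
    rng = random.Random(seed)
    # Group frame units into strata (e.g. by type and size of operation).
    strata = {}
    for unit in frame:
        strata.setdefault(strata_key(unit), []).append(unit)
    sample = []
    for label, units in strata.items():
        n_h = min(allocations.get(label, 0), len(units))
        # Simple random sampling without replacement within each stratum.
        sample.extend(rng.sample(units, n_h))
    return sample

# Hypothetical frame: operations stratified by a herd-size class.
frame = [{"id": i, "size": "large" if i < 4 else "small"} for i in range(20)]
sample = stratified_srs(frame, lambda u: u["size"], {"large": 4, "small": 3})
```

Allocating a sample size per stratum (rather than sampling the frame as a whole) lets large, influential operations be sampled at a higher rate than small ones.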
Data collection for this reference period: 2019-06-03 to 2019-06-24
Responding to this survey is mandatory.
Data are collected directly from survey respondents and extracted from administrative files.
Respondents are contacted primarily by email or letter and given an access code for the electronic questionnaire for the survey, which can be responded to in either official language. Respondents also have the option of receiving a paper questionnaire, replying by telephone interview or using other electronic filing methods. Follow-up is conducted via email, telephone or fax and dynamically prioritized on the basis of weighted response rates and for data validation on discrepancies from predicted values.
Administrative livestock slaughter data obtained under section 13 of the Statistics Act, as well as publicly available livestock slaughter data, are provided by Agriculture and Agri-Food Canada. The slaughter data are integrated with survey data and other administrative data to create summarized supply-disposition tables.
Data integration combines data from multiple data sources including survey data collected from respondents, administrative data or other forms of auxiliary data when applicable. During the data integration process, data are imported, transformed, validated, aggregated and linked from the different data source providers into the formats, structures and levels required for IBSP processing. Administrative data are used as an auxiliary source of data for editing and imputation when respondent data is not available.
During analysis of the Biannual Livestock Survey, survey data collected from respondents and administrative data are integrated to create summarized supply-disposition tables.
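The supply-disposition tables rest on a simple accounting identity: supply (opening inventory plus births and imports) minus disposition (slaughter, deaths and exports) gives the closing inventory. A minimal sketch, using entirely hypothetical figures:

```python
def ending_inventory(begin, births, imports, slaughter, deaths, exports):
    """Supply-disposition accounting identity (illustrative only):
    closing inventory = supply - disposition."""
    supply = begin + births + imports          # what entered the herd
    disposition = slaughter + deaths + exports # what left the herd
    return supply - disposition

# Hypothetical provincial cattle figures, thousands of head.
close = ending_inventory(begin=1200, births=310, imports=25,
                         slaughter=280, deaths=15, exports=40)
```

Because every term in the identity must balance, a surveyed inventory that diverges from the implied closing figure signals a reporting or coverage problem in one of the sources.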
View the Questionnaire(s) and reporting guide(s).
Error detection is an integral part of both collection and data processing activities. Automated edits are applied to data records during collection to identify reporting and capture errors. These edits flag potential errors based on year-over-year changes in key variables, totals, and ratios that exceed tolerance thresholds, and identify inconsistencies in the collected data (e.g. a total variable does not equal the sum of its parts).

During data processing, other edits are used to automatically detect errors or inconsistencies that remain in the data following collection. These include value edits (e.g. Value > 0, Value > -500, Value = 0), linear equality edits (e.g. Value1 + Value2 = TotalValue), linear inequality edits (e.g. Value1 ≤ Value2), and equivalency edits (e.g. Value1 = Value2). When errors are found, they can be corrected through the failed-edit follow-up process during collection or via imputation.

Extreme values are also flagged as outliers, using automated methods based on the distribution of the collected information. Following their detection, these values are reviewed in order to assess their reliability, and manual review of other units may identify additional outliers. Outliers are excluded from the calculation of ratios and trends used for imputation, and from donor imputation. In general, every effort is made to minimize the non-sampling errors of omission, duplication, misclassification, reporting, and processing.
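The edit types described above can be sketched as simple rules applied to a collected record. Field names and tolerance thresholds below are illustrative, not the survey's actual edits:

```python
def run_edits(record):
    """Apply illustrative automated edits to one collected record.
    Returns a list of failed-edit messages (empty if the record passes)."""
    failures = []
    # Value edit: inventories cannot be negative.
    for field in ("cows", "calves", "total_cattle"):
        if record[field] < 0:
            failures.append(f"value edit failed: {field} < 0")
    # Linear equality edit: a total must equal the sum of its parts.
    if record["cows"] + record["calves"] != record["total_cattle"]:
        failures.append("equality edit failed: cows + calves != total_cattle")
    # Ratio edit: flag year-over-year changes outside a tolerance band.
    prev = record["total_cattle_prev"]
    if prev > 0 and not 0.5 <= record["total_cattle"] / prev <= 2.0:
        failures.append("ratio edit failed: year-over-year change out of bounds")
    return failures

record = {"cows": 60, "calves": 50, "total_cattle": 100,
          "total_cattle_prev": 105}
failures = run_edits(record)
```

Here the record fails the equality edit (60 + 50 ≠ 100), which would route it to failed-edit follow-up or imputation.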
When non-response occurs, when respondents do not completely answer the questionnaire, or when reported data are considered incorrect during the error detection steps, imputation is used to fill in the missing information and modify the incorrect information. Many methods of imputation may be used to complete a questionnaire, including manual changes made by an analyst. The automated, statistical techniques used to impute the missing data include: deterministic imputation, replacement using historical data (with a trend calculated, when appropriate), replacement using auxiliary information available from other sources, replacement based on known data relationships for the sample unit, and replacement using data from a similar unit in the sample (known as donor imputation). Usually, key variables are imputed first and are used as anchors in subsequent steps to impute other, related, variables.
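The imputation hierarchy described above, historical data adjusted by a trend, then donor imputation, can be sketched as follows. All inputs are hypothetical:

```python
def impute(record, history, donors):
    """Fill a missing inventory value using, in order: historical data
    with a trend, then donor imputation. Inputs are illustrative.

    history: (the unit's previous value, stratum-level trend ratio)
    donors: values reported by similar responding units in the stratum.
    """
    if record.get("inventory") is not None:
        return record["inventory"], "reported"
    prev, trend = history
    if prev is not None:
        # Historical imputation: carry the unit's past value forward,
        # adjusted by the trend observed among responding units.
        return round(prev * trend), "historical"
    if donors:
        # Donor imputation: borrow the value of a similar unit
        # (here, simply the first donor, for illustration).
        return donors[0], "donor"
    raise ValueError("no imputation source available")

value, method = impute({"inventory": None}, history=(400, 1.05), donors=[380])
```

Imputing key variables first, as the text notes, matters because they serve as anchors here: once a total inventory is imputed, related breakdown variables can be filled consistently with it.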
Imputation generates a complete and coherent micro data file that covers all survey variables.
The sample used for estimation comes from a single-phase sampling process. An initial sampling weight (the design weight) is calculated for each unit of the survey; it is simply the inverse of the unit's probability of selection, conditional on the realized sample size. The weight calculated for each sampling unit indicates how many other units it represents. Final weights are one or greater: "take-all" (also called "must-take") sampling units have sampling weights of one and represent only themselves.
Estimation of totals is done by simple aggregation of the weighted values of all estimation units that are found in the domain of estimation. Estimates are computed for several domains of estimation such as industrial groups and provinces/territories, based on the most recent classification information available for the estimation unit and the survey reference period. It should be noted that this classification information may differ from the original sampling classification since records may have changed in size, industry or location. Changes in classification are reflected immediately in the estimates.
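The weighted aggregation described above can be sketched as follows. The weights and records are illustrative microdata, not survey outputs; a weight of 12.5 means the sampled unit stands in for 12.5 frame units:

```python
def estimate_total(sample, domain, value_key="hogs"):
    """Estimate a domain total as the sum of weight * value over all
    sampled units classified to the estimation domain."""
    return sum(u["weight"] * u[value_key]
               for u in sample if u["province"] == domain)

# Hypothetical weighted sample records.
sample = [
    {"province": "MB", "weight": 1.0,  "hogs": 5000},  # take-all unit
    {"province": "MB", "weight": 12.5, "hogs": 800},   # represents 12.5 farms
    {"province": "ON", "weight": 8.0,  "hogs": 1200},
]
mb_total = estimate_total(sample, "MB")
```

Because each unit's domain is taken from its most recent classification, a record reclassified to a different province or industry contributes to the new domain's total immediately, as the text notes.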
In the case of the ineligible for sampling portion (also called take-none portion) of the target population, modeling using Census of Agriculture data is done in order to create data for all requested variables for each unit in the take-none portion. These are also simply aggregated to produce the estimate. The overall estimate includes the estimates from both the surveyed portion and the take-none portion.
Prior to the data release, combined survey results are analyzed for comparability; in general, this includes a detailed review of: individual responses (especially for the largest companies), general economic conditions, coherence with results from related industry indicators, historical trends, and information from other external sources (e.g. associations, trade publications, newspaper articles).
Biological factors affecting livestock are used as a guide when evaluating the data or comparing to other data sets. A primary tool in the evaluation and final determination of the data involves supply-demand analysis and survey-based ratios that track the supply and demand of the particular type of livestock by province over time.
Statistics Canada is prohibited by law from releasing any information it collects that could identify any person, business, or organization, unless consent has been given by the respondent or as permitted by the Statistics Act. Various confidentiality rules are applied to all data that are released or published to prevent the publication or disclosure of any information deemed confidential. If necessary, data are suppressed to prevent direct or residual disclosure of identifiable data.
In order to prevent any data disclosure, confidentiality analysis is done to detect primary suppressions (direct disclosure) as well as secondary suppressions (residual disclosure). Direct disclosure occurs when the value in a tabulation cell is composed of, or dominated by, a few enterprises; residual disclosure occurs when confidential information can be derived indirectly by piecing together information from different sources or data series.
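One common way to operationalize the "dominated by a few enterprises" test is an (n, k) dominance rule. The thresholds below are illustrative only; the actual confidentiality rules applied to this survey are not published:

```python
def is_sensitive(contributions, n=2, k=0.85):
    """Illustrative (n, k) dominance rule: a tabulation cell is sensitive
    if its top n contributors account for more than proportion k of the
    cell total, in which case it would be suppressed."""
    total = sum(contributions)
    if total == 0:
        return False
    top = sorted(contributions, reverse=True)[:n]
    return sum(top) / total > k

# One large enterprise dominates this hypothetical cell.
dominated = is_sensitive([9000, 300, 250, 200])
# Contributions here are evenly spread, so the cell can be published.
balanced = is_sensitive([400, 380, 360, 350])
```

Secondary suppression then removes additional cells so that a suppressed cell cannot be recovered by subtracting published cells from a published total.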
Revisions and seasonal adjustment
Once every five years, the published livestock data are aligned with the results of the Census of Agriculture. Due to conceptual differences between the datasets, the match is not normally one-to-one. For instance, the 2016 Census was conducted on May 10, while the 2016 livestock statistics referred to July 1. Any adjustments made to the data during the Census year are then smoothed over the historical five-year period between Censuses. The impact of the revisions is normally less than 5%; however, for specific livestock in certain provinces, the impact can be higher.
All surveys are subject to sampling and non-sampling errors. Sampling error occurs because population estimates are derived from a sample of the population rather than the entire population. Non-sampling error is not related to sampling and may occur for various reasons during the collection and processing of data. For example, non-response is an important source of non-sampling error. Under or over-coverage of the population, differences in the interpretations of questions and mistakes in recording, coding and processing data are other examples of non-sampling errors. To the maximum extent possible, these errors are minimized through careful design of the survey questionnaire, verification of the survey data, and follow-up with respondents when needed to maximize response rates.
Measures of sampling error are calculated for each estimate. When non-response occurs, it is also taken into account, and the quality rating is lowered in proportion to its importance to the estimate. Other indicators of quality, such as the response rate, are also provided. The sampling error and the non-response rate are combined into a single quality rating code, using letters that range from A to F, where A means the data are of excellent quality and F means they are unreliable. These quality rating codes can be requested and should always be taken into consideration.
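A sketch of how sampling error might map to a letter code follows. The coefficient-of-variation bands below follow a convention used in some Statistics Canada products, but they are illustrative here, and the actual code also factors in non-response as described above:

```python
def quality_rating(cv):
    """Map a coefficient of variation (standard error relative to the
    estimate) to a letter quality code. Bands are illustrative."""
    bands = [(0.05, "A"),  # excellent
             (0.10, "B"),  # very good
             (0.15, "C"),  # good
             (0.25, "D"),  # acceptable
             (0.35, "E")]  # use with caution
    for upper, letter in bands:
        if cv < upper:
            return letter
    return "F"  # unreliable: CV of 35% or more

rating = quality_rating(0.08)  # an estimate with an 8% CV
```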