Report of Crushing Operations

Detailed information for November 2000






Data release - December 12, 2000


This is a census of plants that crush oilseeds into oil and meal. Data collected are part of supply-disposition statistics of major grains and allow the calculation of the domestic disappearance component. They are also required to verify grain production and farm stocks. The data are used by the provincial governments, the Food and Agriculture Organization (FAO) and related industries for market analysis, particularly of supply-disposition of grain.
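The domestic disappearance component mentioned above is typically derived as the residual of the supply-disposition balance: supply that is neither exported nor carried forward in stocks is assumed to have been used domestically. A minimal sketch, with entirely hypothetical figures (the function name and all numbers are illustrative assumptions, not survey data):

```python
# Hypothetical supply-disposition balance for an oilseed (tonnes).
# Domestic disappearance is derived residually: total supply minus
# exports and ending stocks.

def domestic_disappearance(beginning_stocks, production, imports,
                           exports, ending_stocks):
    total_supply = beginning_stocks + production + imports
    return total_supply - exports - ending_stocks

# Illustrative (made-up) figures:
print(domestic_disappearance(
    beginning_stocks=500_000,
    production=7_000_000,
    imports=150_000,
    exports=4_200_000,
    ending_stocks=600_000,
))  # 2850000
```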

Reference period: Month

Collection period: The first ten days following the reference month.


Subjects

  • Agriculture and food (formerly Agriculture)
  • Crops and horticulture

Data sources and methodology

Target population

The target population consists of all known plants which crush oilseeds to produce oil and meal. The survey frame excludes plants that crush oilseeds for specialty health food stores. The total sample size for this survey is 8 units.

Instrument design

The electronic questionnaire was designed by Statistics Canada as part of the Integrated Business Statistics Program. This program incorporates business surveys into a single framework, using questionnaires with a consistent look, structure and content.

The questionnaire content was developed by subject matter specialists through consultation with industry experts.


Sampling

This survey is a census with a cross-sectional design.

Data sources

Responding to this survey is mandatory.

Data are collected directly from survey respondents.

Respondents are contacted by email or letter and given an access code for the electronic questionnaire for the survey, which can be responded to in either official language. Non-response follow-up is conducted via telephone.

The survey, on average, takes respondents 15 minutes to complete.

View the questionnaire(s) and reporting guide(s).

Error detection

Error detection is an integral part of both collection and data processing. Edits are applied to data records during collection to identify reporting and capture errors. These edits flag potential errors based on year-over-year changes in key variables and totals, and identify inconsistencies in the collected data (e.g. a total variable that does not equal the sum of its parts).

During data processing, further edits automatically detect errors or inconsistencies that remain in the data after collection. These include value edits (e.g. Value > 0, Value > -500, Value = 0), linear equality edits (e.g. Value1 + Value2 = Total Value), linear inequality edits (e.g. Value1 >= Value2) and equivalency edits (e.g. Value1 = Value2). Errors found during collection can be corrected manually. Manual review of other units may identify outliers, which are excluded from the calculation of the ratios and trends used for imputation.

In general, every effort is made to minimize non-sampling errors of omission, duplication, misclassification, reporting and processing.
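The edit types described above can be illustrated with a short sketch. The variable names, tolerance and rules below are assumptions for illustration only, not the survey's actual edit specifications:

```python
# Illustrative edit checks of the kinds described above: a value edit,
# a linear equality edit and a linear inequality edit. All field names
# and thresholds are hypothetical.

TOLERANCE = 1e-6

def check_record(rec):
    failures = []
    # Value edit: quantity crushed must be non-negative.
    if rec["seed_crushed"] < 0:
        failures.append("value: seed_crushed < 0")
    # Linear equality edit: outputs must sum to the reported total.
    if abs(rec["oil_produced"] + rec["meal_produced"] + rec["waste"]
           - rec["total_output"]) > TOLERANCE:
        failures.append("equality: outputs != total_output")
    # Linear inequality edit: oil produced cannot exceed seed crushed.
    if rec["oil_produced"] > rec["seed_crushed"]:
        failures.append("inequality: oil_produced > seed_crushed")
    return failures

rec = {"seed_crushed": 1000.0, "oil_produced": 420.0,
       "meal_produced": 560.0, "waste": 20.0, "total_output": 1000.0}
print(check_record(rec))  # [] -> record passes all edits
```

A record that fails an edit would be flagged for the manual correction or imputation steps described above.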

The data reported by each company are verified by comparison to previous reports, by comparing trends between companies, by validating average oil and meal extraction rates, by supply-disposition analysis and by monitoring of industry trends. Data are also validated against industry reports, particularly those prepared by the Canadian Oilseed Processors Association.
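Extraction-rate validation of the kind mentioned above can be sketched as a simple ratio check. The plausible band used below (38-45% oil) is an illustrative assumption, not a figure used by the survey:

```python
# Hypothetical verification of average extraction rates: compare each
# plant's implied oil rate to an assumed plausible band.

def extraction_rates(seed_crushed, oil_produced, meal_produced):
    """Return (oil rate, meal rate) as fractions of seed crushed."""
    return oil_produced / seed_crushed, meal_produced / seed_crushed

oil_rate, meal_rate = extraction_rates(1000.0, 420.0, 560.0)
needs_review = not (0.38 <= oil_rate <= 0.45)  # assumed band
print(f"oil rate {oil_rate:.2%}, meal rate {meal_rate:.2%}, "
      f"review needed: {needs_review}")
```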


Imputation

When non-response occurs, or when respondents answer the questionnaire only partially, imputation is used to fill in the missing information. Several imputation methods may be used to complete a questionnaire, including manual changes made by an analyst. The automated statistical techniques used to impute missing data include deterministic imputation and replacement using historical data (with a trend calculated, when appropriate). Key variables are usually imputed first and serve as anchors in subsequent steps to impute other, related variables. Imputation produces a complete and coherent microdata file covering all survey variables.
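Replacement using historical data with a calculated trend can be sketched as follows. This is a minimal illustration under assumed inputs, not the program's actual imputation system; the function name and figures are hypothetical:

```python
# Sketch of historical trend imputation: the missing value is the
# unit's previous report scaled by the month-over-month trend observed
# among units that responded in both months.

def trend_impute(previous_value, current_reports, previous_reports):
    # Ratio of totals among units reporting in both months.
    trend = sum(current_reports) / sum(previous_reports)
    return previous_value * trend

imputed = trend_impute(
    previous_value=900.0,
    current_reports=[1000.0, 2100.0, 520.0],
    previous_reports=[950.0, 2000.0, 500.0],
)
print(round(imputed, 1))  # 900 * (3620 / 3450) ≈ 944.3
```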


Estimation

This methodology type does not apply to this statistical program.

Quality evaluation

Data quality is maintained through standard editing techniques, applied with particular rigour because the survey is small. Discrepancies in the data are either scrutinized by professional staff or resolved by contacting the company involved. Average extraction rates and industry information are used for verification, and further verification is done through supply-demand analyses. There are no sampling errors because this is a census.

Disclosure control

Statistics Canada is prohibited by law from releasing any data which would divulge information obtained under the Statistics Act that relates to any identifiable person, business or organization without the prior knowledge or the consent in writing of that person, business or organization. Various confidentiality rules are applied to all data that are released or published to prevent the publication or disclosure of any information deemed confidential. If necessary, data are suppressed to prevent direct or residual disclosure of identifiable data.
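Confidentiality rules of the general kind described above often combine a minimum-contributor count with a dominance check. The sketch below is a generic illustration with assumed thresholds; it is not Statistics Canada's actual suppression methodology:

```python
# Illustrative cell-suppression test: a published cell is suppressed
# when it has too few contributors or one contributor dominates the
# total. The thresholds (3 contributors, 85% dominance) are assumptions.

def is_confidential(contributions, min_count=3, dominance=0.85):
    total = sum(contributions)
    if len(contributions) < min_count:
        return True
    return total > 0 and max(contributions) / total > dominance

print(is_confidential([120.0, 5.0]))           # True: too few contributors
print(is_confidential([900.0, 40.0, 30.0]))    # True: one unit dominates
print(is_confidential([300.0, 280.0, 260.0]))  # False: publishable
```

With only 8 plants in the frame, residual disclosure is also a concern: a suppressed cell can sometimes be recovered by subtraction from published totals, so complementary suppression would be applied as well.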
