Survey on Local Food and Beneficial Management Practices (SLFBMP)

Detailed information for 2022





Record number:


The survey collects information on business conditions and operations related to local food sales and the beneficial management practices adopted by Canadian farms.

Data release - September 5, 2023


Statistics Canada is conducting the Survey on Local Food and Beneficial Management Practices (SLFBMP), in collaboration with Agriculture and Agri-Food Canada (AAFC).

Data will be gathered on local food sales, including:
• types of market channels
• food products sold
• challenges and reasons for selling food locally
• certifications acquired
• participation in government programs.

Data will also be collected on two beneficial management practices (BMPs): fertilizers containing urease or nitrification inhibitors, and rotational grazing, including questions on:
• acres covered by the practice
• year of adoption
• cost
• impact on yield
• reasons for practicing or not practicing each BMP.

The survey is intended to gather national- and provincial-level information on farms' local food sales and BMP participation.

Reference period: The calendar year.

Collection period: During the year following the reference year.


  • Agriculture and food (formerly Agriculture)

Data sources and methodology

Target population

The target population for the survey consists of all businesses in the 10 provinces whose main activity is agriculture, that have an indication of agricultural activity from the Census of Agriculture, survey feedback or tax files, and that earn annual revenue above $25,000. To identify these units, a sampling frame was constructed from businesses on the Business Register with a 3-digit NAICS code of 111 or 112 that possess Additional Measures or Survey Specific Fields denoting an agricultural commodity, or an indication of such production from tax filings.
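As a rough illustration, the in-scope rule above can be sketched as a filter over frame records. The field names (`naics`, `revenue`, `has_ag_indicator`) are hypothetical stand-ins, not the Business Register's actual schema.

```python
# Hypothetical sketch of the target-population filter; field names are
# illustrative, not the Business Register's actual schema.

def in_scope(unit):
    """Return True if a frame unit belongs to the survey's target population."""
    return (
        unit["naics"][:3] in ("111", "112")  # crop or animal production
        and unit["revenue"] > 25_000         # annual revenue above $25,000
        and unit["has_ag_indicator"]         # Census of Agriculture, survey or tax signal
    )

frame = [
    {"naics": "111419", "revenue": 80_000, "has_ag_indicator": True},
    {"naics": "112110", "revenue": 20_000, "has_ag_indicator": True},   # revenue too low
    {"naics": "311111", "revenue": 90_000, "has_ag_indicator": False},  # not agriculture
]
survey_frame = [u for u in frame if in_scope(u)]
```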

Instrument design

The questionnaire was developed by Statistics Canada in partnership with Agriculture and Agri-Food Canada (AAFC). The Questionnaire Design Resource Centre (QDRC) conducted cognitive testing with selected respondents, and its recommendations were implemented to finalize the questionnaire. Data will be collected electronically. Before collection starts, an HTML version of the electronic questionnaire will be posted on the IMDB page.


Sampling

This is a sample survey with a cross-sectional design.

Sampling unit: Establishments

Stratification method
Units are stratified by revenue size, region and farm type. The Atlantic provinces were grouped together as one region due to low population counts for many farm types. Farm type is defined by the main agricultural commodity produced by the farm, usually derived from the NAICS code, additional measures, or Census of Agriculture information. Some farm types were collapsed together due to small stratum sizes.

Sampling and sub-sampling
Using the list of agricultural operations from the Business Register, each establishment is assigned to a stratum by province, farm type and farm size. The size stratum is determined by the establishment's revenues and assets. The target sample size is 10,000 agricultural operations.
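The stratification and selection steps above can be sketched as follows. This is a minimal illustration, assuming simple random sampling within strata; the size-class thresholds, stratum keys and allocation are invented for the example and are not the survey's actual design parameters.

```python
import random

# Illustrative sketch: assign each establishment to a stratum keyed by
# (region, farm_type, size_class), then draw a simple random sample within
# each stratum. Thresholds and allocations are hypothetical.

def size_class(revenue, assets):
    """Toy size stratum derived from revenues and assets."""
    measure = max(revenue, assets)
    if measure < 100_000:
        return "small"
    if measure < 1_000_000:
        return "medium"
    return "large"

def stratify(frame):
    """Group establishments by (region, farm type, size class)."""
    strata = {}
    for unit in frame:
        key = (unit["region"], unit["farm_type"],
               size_class(unit["revenue"], unit["assets"]))
        strata.setdefault(key, []).append(unit)
    return strata

def select(strata, allocation, seed=1):
    """Draw a simple random sample of the allocated size from each stratum."""
    rng = random.Random(seed)
    sample = []
    for key, units in strata.items():
        n_h = min(allocation.get(key, 1), len(units))
        sample.extend(rng.sample(units, n_h))
    return sample
```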

Data sources

Data collection for this reference period: 2023-03-13 to 2023-04-14

Responding to this survey is voluntary.

Data are collected directly from survey respondents.

Collection scheduled to begin March 2023.

View the Questionnaire(s) and reporting guide(s).

Error detection

Error detection is an integral part of data processing activities. Prior to imputation, a series of edits are applied to the collected data to identify errors and inconsistencies. Errors and inconsistencies in the data are reviewed and resolved by referring to data for similar units in the survey and information from external sources. If a record cannot be resolved, it is flagged for imputation. Finally, edit rules are incorporated into the imputation system to detect and resolve any remaining errors, as well as to ensure that the imputed data are consistent.
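The edit step described above can be sketched as a set of consistency and completeness rules applied to each record. The variable names and rules below are hypothetical examples, not the survey's actual edit set.

```python
# Hypothetical consistency edits applied before imputation; variable names
# ("rg_acres", "total_acres", "adoption_year") and rules are illustrative.

def run_edits(record):
    """Return a list of failed-edit flags for one collected record."""
    flags = []
    rg, total = record.get("rg_acres"), record.get("total_acres")
    # Acres under rotational grazing cannot exceed total acres operated.
    if rg is not None and total is not None and rg > total:
        flags.append("rg_acres_gt_total")
    # A reported BMP adoption year must fall in a plausible range.
    year = record.get("adoption_year")
    if year is not None and not (1900 <= year <= 2022):
        flags.append("adoption_year_out_of_range")
    # Missing values are flagged so the record is sent to imputation.
    for var in ("rg_acres", "total_acres"):
        if record.get(var) is None:
            flags.append(f"{var}_missing")
    return flags
```

A record with no flags passes the edits; any flagged record is reviewed or routed to imputation.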


Imputation

After microdata verification, a flag variable was created for each survey variable to identify records that had either failed the verification rules or had missing values. Imputation was performed to reduce the amount of missing, inconsistent or incomplete data. Missing data were imputed using a randomly selected donor within the imputation class. These imputation classes were formed based on statistical analysis performed with frame information or preceding variables on the questionnaire.

A minimum number of units was required within each imputation class. When imputation classes were too small, larger classes were created by combining several classes together.

Imputation of survey variables was performed in an automated way using BANFF, a generalized system designed by Statistics Canada.
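The donor-imputation logic described above (random donor within a class, with undersized classes combined) can be sketched as below. This is a toy stand-in for illustration only, not the BANFF system; the class labels, variable name `y` and minimum class size are invented.

```python
import random

# Toy sketch of random-donor imputation within classes. Classes with too
# few donors are pooled into one larger class. Illustrative only; the
# actual processing uses Statistics Canada's BANFF system.

MIN_DONORS = 2  # hypothetical minimum number of donors per class

def impute(records, rng=None):
    """Fill missing 'y' values from a random donor in the same class."""
    rng = rng or random.Random(7)
    by_class = {}
    for r in records:
        by_class.setdefault(r["cls"], []).append(r)
    # Combine classes whose donor count falls below the minimum.
    keep, pooled = {}, []
    for cls, units in by_class.items():
        donors = [u for u in units if u["y"] is not None]
        if len(donors) < MIN_DONORS:
            pooled.extend(units)
        else:
            keep[cls] = units
    if pooled:
        keep["pooled"] = pooled
    # Impute each missing value from a randomly selected donor.
    for units in keep.values():
        donors = [u["y"] for u in units if u["y"] is not None]
        for u in units:
            if u["y"] is None and donors:
                u["y"] = rng.choice(donors)
    return records
```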


Estimation

Estimation is the process by which Statistics Canada obtains values for the population of interest so that it can draw conclusions about that population based on information gathered from only a sample of the population. For this survey, the sample used for estimation comes from a single-phase sampling process.

An initial sampling weight (the design weight) is calculated for each unit of the survey; it is simply the inverse of the probability of selection. The weight calculated for each sampling unit indicates how many other units it represents.
However, since some of the selected units did not answer the survey, reweighting is performed on the responding units so that their final weights still represent the whole target population. The response mechanism can be considered a second phase of the sampling process.

After the reweighting, a calibration process is applied so that the weighted totals within each calibration group equal the known population totals.

Estimation of proportions is done using the calibrated weights to calculate the population totals in the domains of interest.
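The weighting steps above can be traced with a small numeric sketch: a design weight as the inverse of the selection probability, a nonresponse adjustment within a response group, calibration to a known population total, and finally a weighted proportion. All counts here are made up for illustration.

```python
# Hypothetical numeric walk-through of the weighting steps; every count
# below is invented for illustration.

N_h, n_h = 1000, 100                       # stratum population and sample sizes
design_w = N_h / n_h                       # design weight = 1 / selection probability

n_resp = 80                                # responding units in the response group
nr_adjusted_w = design_w * n_h / n_resp    # respondents absorb nonrespondents' weight

pop_total = 1000                           # known total for the calibration group
weighted_total = nr_adjusted_w * n_resp
calib_factor = pop_total / weighted_total  # 1.0 here; generally differs from 1
final_w = nr_adjusted_w * calib_factor

yes = 44                                   # respondents answering YES
p_hat = (final_w * yes) / (final_w * n_resp)  # weighted proportion of YES
```

With equal weights within the group, the weighted proportion reduces to the unweighted one (44/80); in practice, weights differ across units and the weighted and unweighted proportions diverge.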

Quality evaluation

Estimates were reviewed to ensure that the findings are logical and quality checks were carried out to ensure that estimates are consistent. Atypical results were flagged for investigation and were corrected as necessary.

Disclosure control

Statistics Canada is prohibited by law from releasing any information it collects that could identify any person, business, or organization, unless consent has been given by the respondent or as permitted by the Statistics Act. Various confidentiality rules are applied to all data that are released or published to prevent the publication or disclosure of any information deemed confidential. If necessary, data are suppressed to prevent direct or residual disclosure of identifiable data.

In order to prevent any data disclosure, confidentiality analysis is done using the Statistics Canada Generalized Disclosure Control System (G-Confid). G-Confid is used for primary suppression (direct disclosure) as well as for secondary suppression (residual disclosure). Direct disclosure occurs when the value in a tabulation cell is composed of or dominated by a few enterprises, while residual disclosure occurs when confidential information can be derived indirectly by piecing together information from different sources or data series.

Revisions and seasonal adjustment

This methodology type does not apply to this statistical program.

Data accuracy

There are two types of errors that can affect the data: sampling errors and non-sampling errors.

Estimates are subject to sampling error. This error can be expressed as a standard error. For example, if the proportion of businesses in the target population that would respond YES to a given question is estimated to be 50%, with a standard error of 2%, then in repeated sampling the estimate would be expected to fall between 46% and 54%, nineteen times out of twenty. The following rules based on the standard error (SE) are used to assign a measure of quality to all of the estimates of percentages.
A = Excellent (0.00% to less than 2.50%)
B = Very good (2.50% to less than 5.00%)
C = Good (5.00% to less than 7.50%)
D = Acceptable (7.50% to less than 10.00%)
E = Use with caution (10.00% to less than 15.00%)
F = Too unreliable to be published (15.00% or greater; data are suppressed)
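The quality-rating rules above map directly to a small lookup; the thresholds come from the published table, and the confidence interval uses the usual "nineteen times out of twenty" approximation of roughly two standard errors.

```python
# Sketch of the quality-rating rules; thresholds are taken directly from
# the table above (standard error in percentage points).

def quality_code(se):
    """Map a standard error to the survey's quality letter."""
    if se < 2.5:
        return "A"   # Excellent
    if se < 5.0:
        return "B"   # Very good
    if se < 7.5:
        return "C"   # Good
    if se < 10.0:
        return "D"   # Acceptable
    if se < 15.0:
        return "E"   # Use with caution
    return "F"       # Too unreliable to be published (suppressed)

def ci95(p_hat, se):
    """Approximate 95% confidence interval (nineteen times out of twenty)."""
    return (p_hat - 2 * se, p_hat + 2 * se)
```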

Non-sampling errors may occur for various reasons during the collection and processing of the data. For example, non-response is an important source of non-sampling error. Under or over-coverage of the population, differences in the interpretations of questions and mistakes in recording and processing data are other examples of non-sampling errors. To the maximum extent possible, these errors are minimized through careful design of the survey questionnaire and verification of the survey data.
