Annual Retail Trade Survey (RETR)

Detailed information for 2017

Status:

Active

Frequency:

Annual

Record number:

2447

This survey collects the financial and operating data needed to develop national and regional economic policies and programs.

Data release - March 22, 2019

Description

This survey collects data required to produce economic statistics for the retail trade in Canada.

Data collected from businesses are aggregated with information from other sources to produce official estimates of national and provincial economic production for this sector.

Survey estimates are made available to businesses, governments, investors, associations, and the public. The data are used to monitor industry growth, measure performance, and make comparisons to other data sources to better understand this sector.

Statistical activity

The survey is administered as part of the Integrated Business Statistics Program (IBSP). The IBSP has been designed to integrate approximately 200 separate business surveys into a single master survey program. The IBSP aims to collect industry and product detail at the provincial level while minimizing overlap between different survey questionnaires. The redesigned business survey questionnaires have a consistent look, structure, and content.

The integrated approach makes reporting easier for firms operating in different industries because they can provide similar information for each branch operation. This way they avoid having to respond to questionnaires that differ for each industry in terms of format, wording and even concepts. The combined results produce more coherent and accurate statistics on the economy.

Reference period: The calendar year, or the 12-month fiscal period for which the final day occurs on or between April 1st of the reference year and March 31st of the following year.

Collection period: April through October of the year after the reference period

Subjects

  • Retail and wholesale

Data sources and methodology

Target population

The target population consists of all establishments classified to the codes 441 through 453 according to the North American Industry Classification System (NAICS) 2017 during the reference year.

The observed population consists of all establishments classified to the codes 441 through 453 according to NAICS 2017 found on Statistics Canada's Business Register as of the last day of the reference year (including establishments active for only part of the reference year).

Instrument design

The survey questionnaires comprise generic modules that have been designed to cover the retail trade sector. These modules include revenues and expenses. The questionnaires also include industry-specific modules designed to ask for financial and non-financial characteristics that pertain specifically to this industry.

In order to reduce response burden, smaller firms receive a shortened characteristics questionnaire that includes only the industry-specific modules. For these firms, revenue and expense data are extracted from administrative files.

The questionnaire was developed in consultation with potential respondents, data users and questionnaire design specialists.

Sampling

This is a sample survey with a cross-sectional design.

The Business Register is a repository of information reflecting the Canadian business population and exists primarily for the purpose of supplying frames for all economic surveys in Statistics Canada. It is designed to provide a means of coordinating the coverage of business surveys and of achieving consistent classification of statistical reporting units. It also serves as a data source for the compilation of business demographic information.

The major sources of information for the Business Register are updates from the Statistics Canada survey program and from Canada Revenue Agency's (CRA) Business Number account files. This CRA administrative data source allows for the creation of a universe of all business entities.

The data provided in our products reflect counts of statistical locations by industrial activity (North American Industry Classification System), geography codes, and employment size ranges.

SAMPLING UNIT
The sampling unit is the enterprise, as defined on the Business Register.

STRATIFICATION METHOD
Prior to the selection of a random sample, enterprises are classified into homogeneous groups (i.e., groups with the same NAICS codes and same geography) based on the characteristics of their establishments. Then, each group is divided into sub-groups (i.e. small, medium, large) called strata based on the annual revenue of the enterprise.

SAMPLING AND SUB-SAMPLING
Following stratification, a sample of predetermined size is allocated to each stratum, with the objective of optimizing the overall quality of the survey within the available resources. The allocation can result in two kinds of strata: take-all strata, where all units are sampled with certainty, and take-some strata, where a sample of units is randomly selected.

The total sample size for this survey is approximately 4,800 enterprises.
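As an illustration, the stratification and allocation steps above can be sketched in Python. The NAICS codes, revenue figures, size threshold and take-some sample size below are invented for the example and are not the survey's actual stratum boundaries:

```python
import random

# Hypothetical enterprises: (id, naics, province, annual revenue in $M).
# Codes, revenues and the threshold below are invented for illustration.
enterprises = [
    ("e1", "441", "ON", 950.0), ("e2", "441", "ON", 12.0),
    ("e3", "441", "ON", 8.0),   ("e4", "441", "ON", 15.0),
    ("e5", "452", "QC", 700.0), ("e6", "452", "QC", 9.0),
]

TAKE_ALL_REVENUE = 500.0  # illustrative size boundary for take-all strata

def stratify(units):
    """Group enterprises into strata by (NAICS, province, size class)."""
    strata = {}
    for uid, naics, province, revenue in units:
        size = "take-all" if revenue >= TAKE_ALL_REVENUE else "take-some"
        strata.setdefault((naics, province, size), []).append(uid)
    return strata

def draw_sample(strata, n_take_some=2, seed=1):
    """Sample take-all strata with certainty; draw a simple random
    sample of predetermined size from each take-some stratum."""
    rng = random.Random(seed)
    sample = []
    for (naics, province, size), units in strata.items():
        if size == "take-all":
            sample.extend(units)
        else:
            sample.extend(rng.sample(units, min(n_take_some, len(units))))
    return sorted(sample)

sample = draw_sample(stratify(enterprises))
# e1 and e5 (the large units) are always selected
```

The take-all units enter the sample with certainty, so only the take-some strata contribute sampling variability.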

Data sources

Data collection for this reference period: 2018-04-27 to 2018-10-27

Responding to this survey is mandatory.

Data are collected directly from survey respondents and extracted from administrative files.

Data are collected annually, primarily through an electronic questionnaire, while providing respondents with the option of receiving a paper questionnaire, replying by telephone interview or using other electronic filing methods. Follow-up for non-response and for data validation is conducted by telephone or fax.

View the questionnaire(s) and reporting guide(s).

Error detection

Error detection is an integral part of both collection and data processing. Automated edits are applied to data records during collection to identify reporting and capture errors. These edits flag potential errors based on year-over-year changes in key variables, totals, and ratios that exceed tolerance thresholds, and identify inconsistencies in the collected data (e.g., a total variable that does not equal the sum of its parts).

During data processing, further edits are used to automatically detect errors or inconsistencies that remain in the data after collection. These include value edits (e.g., Value > 0, Value > -500, Value = 0), linear equality edits (e.g., Value1 + Value2 = Total Value), linear inequality edits (e.g., Value1 >= Value2), and equivalency edits (e.g., Value1 = Value2). When errors are found, they can be corrected through the failed-edit follow-up process during collection or via imputation.

Extreme values are also flagged as outliers, using automated methods based on the distribution of the collected information. Once detected, these values are reviewed to assess their reliability; manual review of other units may identify additional outliers. Outliers are excluded from the calculation of the ratios and trends used for imputation, and from use as donors in donor imputation.

In general, every effort is made to minimize the non-sampling errors of omission, duplication, misclassification, reporting and processing.
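The value, linear equality and linear inequality edits described above can be sketched as simple record checks. The field names and the specific rules here are illustrative, not the survey's actual edit set:

```python
def run_edits(record):
    """Apply a few automated edits to one record and return the list
    of failed edits. Field names and rules are illustrative only."""
    failures = []
    # Value edit: a revenue component must be non-negative.
    if record["sales"] < 0:
        failures.append("value edit: sales < 0")
    # Linear equality edit: the parts must sum to the total.
    if record["sales"] + record["other_revenue"] != record["total_revenue"]:
        failures.append("equality edit: sales + other_revenue != total_revenue")
    # Linear inequality edit: total revenue must be at least sales.
    if record["total_revenue"] < record["sales"]:
        failures.append("inequality edit: total_revenue < sales")
    return failures

clean = run_edits({"sales": 900.0, "other_revenue": 100.0,
                   "total_revenue": 1000.0})   # no failures
flagged = run_edits({"sales": 900.0, "other_revenue": 100.0,
                     "total_revenue": 950.0})  # fails the equality edit
```

A record that fails an edit would then be routed to failed-edit follow-up during collection, or to imputation.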

Imputation

When there are non-reported tax data, or when reported data are considered incorrect during the error detection steps, imputation is used to fill in the missing information and modify the incorrect information. Many methods of imputation may be used to complete the administrative data, including manual changes made by an analyst. The automated, statistical techniques used to impute the missing data include deterministic imputation, replacement using historical data (with a trend calculated, when appropriate), replacement using auxiliary information available from other sources, replacement based on known data relationships for the sample unit, and replacement using data from a similar unit in the sample (known as donor imputation). Usually, key variables are imputed first and are used as anchors in subsequent steps to impute other, related variables.

Imputation generates a complete and coherent microdata file that covers all survey variables.
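Two of the automated techniques mentioned above, replacement using historical data with a trend and donor imputation, can be sketched as follows. The field names and trend factor are invented for the example:

```python
def impute_historical(record, previous, trend=1.0):
    """Replace missing (None) values with the unit's value from the
    previous year, scaled by a trend factor. Field names and the
    trend value are illustrative."""
    return {k: (v if v is not None else round(previous[k] * trend, 2))
            for k, v in record.items()}

def impute_donor(recipient, donor):
    """Donor imputation: copy missing fields from a similar responding unit."""
    return {k: (v if v is not None else donor[k])
            for k, v in recipient.items()}

filled = impute_historical({"revenue": None, "expenses": 40.0},
                           {"revenue": 100.0, "expenses": 38.0},
                           trend=1.05)
# revenue imputed as 100.0 * 1.05 = 105.0; reported expenses kept
```

In practice the imputed key variables would then serve as anchors when imputing the remaining related variables.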

Estimation

The sample used for estimation comes from a single-phase sampling process. An initial sampling weight (the design weight) is calculated for each unit in the survey; it is simply the inverse of the unit's probability of selection, conditional on the realized sample size. The weight calculated for each sampling unit indicates how many other units it represents. Final weights are greater than or equal to one: "take-all" (also called "must-take") units have a sampling weight of one and represent only themselves.

Estimation of totals is done by simple aggregation of the weighted values of all estimation units that are found in the domain of estimation. Estimates are computed for several domains of estimation such as industrial groups and provinces/territories, based on the most recent classification information available for the estimation unit and the survey reference period. It should be noted that this classification information may differ from the original sampling classification since records may have changed in size, industry or location. Changes in classification are reflected immediately in the estimates.
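The weighted aggregation described above amounts to summing weight times value within each domain of estimation. A minimal sketch, with invented weights, domains and values:

```python
def estimate_totals(units):
    """Sum weighted values within each domain of estimation.

    Each unit is (weight, province, naics, value); all figures below
    are invented for illustration.
    """
    totals = {}
    for weight, province, naics, value in units:
        key = (province, naics)
        totals[key] = totals.get(key, 0.0) + weight * value
    return totals

units = [
    (1.0, "ON", "441", 950.0),   # take-all unit: weight 1, represents itself
    (25.0, "ON", "441", 12.0),   # take-some unit representing ~25 enterprises
    (25.0, "ON", "441", 8.0),
    (30.0, "QC", "452", 10.0),
]
totals = estimate_totals(units)
# ("ON", "441") -> 950 + 25*12 + 25*8 = 1450.0
```

The domain key here uses the most recent classification of each unit, which is why a unit's estimates can move between domains when its classification changes.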

When an enterprise reports combined data covering units located in more than one province or territory, or in more than one industrial classification, the data must be allocated. Factors based on information from sources such as tax files and Business Register profiles are used to distribute the combined report among the various estimation units where the enterprise operates. The characteristics of the estimation units, including industrial classification and geography, are used to derive the domains of estimation.

Units found to be much larger than expected (for example, a large unit found in a stratum of small units) are treated as misclassified, and their weight is adjusted so that they represent only themselves.

The weights can be modified and adjusted using updated information from taxation data. Using a statistical technique called calibration, the final set of weights is adjusted in such a way that the sample represents as closely as possible the taxation data of the population of this industry.
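With a single calibration constraint, this adjustment reduces to a ratio: all weights are scaled by a common factor so that the weighted sample total matches the known taxation total. A minimal sketch with invented figures (the actual survey calibrates against richer auxiliary information):

```python
def calibrate(weights, values, tax_total):
    """Single-constraint ratio calibration (a simplified sketch):
    scale all design weights by a common factor g so the weighted
    sample total matches a known total from taxation data."""
    ht_total = sum(w * v for w, v in zip(weights, values))
    g = tax_total / ht_total              # calibration factor
    return [w * g for w in weights]

# Invented figures: two sampled units, design weight 25 each.
final_weights = calibrate([25.0, 25.0], [12.0, 8.0], tax_total=600.0)
# design-weighted total was 500; factor 1.2 brings it to 600
```

Full calibration generalizes this idea to several constraints at once, choosing final weights as close as possible to the design weights while reproducing all known auxiliary totals.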

Quality evaluation

Prior to the data release, combined survey results are analyzed for comparability; in general, this includes a detailed review of individual responses (especially for the largest companies), general economic conditions and coherence with results from related economic indicators, historical trends, and information from other external sources (e.g. associations, trade publications or newspaper articles).

Disclosure control

Statistics Canada is prohibited by law from releasing any information it collects that could identify any person, business, or organization, unless consent has been given by the respondent or as permitted by the Statistics Act. Various confidentiality rules are applied to all data that are released or published to prevent the publication or disclosure of any information deemed confidential. If necessary, data are suppressed to prevent direct or residual disclosure of identifiable data.

In order to prevent any data disclosure, confidentiality analysis is done using the Statistics Canada generalized confidentiality system (G-CONFID). G-CONFID is used for primary suppression (direct disclosure) as well as for secondary suppression (residual disclosure). Direct disclosure occurs when the value in a tabulation cell is composed of or dominated by a few enterprises, while residual disclosure occurs when confidential information can be derived indirectly by piecing together information from different sources or data series.

Revisions and seasonal adjustment

Revisions in the raw data are required to correct known non-sampling errors. These normally include replacing imputed data with reported data and respondent corrections to previously reported data.

Raw data are revised, on an annual basis, for the year immediately prior to the current reference year being published. That is, when data for the current year are being published for the first time, there will also be revisions, if necessary, to the raw data for the previous year.

Data accuracy

The methodology of this survey has been designed to control errors and to reduce their potential effects on estimates. However, the survey results remain subject to errors, of which sampling error is only one component of the total survey error. Sampling error results when observations are made only on a sample and not on the entire population. All other errors arising from the various phases of a survey are referred to as non-sampling errors. For example, these errors can occur when a respondent provides incorrect information or does not answer certain questions; when a unit in the target population is omitted or covered more than once; when GST data for records being modeled for a particular month are not representative of the actual record; when a unit that is out of scope for the survey is included by mistake; or when errors occur in data processing, such as coding or capture errors.

Prior to publication, combined survey results are analyzed for comparability; in general, this includes a detailed review of individual responses (especially for large businesses), general economic conditions and historical trends.

A common measure of data quality for surveys is the coefficient of variation (CV). The coefficient of variation, defined as the standard error divided by the sample estimate, is a measure of precision in relative terms. Since the coefficient of variation is calculated from responses of individual units, it also measures some non-sampling errors.

The formula used to calculate coefficients of variation (CV) as percentages is:

CV (X) = S(X) * 100% / X
where X denotes the estimate and S(X) denotes the standard error of X.

Confidence intervals can be constructed around the estimates using the estimate and the CV. It is then possible to state, with a given level of confidence, that the expected value falls within the constructed interval. For example, if an estimate of $12,000,000 has a CV of 2%, the standard error is $240,000 (the estimate multiplied by the CV). It can be stated with 68% confidence that the expected value falls within one standard error of the estimate, i.e. between $11,760,000 and $12,240,000.

Alternatively, it can be stated with 95% confidence that the expected value falls within two standard errors of the estimate, i.e. between $11,520,000 and $12,480,000.
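The CV and confidence-interval arithmetic above can be checked with a few lines of Python, using the document's own example figures:

```python
def cv_percent(estimate, std_error):
    """Coefficient of variation as a percentage: CV(X) = S(X) / X * 100."""
    return std_error / estimate * 100.0

def confidence_interval(estimate, cv_pct, z=1.0):
    """Interval of +/- z standard errors around the estimate
    (z=1 is roughly 68% confidence, z=2 roughly 95%, under normality)."""
    se = estimate * cv_pct / 100.0
    return (estimate - z * se, estimate + z * se)

estimate = 12_000_000
se = estimate * 2.0 / 100.0                        # $240,000
ci_68 = confidence_interval(estimate, 2.0, z=1.0)  # (11760000.0, 12240000.0)
ci_95 = confidence_interval(estimate, 2.0, z=2.0)  # (11520000.0, 12480000.0)
```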

Finally, due to the small contribution of the non-survey portion to the total estimates, bias in the non-survey portion has a negligible impact on the CVs. Therefore, the CV from the survey portion is used for the total estimate that is the summation of estimates from the surveyed and non-surveyed portions.

RESPONSE RATE
The weighted collection response rate is 93.66%.

NON-SAMPLING ERROR
Non-sampling error is not related to sampling and may occur for various reasons during the collection and processing of data. For example, non-response is an important source of non-sampling error. Under or over-coverage of the population, differences in the interpretations of questions and mistakes in recording, coding and processing data are other examples of non-sampling errors.

NON-RESPONSE BIAS
To the maximum extent possible, these errors are minimized through careful design of the survey questionnaire, verification of the survey data, and follow-up with respondents when needed to maximize response rates.

Also, when non-response occurs, it is taken into account in the quality ratings, which are lowered according to the importance of the missing responses to the estimate. Other quality indicators, such as the response rate, are also provided.

COVERAGE ERROR
Coverage errors consist of omissions, erroneous inclusions, duplications and misclassification of units in the survey frame.

The Business Register (BR) is the common frame for all surveys using the IBSP model. The BR is a data service centre updated through a number of sources, including administrative data files, feedback received from conducting Statistics Canada business surveys, and profiling activities such as direct contact with companies to obtain information about their operations and Internet research. Using the BR helps ensure quality while avoiding overlap between surveys and minimizing response burden to the greatest extent possible.
