Annual Survey of Manufacturing and Logging Industries (ASML)
Detailed information for 2016
This survey collects the financial and commodity information used to compile statistics on Canada's manufacturing and logging industries.
Data release - Scheduled for ...
The Annual Survey of Manufacturing and Logging Industries (ASML) is a survey of the manufacturing and logging industries in Canada. It is intended to cover all establishments primarily engaged in manufacturing and logging activities as well as some sales offices and warehouses which support these establishments.
The details collected include principal industrial statistics (such as revenue, salaries and wages, cost of materials and supplies used, cost of energy and water utilities, inventories, etc.), as well as information about the commodities produced and consumed. Data collected from businesses will be aggregated with information from other sources to produce official estimates of national and provincial economic production for these industries. The Annual Survey of Manufacturing and Logging Supplementary Content (ASMLS) collects detailed information about the commodities produced and consumed in the manufacturing of softwood products. This information includes quantities and values of materials purchased and sold.
Data collected by the Annual Survey of Manufacturing and Logging Industries are important because they help measure the production of Canada's industrial and primary resource sectors, as well as provide an indication of the well-being of each industry covered by the survey and its contribution to the Canadian economy. Within Statistics Canada, the data are used by the Canadian System of National Accounts, the Monthly Survey of Manufacturing (record number 2101) and Prices programs. The data are also used by the business community, trade associations and federal and provincial departments, as well as international organizations and associations, to profile the manufacturing and logging industries, undertake market studies, forecast demand and develop trade and tariff policies.
The survey is administered as part of the Integrated Business Statistics Program (IBSP). The IBSP has been designed to integrate approximately 200 separate business surveys into a single master survey program. It aims to collect industry and product detail at the provincial level while minimizing overlap between different survey questionnaires. The redesigned business survey questionnaires have a consistent look, structure and content.
The integrated approach makes reporting easier for firms operating in different industries because they can provide similar information for each branch operation. This way they avoid having to respond to questionnaires that differ for each industry in terms of format, wording and even concepts. The combined results produce more coherent and accurate statistics on the economy.
Reference period: The 12-month fiscal period ending between April 1st of the reference year and March 31st of the following year.
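The reference-period rule above can be expressed as a simple date check (an illustrative sketch; the function name is ours, not part of any Statistics Canada system):

```python
from datetime import date

def in_reference_year(fiscal_year_end: date, reference_year: int) -> bool:
    """True if a 12-month fiscal period ending on `fiscal_year_end` belongs
    to `reference_year`, i.e. it ends between April 1st of the reference
    year and March 31st of the following year, inclusive."""
    window_start = date(reference_year, 4, 1)
    window_end = date(reference_year + 1, 3, 31)
    return window_start <= fiscal_year_end <= window_end
```

For example, a fiscal year ending December 31, 2016 or March 31, 2017 both fall in reference year 2016.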
Collection period: From April through to September of the year following the reference period.
- Business performance and ownership
- Financial statements and performance
Data sources and methodology
The target population of the ASML comprises all establishments primarily engaged in manufacturing and logging activities. Beginning with reference year 2013, the data are classified by industry based on the North American Industry Classification System (NAICS) 2012. Under the North American Industry Classification System (NAICS), logging establishments are classified to NAICS 1133 and manufacturing establishments to NAICS sectors 31, 32 and 33.
The target population of the ASMLS (supplementary content) is a subset of the ASML survey. This comprises all establishments engaged in the manufacturing of products that were covered under the scope of the 2006 Canada-United States softwood lumber agreement.
The survey questionnaires comprise generic modules designed to cover all manufacturing and logging industries. These modules include revenues, expenses and employment. The questionnaires also include industry-specific and activity-specific modules designed to collect financial and non-financial characteristics that pertain specifically to each industry.
In order to reduce respondent burden, smaller firms receive an industry-specific characteristics questionnaire (a shortened version) that does not include the revenue and expense modules. This shortened version is designed to collect both financial and non-financial characteristics, while revenue and expense data are extracted from administrative files.
This is a sample survey with a cross-sectional design.
Data collection for this reference period: 2017-04-21 to 2017-08-31
Responding to this survey is mandatory.
Data are collected directly from survey respondents and extracted from administrative files.
Data are collected primarily through an electronic questionnaire, while respondents are given the option of receiving a paper questionnaire, replying by telephone interview or using other electronic filing methods. Follow-up for non-response and for data validation is conducted by email, telephone or fax.
A strategy to replace survey data with tax data has been introduced to reduce the respondent burden and survey costs.
The strategy involves using tax data instead of survey data for some simple units (for example, a single location and a single activity). As part of the Integrated Business Statistics Program (IBSP), T1 tax data are used for unincorporated businesses and T2 tax data for incorporated businesses. Data replacement may be used to correct outliers or to replace partially or completely missing data. Tax data may also be used to reconcile survey data.
Data integration combines data from multiple sources, including survey data collected from respondents, administrative data from the Canada Revenue Agency and other forms of auxiliary data when applicable. During the data integration process, data are imported, transformed, validated, aggregated and linked from the different data source providers into the formats, structures and levels required for IBSP processing. Administrative data are used in a data replacement strategy for a large number of financial variables for most small and medium enterprises and for a select group of large enterprises, avoiding the collection of these variables. Administrative data are also used as an auxiliary source of data for editing and imputation when respondent data are not available.
View the Questionnaire(s) and reporting guide(s).
Error detection is an integral part of both collection and data processing activities. Automated edits are applied to data records during collection to identify reporting and capture errors. These edits identify potential errors based on year-over-year changes in key variables, totals and ratios that exceed tolerance thresholds, as well as problems in the consistency of collected data (e.g. a total variable that does not equal the sum of its parts).

During data processing, other edits are used to automatically detect errors or inconsistencies that remain in the data following collection. These edits include value edits (e.g. Value > 0, Value > -500, Value = 0), linear equality edits (e.g. Value1 + Value2 = Total Value), linear inequality edits (e.g. Value1 >= Value2), and equivalency edits (e.g. Value1 = Value2). When errors are found, they can be corrected through the failed-edit follow-up process during collection or via imputation.

Extreme values are also flagged as outliers, using automated methods based on the distribution of the collected information. Following their detection, these values are reviewed in order to assess their reliability; manual review of other units may identify additional outliers. These outliers are excluded from the calculation of the ratios and trends used for imputation, and from donor imputation. In general, every effort is made to minimize the non-sampling errors of omission, duplication, misclassification, reporting and processing.
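The edit types described above can be sketched as predicate checks on a reported record (a simplified illustration; the variable names and the particular rules are ours, not the survey's actual edit set):

```python
def run_edits(record):
    """Apply sample value, linear equality and linear inequality edits
    to a reported record; return the names of the edits that fail."""
    failures = []
    # Value edit: revenue must be non-negative.
    if not record["revenue"] >= 0:
        failures.append("value: revenue >= 0")
    # Linear equality edit: components must sum to the reported total.
    if record["salaries"] + record["materials"] != record["total_expenses"]:
        failures.append("equality: salaries + materials = total_expenses")
    # Linear inequality edit: one component cannot exceed the total.
    if not record["salaries"] <= record["total_expenses"]:
        failures.append("inequality: salaries <= total_expenses")
    return failures
```

A record that passes all edits returns an empty list; any failed edit is then resolved through follow-up or imputation.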
When non-response occurs, when respondents do not completely answer the questionnaire, or when reported data are considered incorrect during the error detection steps, imputation is used to fill in the missing information and modify the incorrect information. Many methods of imputation may be used to complete a questionnaire, including manual changes made by an analyst. The automated, statistical techniques used to impute the missing data include: deterministic imputation, replacement using historical data (with a trend calculated, when appropriate), replacement using auxiliary information available from other sources, replacement based on known data relationships for the sample unit, and replacement using data from a similar unit in the sample (known as donor imputation). Usually, key variables are imputed first and are used as anchors in subsequent steps to impute other, related variables.
Imputation generates a complete and coherent micro data file that covers all survey variables.
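One of the automated techniques named above, replacement using historical data with a calculated trend, can be sketched as follows (illustrative only; the trend here is a simple ratio of current to prior totals among responding units):

```python
def trend_impute(prior_value, respondents):
    """Impute a non-respondent's missing current value from its prior
    value, scaled by the aggregate current/prior ratio observed among
    responding units. `respondents` is a list of (prior, current) pairs."""
    trend = (sum(cur for _, cur in respondents)
             / sum(pri for pri, _ in respondents))
    return prior_value * trend
```

For instance, if respondents collectively grew 10% year over year, a non-respondent that reported 50 last year would be imputed roughly 55 this year.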
The sample used for estimation comes from a two-phase sampling process. An initial sampling weight (the design weight) is calculated for each unit of the survey; it is the product of the inverses of the selection probabilities from each phase, conditional on the realized sample size. The weight calculated for each sampling unit indicates how many other units it represents. The final weights are usually equal to or greater than one. "Take-all" sampling units have sampling weights of one and represent only themselves.
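The design-weight calculation described above amounts to multiplying the inverse selection probabilities of the two phases (a sketch under that simplification; production weights also reflect later adjustments such as calibration):

```python
def design_weight(p_phase1, p_phase2):
    """Design weight for a unit in a two-phase sample: the product of
    the inverses of its selection probabilities in each phase.
    A take-all unit has probability 1 in both phases, hence weight 1."""
    return (1.0 / p_phase1) * (1.0 / p_phase2)
```

A unit selected with probability 0.5 in phase one and 0.25 in phase two thus represents eight units, itself included.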
Estimation of totals is done by simple aggregation of the weighted values of all estimation units that are found in the domain of estimation. Estimates are computed for several domains of estimation such as industrial groups and provinces/territories, based on the most recent classification information available for the estimation unit and the survey reference period. It should be noted that this classification information may differ from the original sampling classification since records may have changed in size, industry or location. Changes in classification are reflected immediately in the estimates.
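Estimation by simple aggregation of weighted values over a domain can be sketched as (field names are ours, for illustration):

```python
def domain_total(units, domain):
    """Estimate a total for one domain of estimation (e.g. an industry
    group within a province) by summing weight * value over the units
    whose current classification places them in that domain."""
    return sum(u["weight"] * u["value"]
               for u in units
               if u["domain"] == domain)
```

Because the domain is taken from the most recent classification, a unit that changed industry or province contributes to its new domain, as the paragraph above notes.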
When some enterprises have reported data combining many units located in more than one province or territory, or in more than one industrial classification, data allocation is required. Factors based on information from sources such as tax files and Business Register profiles are used to allocate the data reported on the combined report among the various estimation units where this enterprise is in operation. The characteristics of the estimation units are used to derive the domains of estimation, including the industrial classification and the geography.
Units that turn out to be larger than expected are treated as misclassified, and their weights are adjusted so that they represent only themselves (for example, a large unit found in a stratum of small units).
The weights can be modified and adjusted using updated information from taxation data. Using a statistical technique called calibration, the final set of weights is adjusted in such a way that the sample represents as closely as possible the taxation data of the population of this industry.
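In its simplest single-constraint form, the calibration described above rescales the design weights by one common factor so that the weighted sample total matches the known tax-data total (a sketch; the production system calibrates to many constraints simultaneously):

```python
def calibrate_weights(weights, values, population_total):
    """Ratio calibration on one auxiliary variable: scale every design
    weight by the factor g that makes sum(g * w_i * x_i) equal the
    known population total taken from taxation data."""
    g = population_total / sum(w * x for w, x in zip(weights, values))
    return [g * w for w in weights]
```

After calibration, the weighted sample reproduces the tax-data total exactly for the calibration variable.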
In the case of the ineligible for sampling portion (also called the "Take-none" portion) of the target population, available auxiliary information (such as related tax data) is simply aggregated to come up with an estimate. If an estimate is required and auxiliary information is not available for a particular variable, modeling using other known auxiliary data is done in order to create data for each unit in the take-none portion. These are then simply aggregated to produce the estimate. The overall estimate includes the estimates from both the surveyed portion and the Take-none portion.
Prior to the data release, combined survey results are analyzed for comparability; in general, this includes a detailed review of: individual responses (especially for the largest companies), general economic conditions, coherence with results from related economic indicators, historical trends, and information from other external sources (e.g., associations, trade publications, newspaper articles).
The survey estimates are also analyzed with trends observed in related Statistics Canada data series (e.g., Monthly Survey of Manufacturing (record number 2101), sub-annual manufacturing commodity surveys).
Statistics Canada is prohibited by law from releasing any information it collects which could identify any person, business, or organization, unless consent has been given by the respondent or as permitted by the Statistics Act. Various confidentiality rules are applied to all data that are released or published to prevent the publication or disclosure of any information deemed confidential. If necessary, data are suppressed to prevent direct or residual disclosure of identifiable data.
In order to prevent any data disclosure, confidentiality analysis for financial and commodity variables is done using the G-CONFID system. G-CONFID is used for primary confidentiality as well as for secondary suppression (residual disclosure). Direct disclosure, or primary confidentiality, occurs when the value in a tabulation cell is composed of or dominated by a few enterprises, while residual disclosure occurs when confidential information can be derived indirectly by piecing together information from different sources or data series.
Revisions and seasonal adjustment
The most recent annual data are subject to a one year revision policy.
All surveys are subject to sampling and non-sampling errors. Sampling error occurs because population estimates are derived from a sample of the population rather than the entire population. Non-sampling error is not related to sampling and may occur for various reasons during the collection and processing of data. For example, non-response is an important source of non-sampling error. Under or over-coverage of the population, differences in the interpretations of questions and mistakes in recording, coding and processing data are other examples of non-sampling errors. To the maximum extent possible, these errors are minimized through careful design of the survey questionnaire, verification of the survey data, and follow-up with respondents when needed to maximize response rates.
Measures of sampling error are calculated for each estimate. When non-response occurs, it is also taken into account: the quality rating is lowered according to the importance of the missing information to the estimate. Other quality indicators, such as the response rate, are also provided.
Both the sampling error and the non-response rate are combined into one quality rating code. This code uses letters ranging from A to F, where A means the data are of excellent quality and F means they are too unreliable to be published. Estimates with a quality rating of F are not published. These quality rating codes can be requested and should always be taken into consideration.
Quality indicator descriptions are: A - Excellent; B - Very good; C - Good; D - Acceptable; E - Use with caution; F - Too unreliable to publish.
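As a hedged illustration of how such a rating might be assigned, the sketch below maps a coefficient of variation to a letter code. The thresholds are hypothetical placeholders, not Statistics Canada's published cut-offs, and the actual ASML rating also incorporates the non-response rate:

```python
def quality_code(cv_percent):
    """Map a coefficient of variation (in percent) to a letter quality
    rating from A (excellent) to F (too unreliable to publish).
    NOTE: these thresholds are hypothetical, for illustration only."""
    for threshold, code in [(5, "A"), (10, "B"), (15, "C"),
                            (25, "D"), (35, "E")]:
        if cv_percent <= threshold:
            return code
    return "F"  # estimates rated F are not published
```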