Business Response Survey

BRS 2021 Technical Notes

You can send comments or questions to the Business Response Survey (BRS) staff by email.


Methodology

Data for the 2021 Business Response Survey (BRS) were collected from July 27 through September 30, 2021. The BRS relied on the existing data collection instrument of the Annual Refiling Survey (ARS), which is administered by the BLS Quarterly Census of Employment and Wages (QCEW) program. BRS survey responses were solicited via email and printed letters. Responses were collected online using the same platform the ARS relies on. This allowed a large, nationally representative sample to be surveyed at minimal financial cost to BLS.

Definitions

Establishments. An individual establishment is generally defined as a single physical location at which one, or predominantly one, type of economic activity is conducted. Most employers covered under state unemployment insurance (UI) laws operate only one place of business.

North American Industry Classification System (NAICS) codes. NAICS codes are the standard used by federal statistical agencies in classifying business establishments for the purpose of collecting, analyzing, and publishing statistical data. The BRS is based on 2017 NAICS.

Large/small. For these data, establishments with 2020 annual average employment greater than 499 are considered large.

Sample Design and Selection Procedures

For the 2021 BRS, BLS selected a stratified sample of 322,560 establishments from a universe of just over 8.6 million establishments. The universe source was the set of establishments from the 2020 fourth quarter BLS Business Register that were identified as in-scope for this survey.

The BLS Business Register is a comprehensive quarterly business name and address file of employers subject to state UI laws. It is sourced from data gathered by the QCEW program. Each quarter, QCEW employment and wage information is collected and summarized at various levels of geography and industry. Geographic breakouts include county, Metropolitan Statistical Area (MSA), state, and national. Industry breakouts are based on the six-digit NAICS.

The QCEW covers all fifty states, the District of Columbia, Puerto Rico, and the U.S. Virgin Islands. The primary sources of data for these 53 entities are the Quarterly Contributions Reports (QCRs) submitted to State Workforce Agencies (SWAs) by employers subject to state UI laws. The QCEW program also gathers separately sourced data for Federal Government employees covered by the Unemployment Compensation for Federal Employees (UCFE) program.

There were a little over 10.5 million establishments on the 2020 fourth quarter BLS Business Register that served as the source of the BRS’s sampling universe. However, about 1.9 million of these establishments were determined to be out-of-scope for the survey. The following categories of establishments were excluded from the universe:

  • Public Administration & Government (NAICS 92)
  • Private Households (NAICS 814110)
  • U.S. Postal Service (NAICS 491110)
  • Services for the Elderly and Persons with Disabilities (NAICS 624120) with Establishment Size = 1
  • Unclassified Accounts (NAICS 999999)
  • U.S. Virgin Islands (State FIPS 78)

The 2021 BRS leveraged the technical and collection infrastructure of the ARS. While the synchronization of the two surveys was efficient, it created a need to adapt the BRS sample in accordance with some of the constraints imposed on the ARS sample. Regarding ARS sampling constraints, establishments with one to three employees are never administered the ARS and, of the establishments that are eligible for the ARS, roughly one-third are administered the ARS in any given year. The determination as to which ARS eligible establishments are active for any year’s ARS is based on a random mechanism. During BRS sample selection, active ARS eligible establishments and ARS ineligible establishments were “selectable,” whereas inactive ARS eligible establishments were disallowed from selection, in part as a means of managing respondent burden over time.

To integrate the BRS sample into the ARS framework, each establishment in the BRS sampling universe was categorized into one of the following groups:

  • ARS Eligible Establishments – Active for this Year’s ARS (BRS Selectable)
  • ARS Eligible Establishments – Inactive for this Year’s ARS (BRS Not Selectable)
  • ARS Ineligible Establishments (BRS Selectable)

Each BRS sampling stratum consisted of establishments from one or more of the groups above. Within strata containing only active ARS eligible establishments or only ARS ineligible establishments, sample selection proceeded with no restrictions using simple random sampling. Strata containing only inactive ARS eligible establishments were later imputed because they contained no selectable establishments and, therefore, yielded no survey results. For any stratum containing a mix of ARS eligible and ARS ineligible establishments, stratum sample sizes were allocated proportionately to each sub-population. Within the stratum’s ARS ineligible sub-population, sample selection then proceeded with no restrictions using simple random sampling. Within the stratum’s ARS eligible sub-population, sample selection proceeded by taking a simple random sample from amongst only the active/selectable establishments.

Note that for any stratum containing both active ARS eligible and inactive ARS eligible establishments, the sample was selected from amongst only the active portion of the stratum. This selection was still considered to be representative of all ARS eligible establishments in the stratum, regardless of active/inactive status, since the determination of ARS active/inactive status was random. Because of this, and because stratum sample sizes were proportionately allocated to eligible/ineligible sub-populations, sample units were equally weighted within (but not across) strata and survey question combinations.
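
To make the selection mechanics concrete, the following is a minimal sketch of within-stratum selection under these ARS constraints. It is illustrative only, not BLS production code; the record layout, the 'ars_group' field, and the rounding used in the proportional allocation are assumptions.

```python
import random

def select_stratum_sample(stratum, n_stratum, rng=None):
    """Select a BRS sample within one stratum, honoring ARS constraints.

    Each establishment dict is assumed to carry an 'ars_group' field
    taking one of three values: 'eligible_active', 'eligible_inactive',
    or 'ineligible'. Inactive ARS eligible units are never selectable.
    """
    rng = rng or random.Random(0)
    eligible = [e for e in stratum if e["ars_group"].startswith("eligible")]
    ineligible = [e for e in stratum if e["ars_group"] == "ineligible"]
    active = [e for e in eligible if e["ars_group"] == "eligible_active"]

    if not active and not ineligible:
        return []  # no selectable units; the stratum is imputed later

    # Allocate the stratum sample size proportionately to the eligible and
    # ineligible sub-populations. All eligibles count toward the allocation,
    # even though only the active ones are selectable.
    total = len(eligible) + len(ineligible)
    n_elig = round(n_stratum * len(eligible) / total)
    n_inel = n_stratum - n_elig

    sample = rng.sample(active, min(n_elig, len(active)))
    sample += rng.sample(ineligible, min(n_inel, len(ineligible)))
    return sample
```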

When designing the survey and determining sample sizes, BLS researchers, analysts, and methodologists collaborated to identify the key research goals. As part of this process, a balance had to be struck between producing precise estimates for various establishment aggregations and the costs associated with fielding a sample that could deliver on those goals. Based on the types of administrative data available for establishments on the BLS Business Register and based on the team’s experience analyzing similar establishment-based surveys, research goals centered on creating survey estimates for different combinations of establishment geography, industry type, and/or establishment size. This motivated the decision to choose a design that stratified on all three factors. A decision was then made to define granular strata to keep the strata homogeneous and to facilitate the construction of a wide array of broader composite estimates as functions of the more narrowly defined strata estimates. In the end, for the 2021 BRS, strata were defined jointly on the following factors:

     • State
     {All states plus the District of Columbia and Puerto Rico}
     • Industry Type, Based Primarily on Two-Digit NAICS
     {11-21, 22, 23, 31-33, 42, 44-45, 48-49Mod, 4811, 484, 51, 52-53, 54-56, 61, 62Mod, 71, 72, 81}
     • Establishment Size, Based on Employment
     {1-4, 5-9, 10-19, 20-49, 50-99, 100-249, 250-499, 500-999, 1,000+}

In the industry type list above, industry grouping 48-49Mod excludes industries with NAICS classifications of 484 and 4811. Industry 62Mod excludes industries with a NAICS classification of 624120 that also have an establishment size of one.

In the establishment size list above, all nine “narrow” size groupings are given. Some BRS analyses were conducted using two other broader establishment size groupings – a “medium-width” grouping and a “broad,” or large/small, grouping. The medium-width size classes were 1-19, 20-99, 100-499, and 500+. The large/small groupings were 0-499 and 500+.
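
The three size groupings amount to simple mappings from employment counts. A minimal sketch, with cutoffs taken from the lists above (the function names are illustrative):

```python
def narrow_size_class(employment: int) -> str:
    """Map annual average employment to the nine narrow size classes."""
    cutoffs = [(4, "1-4"), (9, "5-9"), (19, "10-19"), (49, "20-49"),
               (99, "50-99"), (249, "100-249"), (499, "250-499"),
               (999, "500-999")]
    for upper, label in cutoffs:
        if employment <= upper:
            return label
    return "1,000+"

def medium_size_class(employment: int) -> str:
    """Map employment to the medium-width grouping."""
    if employment <= 19:
        return "1-19"
    if employment <= 99:
        return "20-99"
    return "100-499" if employment <= 499 else "500+"

def broad_size_class(employment: int) -> str:
    """Map employment to the broad large/small grouping."""
    return "0-499" if employment <= 499 else "500+"
```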

At the time the survey was designed, it was clear to researchers and analysts that different industries and establishment size classes would have different pandemic-related programs and policies targeted towards them. Because of this, specific state-by-industry, state-by-size-class, and industry-by-size-class establishment aggregations were identified as the key levels at which to produce estimates to a certain degree of precision while still being realistic about survey costs and burden. These aggregations were used to drive sample size determination. Specifically, they were:

     • State by Goods-Producing/Services-Producing Industry Type Categorization
     {52*2 = 104 estimation cells}
     • State by Medium-Width Establishment Size
     {52*4 = 208 estimation cells}
     • Modified NAICS Sector by Medium-Width Establishment Size
     {15*4 = 60 estimation cells}
     • Narrow Establishment Size
     {9 estimation cells}

Researcher interest was not, and is not, limited to these aggregations. However, because these were the aggregates initially identified as the most important ones, the sample was designed to achieve a desired precision when estimating specifically for these groupings. Alternatively, the sample was not designed to achieve a desired precision when estimating for other groupings, although in some cases the desired precision was achieved anyway. Note that researchers were certainly interested in estimating with precision at broader levels such as national, state, modified NAICS sector, and narrow size class. But it was easy to see that a sample that allowed for the generation of precise estimates for the four aggregates listed above would certainly allow for the generation of precise estimates for these broader level aggregates.

For each estimation cell within each of the four key aggregates listed above, sample sufficiency counts were determined based on estimating proportions to an agreed upon degree of precision. The formula for the sample sufficiency of an estimation cell was based on the deconstruction of the formula for the variance of a proportion (using simple random sampling within the cell). Estimation cell sample sufficiency counts were then allocated proportionately to all strata within each cell. The result was a set of four “allocated sufficiency counts” per stratum. For each stratum, the maximum of the four sufficiency counts was chosen. Each stratum’s chosen sufficiency count was then divided by an estimated survey response rate to derive a stratum sample size. If the chosen value exceeded the number of selectable establishments in a stratum, the stratum’s final sample size was set equal to its number of selectable establishments. In that case, the truncated sample size was reallocated to other strata mapping to the same estimation cell. Once sample sizes were finalized, samples were selected within each stratum as described earlier when discussing the composition of strata in terms of active ARS eligible, inactive ARS eligible, and ARS ineligible establishments.
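
The arithmetic of the paragraph above can be sketched as follows. This is illustrative, assuming the textbook sample-size formula for a proportion with worst-case p = 0.5; the precision target, confidence level, response rate, and rounding rules shown are placeholders, not the values BLS used.

```python
import math

def sufficiency_count(margin_of_error=0.05, z=1.96, p=0.5):
    """Responses needed to estimate a proportion within the margin of
    error, from the SRS variance formula p(1 - p) / n solved for n."""
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

def stratum_sample_size(allocated_counts, response_rate, n_selectable):
    """Take the max of the four allocated sufficiency counts, inflate by
    the estimated response rate, and cap at the selectable population
    (any truncated amount is reallocated to other strata in the cell)."""
    n = math.ceil(max(allocated_counts) / response_rate)
    return min(n, n_selectable)

print(sufficiency_count())                               # 385
print(stratum_sample_size([40, 55, 30, 25], 0.25, 150))  # min(220, 150) = 150
```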

Questionnaire

The BRS asked questions in eight topic areas: 1) telework, 2) worker flexibilities and changes in pay, 3) coronavirus precautions and vaccination, 4) changes to square footage or location, 5) supplementing workforce with workers not on the payroll, 6) automation, 7) drug testing, and 8) whether a business received a loan or grant from the government tied to the payroll. For Questions 4, 5, 14, 15, and 16, establishments could have experienced or made decisions about more than one of the situations presented. For example, in Question 4 an establishment could have started flexible or staggered work hours and compressed or alternative work schedules. For these questions, respondents were instructed to select all the response options that applied to them.

For Question 3, respondents were asked what percent of employees currently telework in the following amounts: all the time; some of the time, but not all; and rarely or never. The respondent was asked to enter a percent beside each option, with the numbers totaling 100%.

Response Rate

The 2021 BRS consisted of 25 questions to which establishments could respond. A survey was considered usable if the respondent answered at least 5 of the 25 questions. Estimates were generated from usable surveys only.

Of the 322,560 sampled establishments, about 5,300 were deemed uncollectible prior to fielding the sample. These uncollectible establishments were treated as non-responders. Typically, these were establishments that changed status between the time when the universe was drawn and a point in time closer to fielding the sample, such that the establishment’s new status indicated it could not be contacted and/or could not respond to the survey. Thus, the 2021 BRS was administered to about 317,000 establishments.

Of the establishments that were given the opportunity to take the survey, 85,254 participated to some degree, and 82,487 were usable (answered 5 or more questions). Thus:

     • Survey Participation Rate (relative to the full sample) = 26.4%
     • Survey Participation Rate (relative to the collectible sample) = 26.9%

     • Usable Response Rate (relative to the full sample) = 25.6%
     • Usable Response Rate (relative to the collectible sample) = 26.0%

     • Usability Rate Amongst Survey Participants = 96.8%
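
These rates follow directly from the counts above, as the short calculation below confirms (the uncollectible count is approximate, so the collectible-sample rates are approximate as well):

```python
sampled       = 322_560  # full sample
uncollectible =   5_300  # approximate; treated as non-responders
participants  =  85_254
usable        =  82_487  # answered 5 or more questions

collectible = sampled - uncollectible
print(f"Participation (full sample):        {participants / sampled:.1%}")      # 26.4%
print(f"Participation (collectible sample): {participants / collectible:.1%}")  # 26.9%
print(f"Usable (full sample):               {usable / sampled:.1%}")            # 25.6%
print(f"Usable (collectible sample):        {usable / collectible:.1%}")        # 26.0%
print(f"Usability among participants:       {usable / participants:.1%}")       # 96.8%
```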

Data Editing

The 25 answerable questions in the 2021 BRS were organized into 22 numbered questions, where two of the numbered questions contained more than one answerable query. Of these 25 questions, some contained one or more of the following characteristics:

  • “Don’t Know” response options
  • “Not Applicable” response options
  • Free text entry response only
  • Select only one response option
  • Select all that apply
  • None of the above, or similar variants

Two survey questions contained “Don’t Know” (DK) response options. For these questions, estimates were produced for all available response options, including the DK option. Additionally, separate sets of estimates were produced by treating the DK responders as non-responders, thereby producing estimates for only the non-DK response options after adjusting for the DK responders. The latter estimates are the ones being published.

Two survey questions contained “Not Applicable” (N/A) response options. As was the case for questions containing DK responses, estimates for these questions were produced for all available response options including the N/A option, and estimates were produced for only non-N/A response options after adjusting for the N/A responders.

Although there are similarities between the treatment of DK response options and N/A response options, there is one fundamental difference. When adjusting non-DK responses for DK responses, the adjusted estimates still pertain to the full original universe of inference. However, when adjusting non-N/A responses for N/A responses, the adjusted estimates pertain to only the applicable subset of the original universe of inference. Unfortunately, for estimates that involve these N/A adjustments, population counts of the relevant redefined universes of inference are not known and can only be estimated based on survey results.

Occasionally, researchers are interested in estimates that condition the results of one question on specific results from another question. In some of these cases, the conditioning may involve redefining the universe of inference similar to how the universe of inference is redefined for the analyses of questions that involve the adjustment of non-N/A responses for N/A responses.

Analyses of questions that request only free text entry responses are not included in the published results.

Several questions ask respondents to check all response options that apply. Often, these questions include a “None of the Above” response option. For these questions, it was possible for respondents to incorrectly select both “None of the Above” and one or more other options. In these instances, survey responses were edited by effectively deselecting the “None of the Above” option.

Finally, note that survey Question 3, which contained three sub-questions related to the prevalence of telework within establishments, presented some unique challenges. For Question 3, respondents were asked “What percent of your employees CURRENTLY telework in the following amounts? These answers should total to approximately 100%.” Options provided were:

  • Telework all the time
  • Telework some of the time, but not all
  • Telework rarely or never

Respondents were expected to provide a percent beside each option for the proportion of employees engaged in that type of telework at their establishment. For establishments that did not offer telework, this question did not function as expected. About 18,000 respondents entered 0 for each option. Matching these responses to the 2020 BRS results suggested that these establishments offered no telework and were mistakenly answering 0% for each response option with the intention of indicating that there was no telework available to any employees at that location. To verify this assumption, BLS asked a follow-up question of 2,500 of the respondents that answered 0% for each Question 3 option. BLS emailed these respondents, asked whether telework was available at their establishment, and requested either a Yes or No response. Among these respondents, 92% responded that telework was not offered at their establishment. Using this information, BLS edited a triple zero response to Question 3 to a response indicating that 100% of employees at the establishment telework rarely or never.

The structure of Question 3 also caused other issues that created estimation challenges. For example, for some responses, the percentages selected for the three parts did not sum to 100. Other respondents answered only one or two of the three subparts. Some further response editing was performed to address these issues when there was enough conclusive information to make reasonable edits. Ultimately, because of the challenges presented by this question, a decision was made to simplify the analysis by treating it as a single modified question based on the information gathered from all three subparts.
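
The two edit rules described in this section reduce to a few lines of logic. A minimal sketch, with field names assumed for illustration:

```python
def edit_select_all(options_selected):
    """For select-all-that-apply questions: if 'None of the above' was
    selected along with other options, deselect it."""
    if "None of the above" in options_selected and len(options_selected) > 1:
        return [o for o in options_selected if o != "None of the above"]
    return options_selected

def edit_question_3(pcts):
    """pcts = {'all': x, 'some': y, 'rarely_never': z}, in percent.
    A triple-zero response is recoded to 100% 'rarely or never'."""
    if all(v == 0 for v in pcts.values()):
        return {"all": 0, "some": 0, "rarely_never": 100}
    return pcts

print(edit_select_all(["None of the above", "Paid leave"]))      # ['Paid leave']
print(edit_question_3({"all": 0, "some": 0, "rarely_never": 0}))
```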

Estimation Procedure

For the 2021 BRS, the main survey measures of interest included:

  • Proportion of establishments possessing an attribute
  • Number of establishments possessing an attribute
  • Proportion of employees working at establishments that possess an attribute
  • Number of employees working at establishments that possess an attribute

Each measure was estimated within each stratum, provided the stratum included at least one usable responder. Strata estimates were then combined to derive composite estimates for various analysis aggregations, e.g., national estimates, state estimates.

For estimation methodology purposes, the primary measure of interest was the estimated proportion of establishments possessing an attribute being assessed by a survey question, e.g., the proportion of establishments that increased telework for some or all employees since the start of the coronavirus pandemic. The other estimates were then calculated as functions of these proportions.

Specifically, within-stratum establishment count estimates were calculated as the product of the stratum’s establishment proportion estimate and the stratum’s total establishment population. Similarly, within-stratum employment count estimates were calculated as the product of the stratum’s establishment proportion estimate and the stratum’s total employment.
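
As a concrete sketch of these two products (illustrative values, not survey data):

```python
def stratum_estimates(n_yes, n_usable, n_establishments, employment):
    """Direct stratum-level estimates for one survey question.

    n_yes:             usable responders possessing the attribute
    n_usable:          usable responders to this question in the stratum
    n_establishments:  stratum establishment population
    employment:        stratum employment population
    """
    p_hat = n_yes / n_usable
    return p_hat, p_hat * n_establishments, p_hat * employment

# 12 of 40 usable responders report the attribute in a stratum of
# 1,000 establishments employing 25,000 workers.
print(stratum_estimates(12, 40, 1_000, 25_000))  # (0.3, 300.0, 7500.0)
```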

Within each stratum, for a particular survey question, establishment proportion estimates were calculated over the sample units that:

  • Responded to at least 5 of the 25 survey questions
  • Responded to the survey question with something other than a response of “Don’t Know”
  • Responded to the survey question with something other than a response of “Not Applicable”

When estimating stratum-level establishment and employment counts, sample unit weights were adjusted upward to account for both unit and item non-response. For these purposes, “Don’t Know” responses were treated as item non-response.

Final composite estimation was achieved in stages:

  • Direct Strata Estimation (for strata with at least one usable responder)
  • Preliminary Composite Estimation (for strata with at least one usable responder)
  • Strata Imputation (for strata with no usable responders)
  • Final Composite Estimation (incorporated direct and imputed strata estimates)

Direct strata estimation was conducted for strata and survey questions for which there was at least one usable responder. From these strata-level results, preliminary (i.e., first-pass) composite estimates were produced for establishment proportions (and their variances) for various aggregations of strata, e.g., national, state.

Composite establishment proportion estimates were calculated as weighted sums of strata establishment proportion estimates. Composite estimation weights (i.e., strata weights) were calculated as each stratum’s establishment population proportion relative to the total number of establishments in the composite.

During preliminary composite estimation, the weighted sum was taken over only those strata for which direct strata estimates could be calculated. Therefore, strata weights were adjusted to account for only those strata contributing to a particular preliminary composite estimate.

Preliminary composite estimates for establishment proportions (and their variances) were then used to impute missing strata-level establishment proportions (and their variances). These imputed strata-level establishment proportions were then used to calculate strata-level estimates for establishment and employment counts by multiplying the imputed proportions by their corresponding stratum establishment and employment count populations, respectively.

Lastly, final composite estimation was run using direct strata estimates where possible and imputed strata estimates where necessary. As was the case during preliminary composite estimation, final composite proportion estimates were calculated as weighted sums of strata establishment proportion estimates. However, during final composite estimation, all strata contained values (either directly calculated or imputed) and, therefore, strata weights no longer needed to be adjusted for missing strata.

Final composite estimates of establishment and employment counts were calculated as unweighted sums of the relevant strata estimates.

Final composite estimates of employment proportions were calculated as weighted sums of strata establishment proportions, where strata weights were calculated as each stratum’s total employment proportion relative to the total employment in the composite.
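
The weighted-sum compositing described above, for both establishment-based and employment-based strata weights, can be sketched as follows (illustrative values; the data structure is assumed):

```python
def composite_proportion(strata, key="establishments"):
    """Composite proportion as a weighted sum of strata proportions.

    Each stratum dict carries a proportion estimate 'p' plus population
    totals. key picks establishment-based weights (for establishment
    proportions) or employment-based weights (for employment proportions).
    """
    total = sum(s[key] for s in strata)
    return sum(s["p"] * s[key] / total for s in strata)

strata = [
    {"p": 0.30, "establishments": 1_000, "employment": 25_000},
    {"p": 0.50, "establishments": 4_000, "employment": 20_000},
]
print(composite_proportion(strata))                    # 0.46
print(composite_proportion(strata, key="employment"))  # ~0.3889
```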

Details about Specific Tabulations

Estimates of Employment
The estimates of employment represent the total number of employees working at an establishment for which a particular situation occurred for at least one worker. It is not an estimate of the number of employees who experienced the situation. For example, the employment estimate for “Establishments that increased telework for some or all employees” (Question 1) is an estimate of the number of employees who worked at an establishment where at least one worker increased telework. It is not an estimate of the total number of workers who increased telework.

Question 3
For Question 3, respondents were asked to provide numeric percent responses to three categories, with the numbers adding to 100%. Rather than presenting numeric results, results are presented as the proportion of establishments where all employees telework all the time, all employees telework rarely or never, and some employees telework some of the time. The first two categories comprise respondents who entered 100% of employees telework all the time and 100% of employees telework rarely or never, respectively; the third comprises the remainder of respondents. The three categories total approximately 100%, with rounding.

Cross Question Tabulations for Questions 8 and 9
Question 8 asked if the business location required some or all employees to get a COVID-19 vaccination before coming to work on-site and Question 9 asked if the business location offered any employees a financial incentive, paid time off, or permitted employees to remain on the clock to get a COVID-19 vaccination. Both were Yes/No responses.

BLS created cross question tabulations using responses to Question 8 conditional on responses to Question 9 and responses to Question 9 conditional on responses to Question 8. These can be found as results 9.1-9.4:

  • 9.1: Employers that required the COVID-19 vaccine, conditional on offering a vaccine incentive
  • 9.2: Employers that required the COVID-19 vaccine, conditional on not offering a vaccine incentive
  • 9.3: Employers that offered a vaccine incentive, conditional on requiring the COVID-19 vaccine
  • 9.4: Employers that offered a vaccine incentive, conditional on not requiring the COVID-19 vaccine

Treatment of Missing/Don’t Know
In the tabulation for specific questions, blanks and “don’t know” responses were treated as a non-response to the question by the establishment and were not included in the estimation for the specific question. Non-response to a specific question was treated as described in the estimation methodology section.

Suppressions
A limited number of estimates were not released for reasons of confidentiality and data quality. These estimates are noted with ** on the data tables. Data quality suppressions were based on comparing the margins of error of proportion estimates against a threshold, where margins of error were based on 95% confidence intervals. Specifically, estimates were suppressed for data quality reasons where 1.96 * standard error (of the proportion estimate) > 0.10.
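
The data-quality rule is a direct margin-of-error check; a minimal sketch:

```python
def suppress_for_quality(std_error, threshold=0.10, z=1.96):
    """Flag a proportion estimate whose 95% margin of error exceeds
    the threshold: suppress when 1.96 * SE > 0.10."""
    return z * std_error > threshold

print(suppress_for_quality(0.06))  # True:  1.96 * 0.06 = 0.1176 > 0.10
print(suppress_for_quality(0.03))  # False: 1.96 * 0.03 = 0.0588
```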

Precision of Estimates

Sampling Error
The 2021 BRS estimates are statistical estimates subject to sampling error because they are based on a sample of establishments rather than the entire universe of establishments. Standard errors are provided for the construction of confidence intervals around an estimate and for hypothesis testing. The standard errors were derived using the variances generated according to the methodology outlined in the Estimation Procedure and Reliability sections.

Rounding
Estimates of employment and the number of establishments are rounded to the nearest integer. Estimates of percentages are rounded to one decimal place.

Reliability

Variance estimates were calculated for the following survey measures of interest:

  • Proportion of establishments possessing an attribute
  • Number of establishments possessing an attribute
  • Proportion of employees working at establishments that possess an attribute
  • Number of employees working at establishments that possess an attribute

Each variance was estimated within each stratum, provided the stratum included at least one usable responder. Strata variance estimates were then combined to derive composite variance estimates for various analysis aggregations, e.g., national estimates, state estimates.

For variance estimation methodology purposes, the primary variance of interest was the estimated variance of the proportion of establishments possessing an attribute being assessed by a survey question.

Variance estimation for establishment proportions involved (1) the application of the basic formula for the variance of a proportion drawn from a simple random sample and (2) the application of the general formula for the variance of a composite proportion estimator drawn from a stratified random sample. More specifically, regarding (2), the composite variance estimator used for establishment proportions was the sum of the product of each stratum’s relevant variance estimate and the square of its stratum weight, where the sum is taken over all strata in the composite.
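
A sketch of formulas (1) and (2) using the textbook SRS estimator; any finite population correction BLS applied is not spelled out above and is omitted here:

```python
def srs_proportion_variance(p_hat, n):
    """(1) Estimated variance of a proportion under simple random
    sampling within a stratum: p(1 - p) / n."""
    return p_hat * (1 - p_hat) / n

def composite_variance(strata):
    """(2) Variance of the composite proportion estimator: the sum over
    strata of (stratum weight squared) * (stratum variance)."""
    total = sum(s["establishments"] for s in strata)
    return sum((s["establishments"] / total) ** 2 * s["var_p"] for s in strata)

strata = [
    {"var_p": srs_proportion_variance(0.30, 40), "establishments": 1_000},
    {"var_p": srs_proportion_variance(0.50, 90), "establishments": 4_000},
]
print(composite_variance(strata))  # ~0.00199
```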

For any stratum in which every establishment in the universe was sampled and was a usable responder, the stratum variance was set to zero. Otherwise, for any stratum with more than one establishment in the universe but only one or two item responses for a particular survey question, the stratum variance was set to a default value. This was done to avoid setting these variances equal to zero, which could contribute to underestimating composite variance estimates. The default value was equivalent to the variance that would have been realized if the stratum had two responders, with one responding in the affirmative to the attribute being analyzed and the other responding in the negative. The same default variance was assigned in strata that had to be imputed.

Stratum-level variance estimates for establishment and employment counts were calculated as functions of the corresponding stratum-level establishment proportion variance estimates. For example, because each stratum-level establishment count estimate was calculated as the product of the stratum-level establishment proportion estimate and the stratum’s total establishment population, the stratum-level establishment count variance estimate was set equal to the stratum-level establishment proportion variance estimate times the square of the stratum’s total establishment population. Stratum-level employment count variance estimates followed the same formulation, except strata employment counts were used instead of strata establishment counts.
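
That scaling is simply Var(N * p_hat) = N**2 * Var(p_hat); a one-function sketch with illustrative values:

```python
def count_variances(var_p, n_establishments, employment):
    """Scale a stratum's proportion variance into establishment and
    employment count variances: Var(N * p_hat) = N**2 * Var(p_hat)."""
    return var_p * n_establishments ** 2, var_p * employment ** 2

# Proportion variance of 0.002 in a stratum of 1,000 establishments
# employing 25,000 workers.
print(count_variances(0.002, 1_000, 25_000))  # (2000.0, 1250000.0)
```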

Stratum-level variance estimates for employment proportions were set equal to the stratum-level variance estimates for establishment proportions, since employment proportions themselves were set equal to the directly calculated establishment proportions. Composite variance estimates for employment proportions were calculated using the same formula as for composite variance estimates for establishment proportions, except using employment-based strata weights instead of establishment-based strata weights.

Preliminary composite variance estimates were subject to the same strata weight adjustments as were preliminary composite proportion estimates. Similarly, final composite variance estimates were calculated using unadjusted strata weights because, at that point, all strata had either direct stratum-level variance estimates or imputed stratum-level variance estimates (i.e., there were no missing variance estimates).

Note that although preliminary composite variance estimates were calculated, they were not used during strata imputation for the 2021 BRS. Instead, as mentioned earlier, imputed strata variances were set according to the aforementioned default formula.

Non-Response Adjustment

The sample design/estimation strategy was to select independent samples within survey strata and then to calculate composite estimates by aggregating across strata results. The sample design stratified on three variables – state, modified NAICS sector, and narrow size class – yielding 52x17x9=7,956 possible survey strata. However, about 10% (807) of the possible survey strata contained no establishments. Of the 7,149 non-empty strata, 8% (572) contained no selectable establishments due to constraints associated with synchronizing the sample with the ARS. Of the 6,577 non-empty strata containing at least one selectable establishment, a sample of at least one establishment was drawn. Of these 6,577 strata, 15.6% (1,026) yielded no usable survey responses, leaving 5,551 strata with at least one usable survey responder, i.e., with at least one unit responder.

The numbers in the previous paragraph summarize the effect of unit non-response on strata usability. However, it should be noted that a survey respondent was considered usable if it yielded responses to at least 5 of the 25 survey questions. Therefore, item non-response created situations where a stratum had at least one usable response for one question but no usable responses for another question. For example, for Question 21, there were 35 strata that had at least one unit responder but no usable item responses to the specific question.

As discussed in some detail in the Estimation Procedure section, to accommodate strata with no usable item responses, final composite estimation was achieved in stages:

  • Direct Strata Estimation (for strata with at least one usable responder)
  • Preliminary Composite Estimation (for strata with at least one usable responder)
  • Strata Imputation (for strata with no usable responders)
  • Final Composite Estimation (incorporated direct and imputed strata estimates)

Specifically, regarding the approach to strata imputation itself, survey strata and question combinations that had no usable item responses had their establishment proportions and variances imputed according to the following hierarchy of composite estimates, ordered from highest priority composite to lowest priority composite going down the list:

  • State, modified NAICS sector, medium-width size class (1-19, 20-99, 100-499, 500+)
  • State, modified NAICS sector, size class large/small (1-499, 500+)
  • State, NAICS goods/services (G, S), size class large/small
  • Census division, NAICS goods/services, size class large/small
  • Census region, NAICS goods/services, size class large/small
  • Narrow size class (1-4, 5-9, 10-19, 20-49, 50-99, 100-249, 250-499, 500-999, 1,000+)

For example, for a particular survey question, suppose a state had no usable responses for modified NAICS sector 11-21 and size class 1,000+. Further, suppose that for the same question and state there were multiple responses for modified NAICS sector 11-21 and size class 500-999. In this case, there would be a viable composite estimate for the stratum’s corresponding state, sector, and medium-width size class composite cell. Therefore, the stratum’s establishment proportion and establishment proportion variance would get imputed from that first composite in the hierarchy.

As another example, for a particular survey question, suppose a state had no usable responses for modified NAICS sector 11-21 for both size classes 500-999 and 1,000+. Further, suppose that for the same question and state there were multiple responses for modified NAICS sectors 22, 23, and 31-33 for size class 500+. In this case, the first composite in the imputation hierarchy would prove inadequate. However, since modified NAICS sectors 11-21, 22, 23, and 31-33 are all categorized as goods-producing industries, the second composite down the priority list would yield a viable composite estimate and, therefore, would be used for imputation for the stratum.

It is worth noting that the lowest-prioritized composite in the imputation hierarchy – the narrow size class composite – is the fail-safe since composite estimates existed for all nine size classes for every question.
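
The hierarchy walk itself is a simple first-match lookup. A minimal sketch; the level names and the two dictionary structures are assumed for illustration:

```python
def impute_stratum(stratum_cells, composites):
    """Impute a missing stratum's (proportion, variance) from the first
    viable composite in the priority hierarchy.

    stratum_cells: maps each hierarchy level to the stratum's composite
                   cell at that level
    composites:    maps each level to {cell: (proportion, variance)}
                   for composites with viable estimates
    """
    hierarchy = [
        "state/sector/medium-size",
        "state/sector/large-small",
        "state/goods-services/large-small",
        "division/goods-services/large-small",
        "region/goods-services/large-small",
        "narrow-size",  # fail-safe: populated for every question
    ]
    for level in hierarchy:
        cell = stratum_cells[level]
        if cell in composites.get(level, {}):
            return composites[level][cell]
    raise RuntimeError("unreachable: the narrow-size composite always exists")
```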

The previous discussion details strata-level response. Within each stratum, for a given survey question, all unit responders that answered the question were assigned the same sample unit weight. For estimates that did not redefine the universe of inference, the assigned sample unit weight was set as the strata establishment (or employment) population divided by the number of usable respondents. As such, within-stratum sample weights were essentially the original sampling weights adjusted uniformly upwards for unit and item non-response, as well as for the elimination of otherwise usable respondents for things such as “Don’t Know” response adjustments or certain kinds of cross-question conditioning. For estimates that did involve the redefinition of the universe of inference, original sampling weights were adjusted upwards for unit and item non-response, but they were not adjusted for the elimination of otherwise usable respondents for things such as “Not Applicable” response adjustments or certain kinds of cross-question conditioning.

As a matter of good practice, sample unit weights were used in the construction of strata estimates. However, because they were set equal for each question within each stratum, the same proportion estimates could have been achieved without using them. Finally, note that although equal sample unit weighting was used within each question and stratum combination, sample unit weights could and did vary across question and stratum combinations.
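
A sketch of the weight calculation just described, with illustrative values:

```python
def sample_unit_weight(stratum_population, usable_respondents):
    """Within-stratum sample unit weight for one question: the stratum
    establishment (or employment) population divided by the number of
    usable respondents to that question."""
    return stratum_population / usable_respondents

# 1,000 establishments in the stratum; 40 usable respondents to the question.
print(sample_unit_weight(1_000, 40))  # 25.0
```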