Earlier Studies

For several decades, Jewish communal planners have been using population surveys to estimate and understand characteristics of Jews in the United States. These surveys have typically been conducted by telephone and have used some variation of a dual-frame sampling design, combining lists of known Jews in the population with randomly selected telephone numbers from the general population to ensure more complete coverage of the Jewish population, including those who are not connected to the organized Jewish community. The random sampling component of this design, referred to as random digit dialing (RDD), was developed by Warren Mitofsky and Joe Waksberg (1978; Brick and Tucker 2007) in the late 1970s and was quickly adopted for Jewish community studies such as those in Los Angeles (1979) and Washington DC (1983). It has remained the primary sampling method for these studies until now. While it is true that for many decades, “Jewish studies took advantage of the latest state-of-the-art survey research practices” (Dutwin 2016), that has not been true for the past 10 years, with Jewish studies slow to adopt the current best practice for collecting representative data, address-based sampling (ABS).

As the use of cell phones has become ubiquitous over the past decade, and as households have abandoned landlines in favor of cell phones, collecting data by telephone has become more difficult and more costly. The percentage of households with access only through a cellular number skyrocketed from about 3 percent in the early 2000s to 57 percent in late 2018 (Blumberg and Luke 2019). In addition, the development of new technologies to identify and/or block incoming calls has led to increasing reluctance to answer calls from unrecognized numbers, contributing to a steady drop in telephone response rates (Dutwin et al. 2018).

At the same time as telephone response rates were declining, new sampling frames providing excellent coverage of U.S. addresses became available to survey researchers; these are typically based on the U.S. Postal Service Computerized Delivery Sequence File (Harter et al. 2016). These frames, known as ABS frames, allow for samples to be drawn based on addresses that correspond to household units, rather than on phone numbers that are only loosely tied to a specific geography. With an address-based sample, the mode of data collection can be paper (through a mailing), web (by mailing respondents a letter to go to a website to respond to a survey), phone (by matching phone numbers to the address), in-person (by having an interviewer visit the address), or any combination of these modalities.

Since the advent of ABS, the trend in household surveys has been to reduce the role of RDD and increase the use of address-based samples (Battaglia et al. 2016), often encouraging respondents to answer via a web instrument when possible.

Concurrent with massive recent shifts in sampling frames from RDD to ABS, there has been a sociological shift that affects the coverage of the other portion of the dual-frame design: obtaining lists of Jewish households. Americans in general, and Jews in particular, have become less attached to religious organizations, with increasing numbers identifying as having no religion or having multiple religions. (Pew [2019] reported that “people who describe their religious identity as atheist, agnostic or ‘nothing in particular,’ now stands at 26%, up from 17% in 2009.”) As formal affiliations decline, the traditional method of sampling known Jews from organizational lists has become more limited.

In 2016, Contemporary Jewry organized a special issue on the future of collecting data on the Jewish community. Multiple contributors (Dutwin 2016; Marker 2016; Sheskin 2016) pointed out the limitations of continuing to use RDD methodology and urged change. A number of alternatives were suggested. Levine and Dranoff (2016) suggested opt-in internet panels, but Dutwin (2016) pointed out, “there is no way to self-generate any reasonable estimate of the Jewish population from such an approach.” Phillips (2016) suggested respondent-driven sampling, but an attempt to implement that in Denver, Colorado in 2019 was not successful due to difficulty reaching out to initial contacts.

Aronson et al. (2016) discussed using a nonprobability method that estimates the local population by extrapolating from national models, using little or no local survey responses. They incorporate data from many respondents to national surveys, but restrict them to those who say their religion is Judaism. Thus they exclude Jews who do not identify religiously with Judaism, an ever-growing and important component of the Jewish population (Pew 2021, 2013). Instead, they model their estimates of the total Jewish population based on the relationship observed in the 2013 Pew study of the Jewish community.

Their data are on average 4 years out of date (they combine surveys from the preceding 7 years), thus missing recent demographic changes. Their key assumption is that there is a constant relationship between the number of Jews who identify as Jewish by religion (JBR) in national surveys and the number of Jews who do not identify by religion (JNR) in each local community. For example, for the 2020 Baltimore Jewish Community Study, they assumed 20 percent of Jewish adults were JNR, based on the 2013 Pew study (Boxer 2020). This constant relationship is demonstrably not true even for nearby counties; for example, in the five Philadelphia area counties covered by the 2019 Jewish Population Study of Greater Philadelphia (Marker and Steiger 2020), the proportion of JBR is shown in Table 1. In two of the counties, almost half of Jewish adults do not identify as Jewish by religion, while in adjacent Montgomery County, just over a quarter identify that way. (The recent Pew [2021] study found 27 percent nationally not identifying as Jewish by religion, although the definitions used in each study are not exactly the same.) Assuming a constant ratio would either undercount Jews in Chester and Delaware counties or over-count them in Montgomery County. This key assumption of a constant ratio is handled differently in each study, without any guidance on how wrong it might be. The result is total population estimates of unknown accuracy, based on little or no current local data.
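The sensitivity of this extrapolation to the assumed JNR share can be made concrete with a minimal sketch. The counts below are purely illustrative, not taken from any of the studies discussed:

```python
def total_jewish_adults(jbr_count, jnr_share):
    """Extrapolate total Jewish adults from a Jewish-by-religion (JBR) count,
    assuming a fixed share of Jews who are Jewish not by religion (JNR)."""
    return jbr_count / (1.0 - jnr_share)

# Hypothetical JBR count for a single county
jbr = 10_000

# Applying the national 2013 Pew share (20% JNR) ...
est_pew_2013 = total_jewish_adults(jbr, 0.20)   # 12,500 total Jewish adults

# ... versus a county where nearly half of Jewish adults are JNR
est_high_jnr = total_jewish_adults(jbr, 0.45)   # about 18,182

# Relative undercount when the national ratio is wrongly assumed
undercount = 1 - est_pew_2013 / est_high_jnr    # about 31 percent too low
```

The same arithmetic run in the opposite direction (a county with fewer JNR than the national share) produces an over-count, which is the pattern described above for Montgomery County versus Chester and Delaware counties.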

Table 1 Jewish adults in Greater Philadelphia identifying as Jewish not by religion

The approach used by Aronson et al. typically includes a large number of respondents, but almost all are from local Jewish lists (the exception being those with distinctive Jewish names [DJNs]). As Dutwin (2016) commented, “[T]here is nothing wrong with these designs, if the goal of the research is to understand affiliated Jews. On the other hand, no type of design is more dangerous in leading to errant results if one desires to make claims about all Jews in a given area.” Unfortunately, despite these shortcomings, many federations have employed these techniques due to their relatively low cost.

In the same issue of Contemporary Jewry, Marker (2016) proposed using ABS. This methodology gives all Jewish households a chance to be included in the study, avoiding the underrepresentation of the non-affiliated and those who moved to the area while retaining out-of-area telephone numbers that undermine the accuracy of the model-based or RDD-based studies described above. (It is possible to buy lists of cell phone numbers based on usage at certain cell phone towers, but we are not aware of any Jewish studies using this approach.) ABS also provides the opportunity for geographic targeting of surveys to a level never before possible. Jewish communities typically include a few neighborhoods with a high density of Jews, but the boundaries for such neighborhoods are not respected by telephone numbers or other sources of lists. With ABS, we know the location not just of responding households, but also of nonresponders, allowing for improved accuracy for identifying differences between Jews living in different neighborhoods.

Using Address-Based Sampling (ABS) to Survey the Jewish Community

ABS has now been used successfully for two Jewish population surveys, the 2019 survey of Greater Philadelphia and the 2020 national Jewish survey conducted by the Pew Research Center. Pew (2021) explained their rationale for switching to ABS as follows: “By 2020, however, response rates to telephone surveys had declined so precipitously that random-digit-dialing by telephone was no longer the best way to conduct a large, nationwide survey of a small subgroup of the U.S. public.” (New York City’s Jewish population study is also using ABS, but has been delayed as of this writing due to the coronavirus pandemic.) The data collection for both the Philadelphia and Pew studies were conducted by Westat. The remainder of this paper describes the successful application of this methodology, and identifies lessons learned for future studies.

2019 Greater Philadelphia Jewish Community Portrait

The Jewish Federation of Greater Philadelphia (JFGP) wanted to estimate the size of the Jewish population in the five-county Greater Philadelphia area (Philadelphia, Bucks, Chester, Delaware, and Montgomery counties). The goal was to describe a wide range of characteristics of Jewish residents, overall, for each county, and for a set of eight local communities (referred to as Kehillot). Figure 1 shows the boundaries of the target area. The gray areas in the western suburbs were assumed (based on JFGP contacts in each county) to have few if any Jews and thus were excluded from the survey.

Fig. 1 Boundaries of five counties (blue lines) and eight communities (Kehillot, each in a separate color)

Beyond the usual difficulties of conducting a population study, surveying the Jewish community brings additional complications. The Census Bureau, which is the source of demographic data often used in designing survey samples, does not collect information about religion in any of its surveys. Many Jews do not identify as religiously Jewish, but rather as culturally or ethnically Jewish, so may not answer “Jewish” to a question of “What religion are you?” Further, many Jews do not connect to any local Jewish organizations, so they are not likely to be found on lists of likely Jews from synagogues, Jewish community centers, or other organizations.

Previous Philadelphia Study

The previous study of the Greater Philadelphia area Jewish community was conducted in 2009. An RDD sample of landline telephone numbers was selected and interviews were conducted using computer-assisted telephone interviewing (CATI). (Earlier studies in 1984 and 1997 had also used RDD with CATI.) This was supplemented by lists obtained from local Jewish organizations. Due to changing telephone usage patterns, the use of a landline-only RDD survey in 2009 excluded the households that either did not have a landline telephone number or had a landline number but received all or almost all calls on cell phone numbers (e.g., only used their landline for a fax machine), estimated nationally to be 41 percent of households (Blumberg and Luke 2009). The excluded percentage was even larger for select subpopulations, for example, younger adults.

2019 Philadelphia Study

Collecting data by telephone has become more difficult and more costly as the population has transitioned from landline telephone service toward primarily or exclusively cell phone service. As stated previously, new technologies to identify and/or block incoming calls, together with respondents having less “free time” to answer surveys, have fueled the need to change the approach to collecting population-level data (Olson et al. 2019).


Sample Design. The 2019 Philadelphia study combined ABS with 50 lists from local Jewish organizations. The ABS frame for this study, which is based on the U.S. Postal Service’s Computerized Delivery Sequence file and is maintained by Marketing Systems Group (MSG), consisted of the set of all residential addresses in a list of ZIP codes that were identified by JFGP as likely having at least some Jewish population. Based on this knowledge from JFGP, the frame provided almost complete coverage of Jewish persons living in households in the five-county area.

The 50 lists were deduplicated and matched to the ABS frame to partition it into separate strata for low eligibility list addresses, high eligibility list addresses, and non-list addresses. The low eligibility lists were either of college students (more likely to have moved) or a purchased list of “likely Jewish households” from a market research firm, where the proportion of households eligible to participate was expected to be low. The high eligibility lists came from organizations such as synagogues, Jewish community centers, and Jewish social services providers. Additionally, the sample was stratified by the eight Kehillot to facilitate geographic estimation. Out of 1.6 million residential addresses in the eligible ZIP codes, 3 percent were placed in the high eligibility stratum (where the proportion of households eligible to participate was expected to be high) and another 5 percent in the low eligibility stratum. A key to unbiased estimation is that respondents in each stratum were only weighted up to represent others in that stratum, so for example, Jews who are well connected to the community and thus appear on at least one high eligibility stratum list are only weighted up to represent others also found on such lists. Thus with ABS, the responses from these list strata are weighted up to only represent 8 percent (3% + 5%) of all households. The rest are represented by respondents in the non-list (generally less affiliated) stratum. This represents a major improvement over the approach used by Aronson as described earlier, where households on Jewish lists were assumed to represent all Jews, including less affiliated Jews.
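The partition of the frame into list-based strata can be sketched as follows. This is a simplified illustration with toy addresses; the actual matching of 1.6 million addresses involved deduplication across 50 lists:

```python
def assign_stratum(address, high_list, low_list):
    """Partition the ABS frame: an address matched to any high-eligibility
    list goes to the 'high' stratum; otherwise, a low-eligibility list match
    goes to 'low'; everything else stays in the residual non-list stratum."""
    if address in high_list:
        return "high"
    if address in low_list:
        return "low"
    return "non_list"

# Toy frame of 10 addresses (hypothetical identifiers)
frame = [f"addr{i}" for i in range(10)]
high = {"addr0"}                  # e.g., deduplicated synagogue/JCC lists
low = {"addr1", "addr2"}          # e.g., purchased "likely Jewish" list
strata = {a: assign_stratum(a, high, low) for a in frame}

# Because weighting is done within stratum, respondents found on the
# high-eligibility lists only ever represent other list addresses,
# never the (generally less affiliated) non-list population.
```

The key design property is in the final comment: the stratum label travels with each address through sampling and weighting, so no list respondent's answers are projected onto the non-list population.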

One caveat of the ABS approach is that the sample represents the household population but does not include Jewish adults who are living in nursing homes, military barracks, and other institutional housing, nor does it include homeless Jews. This caveat applies to other sample designs as well. Those living in non-institutional residential settings, however, including most assisted-living facilities and non-barracks housing on military bases, were eligible for inclusion.

In addition to the major improvement in coverage of the target population, ABS provided additional improvements over the previous design. ABS allows for specific, accurate targeting of geographic areas of interest. Each ZIP code was connected to a specific Kehillot, allowing us to ensure a specific sample size was allocated to each. Each address is associated with a specific county. Not only does this ensure a sufficient number of completed cases in each target geography, but it also facilitates using area-level characteristics in statistical adjustments aimed at reducing potential nonresponse bias. With RDD, the characteristics available for nonresponse adjustments are aggregate characteristics for large geographic areas (typically, the primary ZIP codes associated with the telephone exchange). With ABS, we know the county, Kehillot, and (through geocoding) the census tract in which every nonresponding address is located, and from the American Community Survey (ACS) we can obtain characteristics of their census tract (e.g., percent renters, average income). This allows for the use of area-level characteristics for the particular area in which the address is located, resulting in nonresponse adjustments that are likely to be more effective in reducing bias in survey estimates.
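One common way to use such area-level covariates is a weighting-class nonresponse adjustment, in which respondents absorb the weight of nonrespondents in the same cell. The sketch below is a minimal illustration with hypothetical data and a single binary tract characteristic (a high/low-renter flag of the kind obtainable from the ACS); production adjustments use richer cells or response-propensity models:

```python
# Each record: (base_weight, tract_is_high_renter, responded)
# All values are hypothetical.
addresses = [
    (10.0, True,  True),
    (10.0, True,  False),
    (10.0, False, True),
    (10.0, False, True),
    (10.0, False, False),
]

def nonresponse_adjusted_weights(addresses):
    """Within each tract-characteristic cell, inflate respondent weights by
    (total base weight) / (respondent base weight), so the weighted total of
    the cell is preserved after nonrespondents are dropped."""
    adjusted = []
    for flag in (True, False):
        cell = [a for a in addresses if a[1] == flag]
        total_w = sum(a[0] for a in cell)
        resp_w = sum(a[0] for a in cell if a[2])
        factor = total_w / resp_w
        adjusted += [a[0] * factor for a in cell if a[2]]
    return adjusted

weights = nonresponse_adjusted_weights(addresses)
# The adjusted respondent weights still sum to the full frame total (50.0),
# but differentially by cell, reflecting each cell's response rate.
```

Because ABS supplies the tract for every nonresponding address, these cells can be formed for the full sample, which is exactly what RDD's coarse exchange-level geography cannot support.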


Data Collection. Data collection occurred from late January through July of 2019. Each sampled address was mailed an initial invitation to complete the screener via the web, with a unique ID and PIN for each address. The mailing contained a $1 cash incentive to encourage participation. Households could request a hard copy screener survey but were encouraged to use the web. Follow-up postcards encouraged web response, followed by a paper copy of the screener mailed to remaining nonrespondents. As a result, 60 percent of screener respondents, including many older adults, chose to respond via the web.

In each letter and postcard, a toll-free telephone number and email address were provided for anyone with a question about the survey. This number was also available if someone preferred to answer the questionnaire over the telephone. Telephone data collection was only used for a few interviews (both English and Russian), but the option was provided to expand the potential methods of completing the survey.

The screener and the survey were offered in both English and Russian. A number of Jewish families from Russia and the former Soviet Union have moved to the Greater Philadelphia area over the last 40 years and many still speak only Russian at home. This necessitated offering a Russian language survey alternative to encourage this group to participate. Both the paper copy and web instruments were offered in the two languages.

Data were collected through a two-phase design, where the screener was used to determine whether any adult in the household was Jewish; if so, the household was eligible to complete the main questionnaire. Respondents were promised a $10 incentive upon completion. Web respondents to the screener continued seamlessly into the main questionnaire, while eligible paper screener respondents were mailed a paper copy of the main questionnaire, but were still offered the option of answering on the web. If the paper screener indicated that the only residents were ages 65 or older, then a tailored version of the paper questionnaire was sent with a larger font size to improve readability, eliminating all questions related to children in the household. Eighty percent of main survey respondents chose to respond via the web.


Who is Jewish? In the 2009 study, two questions were used to identify Jewish households:

  • “Is there anyone in the household who considers himself or herself to be Jewish?”

  • If no, then the screener respondent was asked whether either of their parents was Jewish, and if so, the household was classified as Jewish.

Research in Jewish demography (Pew 2021, 2013; Charme et al. 2008; Horowitz 1998) has revealed that there are multiple ways in which some people consider themselves Jewish, and that this determination may change over time. As a result, for 2019 we used a more detailed set of questions to determine whether the household qualified as Jewish. Respondents to the household screener were asked:

  • “What religion are you?”

  • (If not Jewish) “Are you Jewish by religion?” “Are you Jewish by ethnicity or heritage?” “Are you Jewish by culture?”

  • (If no to all of the above questions) “Were you raised Jewish or did you have a Jewish parent?”

  • (If no to the above question) “Does any other adult in the household consider himself/herself Jewish by religion/ethnicity or heritage/culture or had a Jewish parent?”

Jewish households were defined as those in which the respondent self-identified as Jewish (religiously, ethnically, or culturally), or had a Jewish parent or was raised Jewish and did not have another religion (excluding Messianic Jews); or in which a spouse, partner, or other adult identified as Jewish in any of these ways (religiously, ethnically, or culturally, or had a Jewish parent and no other religion).

These questions do not necessarily reflect an expansion of the definition of Jewishness relative to 2009, but they do clarify the many ways in which a respondent might identify. We believe this provides a clearer path to identifying all those who are Jewish. Also, by providing a neutrally worded initial question, we believe we encouraged those of all religions to complete the screener in a way that the 2009 questions did not.

Screening for those who were raised Jewish or had a Jewish parent, but who do not currently identify as Jewish (or with any other religion), does provide a more inclusive definition of Jewishness. By examining the responses of those who qualified through this last question, we found that they resemble many others who identified as Jewish via the first three questions. For example, they light Chanukah candles (21%), attend Passover Seders (13%), and attend High Holiday services (7%). While this group represented only one to two percent of Jews, they behave similarly to some other Jews, and we found them worthwhile to include in estimates of the Greater Philadelphia Jewish population.
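The household classification described above can be expressed as a short decision rule. The sketch below is illustrative only; the field names are hypothetical and the production instrument implemented this logic through skip patterns rather than a single function:

```python
def household_is_jewish(r):
    """Classify a screener response (a dict of hypothetical field names)
    according to the 2019 definition of a Jewish household."""
    if r.get("messianic"):
        return False  # Messianic Jews were excluded by definition
    self_identifies = (
        r.get("religion") == "Jewish"
        or r.get("jewish_by_religion")
        or r.get("jewish_by_ethnicity")
        or r.get("jewish_by_culture")
    )
    raised_or_parent = (
        (r.get("raised_jewish") or r.get("jewish_parent"))
        and not r.get("other_religion")
    )
    other_adult = r.get("other_adult_jewish", False)
    return bool(self_identifies or raised_or_parent or other_adult)
```

For example, a respondent whose religion is not Judaism but who identifies as culturally Jewish qualifies, as does one with no religion who was raised Jewish; a respondent raised Jewish who now has another religion does not.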


Collecting Data in 2019. Collecting data on the Greater Philadelphia Jewish population in early 2019 posed several challenges. The Tree of Life synagogue attack in Pittsburgh, PA took place October 27, 2018. Although we were collecting data in Philadelphia, not Pittsburgh, there are close ties between the two communities. During the data collection period (April 27, 2019), the Poway synagogue attack in California occurred, further putting the American Jewish community on edge.

Throughout the data collection period there were almost daily news items about Russian interference with the 2016 election, the Trump administration’s close ties with Russia, and possible impeachment. With the household screener offered in English or Russian, we encountered some suspicion and pushback from sampled households. For example, one sampled household suggested that we were “doing just what the Nazis did.” Another household worried that “given the results of the 2016 election, maybe Russian was going to be the second official language of America.”


Response Rates. A total of 79,486 addresses were sampled, from which 10,787 households completed the screener. There were 2634 screeners identifying eligible households (at least one Jewish adult), of which 2119 completed the main survey. Table 2 provides detailed response rates overall and by stratum.

Table 2 Response rates by stratum and overall

Overall, 12.2 percent of households completed the screener, with 78.6 percent of those eligible completing the main survey questionnaire. These response rates were similar for both the non-list ABS and low eligibility list strata. The high eligibility list stratum, however, had substantially higher participation rates, with 25.6 percent completing the screener, and 82.8 percent the main instrument. While the non-list ABS and low eligibility list strata had similar response rates, that is not true for eligibility; while only an estimated 12 percent of households in the non-list ABS stratum were determined to be Jewish households, 47 percent of those in the low eligibility list stratum were Jewish households, as were 87 percent of those in the high eligibility list stratum.
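As a quick check on the counts above, the crude completion ratios can be computed directly. Note that these unadjusted ratios differ somewhat from the reported rates, which follow standard (AAPOR-style) conventions that account for factors such as undeliverable addresses, estimated eligibility, and weighting that a raw division ignores:

```python
sampled = 79_486
screeners_completed = 10_787
eligible = 2_634
mains_completed = 2_119

raw_screener_rate = screeners_completed / sampled   # about 13.6% unadjusted
raw_main_rate = mains_completed / eligible          # about 80.4% unadjusted

# The published figures (12.2% screener, 78.6% main) reflect the survey's
# formal response-rate definitions, not these crude ratios.
```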

Studies have demonstrated that a well-designed ABS study attains response rates higher than those using RDD methods (Olson et al. 2019; Montaquila et al. 2013), especially when one considers that the non-list ABS frame lacks those who are most likely to respond positively to the named sponsor. In addition, knowing the location of ABS nonrespondents allows for adjustments in the weighting process that reduce a major source of potential bias in telephone-based data collection.


Estimating the Size of the Jewish Population. Each respondent was weighted with a final weight that accounts for the probability of selection, along with adjustments for eligibility and nonresponse. The weights were adjusted to county-level controls derived from the American Community Survey. The resulting estimates are much more accurate than those from earlier methods. Responses from Jewish respondents on high eligibility lists are only weighted up to reflect others on those lists. The same applies to respondents from the low eligibility lists. Finally, respondents from the non-list stratum cover Jews who are not found on any of the lists. In this way we ensure that responses from highly affiliated Jews are not used to represent those who are not affiliated with the Jewish community. In addition, since the location for all nonrespondents is also known, any differential response rates across geographies are controlled for in the weighting process, reducing the potential for bias in the estimates.
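The final step, adjusting weights to county-level control totals, can be sketched as a simple post-stratification ratio adjustment. The numbers below are hypothetical; the production weighting also included the selection-probability and nonresponse components described above:

```python
def poststratify(weights, county, controls):
    """Ratio-adjust weights so that weighted totals match external
    county-level control totals (e.g., household counts from the ACS)."""
    totals = {}
    for w, c in zip(weights, county):
        totals[c] = totals.get(c, 0.0) + w
    factors = {c: controls[c] / totals[c] for c in controls}
    return [w * factors[c] for w, c in zip(weights, county)]

# Hypothetical respondent weights and county assignments
w = [10.0, 10.0, 20.0, 20.0]
county = ["Bucks", "Bucks", "Chester", "Chester"]
controls = {"Bucks": 30.0, "Chester": 30.0}  # hypothetical ACS totals

adj = poststratify(w, county, controls)
# After adjustment, the weighted total in each county equals its control.
```

Because every respondent (and nonrespondent) carries a known county, this adjustment also corrects for any differential response rates across counties, as noted above.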

The methodological improvements in the 2019 survey approach contributed to significantly larger estimates of Jewish population in Greater Philadelphia relative to 2009. Table 3 shows the estimates from the 2009 survey, the 2019 estimates using a comparable definition of Jewishness (i.e., without the last question about being raised Jewish or having Jewish parents), the percent change, the best estimate for 2019 (including the more expansive definition of Jewish households), and a 95 percent confidence interval on that estimate.

Table 3 Jewish population estimates

As mentioned previously, the addition of the screening question on being raised Jewish or having a Jewish parent did not have a large impact on the estimated size of the Jewish population. The number of Jewish households in 2019 is over 60 percent larger, and the number of people in such households and the number of Jewish adults are both over 70 percent larger, than the 2009 estimates. During this same time period, the overall population of the five-county area increased by only 3.5 percent. While it is possible that the Jewish population has been increasing faster than the general population, such growth is unlikely to be the only explanation for the large difference since 2009. Given the lack of evidence of massive recent growth, we attribute most of this change to the improved methodology. Leadership in the Philadelphia Jewish community agreed that growth over the previous decade was not responsible for the change; rather, the 2019 study captured parts of the community not previously included.

2019–2020 Jewish American and Black American Religion Studies, conducted for the Pew Research Center

The Pew Research Center was interested in conducting two separate surveys that required screening for eligible respondents: one of Jewish Americans, and one of Black Americans. A sample design was developed to reach potentially eligible respondents of either survey with the same mailing protocol and the same screening survey. Those completing the screener who were identified as eligible for either of the two main questionnaires were then invited to complete the relevant main questionnaire. Respondents eligible for both main questionnaires were invited to the Jewish religion survey 80 percent of the time, and to the Black religion survey 20 percent of the time. Such sharing of screening costs was proposed in Marker (2016), which suggested federations team with the local archdiocese (or other religious organization) to split the cost of screening.
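The 80/20 routing of dual-eligible respondents amounts to a simple randomized assignment, sketched below with an arbitrary fixed seed for reproducibility (the actual implementation details are not documented here):

```python
import random

def assign_dual_eligible(rng):
    """Route a respondent eligible for both main questionnaires: to the
    Jewish religion survey with probability 0.8, otherwise to the
    Black religion survey."""
    return "jewish" if rng.random() < 0.8 else "black"

rng = random.Random(2019)  # fixed seed so the simulation is repeatable
draws = [assign_dual_eligible(rng) for _ in range(10_000)]
share_jewish = draws.count("jewish") / len(draws)  # close to 0.8
```

In estimation, this subsampling is undone in the weights: dual-eligible respondents routed to one survey carry an extra factor (1/0.8 or 1/0.2) so that each survey still represents the full dual-eligible group.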

The screener survey was conducted in three languages: English, Russian, and Spanish. Selected addresses were sent mail invitations for the screener survey in English and, in some cases, a second language based on demographic characteristics of the area or information appended to the sampling frame. The percentage receiving English and Spanish materials was 18 percent, and the percentage receiving English and Russian materials was 5 percent; the remaining 77 percent received the screener only in English. Regardless of the letter received, respondents could choose any of the three languages on the website to complete the screener survey. The main questionnaire for the Jewish Religion survey was available in English and Russian, and respondents were sent letters and surveys for the main questionnaire in the language in which they had completed the screener. The small number of eligible respondents who completed the screener in Spanish were sent the main questionnaire in English (only the Black American Religion study main questionnaire was available in Spanish).

Even with screening interview sharing and the stratification of the sampling frame, the assignment of differential sampling rates to the strata was a critical design component because of the rareness of the eligible populations. Because the Jewish population was much rarer than the Black population (approximately one-sixth the size) and the number of completes planned for the Jewish religion survey was larger than for the Black religion survey, the stratification mainly focused on identifying areas with a high density of people who are Jewish. Since the U.S. government does not collect data that classifies people by religion, two alternative sources were used to identify these areas.

The first source for identifying areas with a higher density of people who are Jewish was a file made available to Pew Research Center from Brandeis University, which provided pre-release data for this purpose (Pre-Release Estimates, July 2019). The available tables were at the county or county-group level and had estimates of both the total number of adults and the proportion of adults who identified themselves as Jewish by religion for each county or county-group. The second source was data from surveys conducted by Pew Research Center back to 2013 that contained the respondent’s religious affiliation, sampling weights, and the respondent’s ZIP code; this information was used to summarize the data to produce estimates of the proportion of Jewish adults at the ZIP code level. These data were combined with Census Bureau population counts at the ZIP Code Tabulation Areas (ZCTAs) level in order to develop a stratified sampling plan.

First, two strata were developed:

  • Stratum 1—ZCTAs with a higher proportion of Jewish population, within counties where the estimated proportion of people who are Jewish exceeded 3 percent.

  • Stratum 2—Counties or county-groups where the proportion of people who are Jewish was less than 3 percent.

The last step in the stratification further stratified the areas within each of those strata. For stratum 1, substratum boundaries were determined by sorting ZCTAs by the estimated percent Jewish from the Pew surveys and grouping the ZCTAs to form two substrata based on the square root of the ZCTA-level estimate of the number of people who are Jewish. For stratum 2, the substratum boundaries were based on the estimated number of people who are Jewish in the county or county-group.
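Splitting sorted ZCTAs into substrata on the square root of the estimated Jewish count is in the spirit of the classic cumulative-root rule for stratum boundaries. The sketch below is a simplified two-substratum version with toy counts, not the actual boundary algorithm used:

```python
import math

def two_substrata(zcta_counts):
    """Split ZCTAs (already sorted by estimated percent Jewish) into two
    substrata at the midpoint of the cumulative square roots of their
    estimated Jewish counts. A simplified, illustrative boundary rule."""
    roots = [math.sqrt(n) for n in zcta_counts]
    half = sum(roots) / 2.0
    cum, cut = 0.0, len(zcta_counts)
    for i, r in enumerate(roots):
        cum += r
        if cum >= half:
            cut = i + 1
            break
    return zcta_counts[:cut], zcta_counts[cut:]

# Toy ZCTA counts, sorted by estimated density (hypothetical values)
dense, sparse = two_substrata([900, 400, 250, 100, 64, 36, 25, 16, 9, 4])
```

Using square roots rather than raw counts keeps a handful of very dense ZCTAs from dominating a single substratum, which in turn limits the variation in sampling weights across substrata.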

We examined the expected design effect of the weights due to differential sampling for several scenarios with different sizes and numbers of substrata. The goal of the analysis was to identify the sample allocation, number, and size of strata that minimized both the design effect of the weights and the sample to be drawn, while meeting the goal for the number of completed Jewish main questionnaires, taking into account the assumed Jewish adult eligibility rate and both screener and main questionnaire response rates. The analysis showed that splitting each of the strata (where stratum 1 had already been split into two substrata) into three substrata achieved a low design effect without a large number of strata, and met the goal for the number of completed surveys based on the assumed response rates and the percentage of addresses of responding households that would be eligible for the Jewish and Black Religion surveys.
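The design effect due to unequal weights in such scenario analyses is commonly approximated with Kish's formula, which equals one plus the relative variance of the weights. A minimal sketch:

```python
def kish_design_effect(weights):
    """Kish's approximate design effect from unequal weighting:
    deff = n * sum(w^2) / (sum(w))^2 = 1 + relvariance(weights)."""
    n = len(weights)
    s = sum(weights)
    return n * sum(w * w for w in weights) / (s * s)

# Equal weights carry no penalty ...
equal = kish_design_effect([2.0] * 100)                 # exactly 1.0

# ... while a two-to-one split of differentially sampled cases does
unequal = kish_design_effect([1.0] * 50 + [3.0] * 50)   # 1.25
```

In the scenario comparisons described above, each candidate stratification implies a set of sampling weights, and this statistic summarizes how much effective sample size that candidate would sacrifice.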

The final stratum indicator is denoted CSTRATA. Table 4 shows the nine CSTRATA (three strata by three substrata) and some of the characteristics estimated for each CSTRATA. By definition, CSTRATA 3–1, 3–2, and 3–3 are counties with the proportion of Jewish adults by religion less than 0.03. The ZCTAs in CSTRATA = 1–1 and CSTRATA = 1–2 had the highest density of people who are Jewish.

Table 4 Sampling stratum definition

The last step in preparation for sampling was to translate the geographic areas defined by ZCTAs in CSTRATA = 1–1 to 2–3 into ZIP codes (within counties) for the creation of CSTRATA in the ABS frame. The geographic translation ensured that each ZIP code within a county was assigned to only one stratum. There was no need to use ZIP codes or ZCTAs in CSTRATA = 3–1, 3–2, and 3–3 because these strata were defined at the county-group level.

Definition of Jews

The screening survey included, amid other content, three questions that could make a respondent eligible for the main Jewish religion survey. When completing the screening survey on the web, once one of the three questions was answered in an eligible way, the remaining eligibility questions were not asked. When completing the screening survey on paper, responses to all three questions were obtained.

The first eligibility question asked was, “What is your present religion, if any?” Response options were, “Protestant (for example, Baptist, Methodist, Nondenominational, Lutheran, Presbyterian, Pentecostal, Episcopalian, Reformed, Church of Christ, etc.), Roman Catholic, Mormon (Church of Jesus Christ of Latter-day Saints or LDS), Jehovah's Witness, Jewish, Muslim, Buddhist, Hindu, Atheist, Agnostic, Something else, specify: [text box], Nothing in particular.” Anyone selecting the “Jewish” response was considered eligible. Additionally, any written responses in the text box were reviewed for indication of Jewish religion.

The second eligibility question asked was, “ASIDE from religion, do you consider yourself to be any of the following in any way (for example, ethnically, culturally or because of your family’s background)?” For each of four religious traditions (Jewish, Catholic, Mormon, Muslim), the two response options were, “Yes, consider myself this” and “No, do not consider myself this.” Respondents selecting “Yes, consider myself this” for Jewish were considered eligible.

The third eligibility question asked was, “Please indicate whether you were raised in any of the following traditions or had a parent from any of the following backgrounds.” For each of four religious traditions (Jewish, Catholic, Mormon, Muslim), the two response options were, “Yes, was raised in this tradition or had a parent from this background” and “No, was not raised in this tradition and did not have a parent from this background.” Respondents selecting “Yes, was raised in this tradition or had a parent from this background” for Jewish were considered eligible.

Response/Cooperation Rates

Screening survey

Recruitment for the screening survey followed a four-mailing sequence. The sample was divided into two replicates, with adjustments made to the quantities in the second replicate to meet the overall recruitment goals of the survey. All materials were identified as being sent from Pew Research Center.

Mailing 1—An invitation to complete the survey online, with a $2 pre-incentive. This was sent in a #10 envelope for the first replicate and a 9 × 12 envelope for the second replicate. The invitation described the content of the short screening survey and did not include any references to race or religion, or any mention of possible eligibility for additional surveys.

Mailing 2—A postcard reminder to complete the survey online.

Mailing 3—A second invitation letter to complete the survey online, sent to nonrespondents. For the first replicate, this was sent in a 9 × 12 envelope and included an additional $1 incentive, owing to concerns that some households in this replicate may not have opened the first mailing, which was sent in a smaller envelope and arrived in the week before Thanksgiving.

Mailing 4—This mailing contained a paper copy of the screening survey, and was sent to all nonrespondents. It included a postage-paid envelope to return the completed survey.

Table 5 below shows the unweighted number of completed screeners, the number of known eligible-address nonresponse cases such as refusals, the number of unknown-eligibility cases as a result of no response, and the number of known ineligible address cases such as undeliverable mail.

Table 5 Unweighted count of summarized disposition codes for sampled addresses

There are alternative AAPOR-approved ways to estimate screener response rates, based on different assumptions about the proportion of the unknown-eligibility group that is eligible. The most conservative (RR1) yields a 19.2 percent response rate, while the more common RR3 yields a 25.7 percent response rate. The preferred estimate, an RR3 variant based on experience from multiple sample surveys indicating that roughly 13 percent of addresses nationally are vacant or not residential, yields a 20.3 percent response rate.
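The AAPOR formulas differ only in the fraction e of unknown-eligibility cases assumed eligible: RR1 sets e = 1, while RR3 uses an estimated e (here, e = 0.87 would correspond to 13 percent of addresses being vacant or nonresidential). A simplified sketch with hypothetical disposition counts, not the actual Table 5 figures, and with the known-eligible nonresponse categories lumped together:

```python
def aapor_rr(completes, eligible_nonresponse, unknown, e=1.0):
    """Screener response rate: RR1 when e = 1 (every unknown-eligibility
    case treated as eligible); RR3 when e is an estimated eligible share."""
    return completes / (completes + eligible_nonresponse + e * unknown)

# Hypothetical disposition counts, NOT the study's actual numbers
I, R, U = 19_000, 6_000, 60_000  # completes, eligible nonresponse, unknown eligibility
print(round(aapor_rr(I, R, U), 3))          # RR1 (most conservative): 0.224
print(round(aapor_rr(I, R, U, e=0.87), 3))  # RR3 with 13% assumed ineligible: 0.246
```

Because the unknown-eligibility group is large in mail screeners, the choice of e moves the reported rate by several points, which is exactly the spread seen between the 19.2, 20.3, and 25.7 percent figures above.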

Main questionnaire

Respondents eligible for a main questionnaire were asked to continue directly if completing by web, or were mailed the relevant paper main questionnaire if they had completed the screener on paper. The invitation to the main questionnaire provided more information about the topics of the questionnaire and promised an additional incentive upon completion: $10 or $20 (determined experimentally) for completing the web survey, and $50 for completing a paper copy of the survey.

Respondents who broke off while completing the web main questionnaire were sent a letter reminding them of their web login information. For both web and paper screener completes, eligible nonrespondents were sent a final copy of the relevant paper main questionnaire by FedEx.

A total of 7216 screening survey respondents were found to be eligible for the Jewish religion survey. Of these, 5944 completed the Jewish religion survey, a conditional response rate (AAPOR RR5) of 82.4 percent.

The overall response rate is the product of the screener response rate and the conditional response rate for that survey. Using the preferred version of the RR3 unweighted screener response rate, the overall response rate is 16.7 percent (20.3% × 82.4%). (The more common version of RR3 yields an overall response rate of 21.2 percent.)

Response rates by demographic groups

Response rates differed across demographic groups. Screener response rates were largely consistent across subgroups, but were somewhat higher in the Northeast (25.5%) and lowest for addresses mailed materials in Spanish (13.6%) or in areas where the proportion of Orthodox Jews was highest (14.8%). Similarly, main survey response rates were quite consistent, with somewhat lower rates in the counties with the lowest number of Jews (71.2%). Nonresponse adjustments take these differences into account so that weighted estimates reduce the biases that would otherwise be introduced.

Summary

The methodology used for the 2019 survey of the Jewish community of Greater Philadelphia and the Jewish Americans in 2020 Study represents a major change from past surveys. They are the first studies of the Jewish population in the U.S. to use address-based sampling. They also used improved definitions for identifying Jews. The approaches described in this article effected improvements by:

  • Expanding coverage to include virtually all households, including those not affiliated with the local Jewish community

  • Improving response rates

  • Obtaining better geographic targeting of communities of interest

  • Using neutral, unbiased language to identify Jewish households

  • Employing a more inclusive definition of Jewish households (for Philadelphia)

We have laid out reasons why we strongly believe that future studies of the Jewish community should also include these methodological improvements. In applying these methodologies, there are many decisions that will have to be made due to the unique characteristics of each local community, including:

  • There is a trade-off between using the lists of known Jews or likely Jews and the non-list ABS sample. While it is more cost effective to obtain completed responses from the list sample, those respondents represent only a portion of the Jewish community, namely, those who are already involved or connected in some way. The non-list ABS sample is less efficient (in terms of raw numbers of completed surveys) and therefore more expensive, but it tends to identify Jewish households that are less connected to Jewish institutions, thus improving the representativeness of the Jewish population estimates and adding more new information than other surveys.

  • It may not be worthwhile to include low eligibility lists. Low eligibility lists (purchased “likely Jewish” names and lists of college students) are not as useful at targeting Jewish households. Only if there is limited coverage of the Jewish population on high eligibility lists, or if college students are of particular interest, are these lists likely to be worthwhile.

  • Sponsorship should be carefully considered in communication materials to list and non-list ABS samples. Prominently identifying a Jewish organization as the sponsor can bias participation, but not including a recognized sponsor can adversely impact response rates. Ideally, a neutral organization (or group of sponsors crossing religious boundaries) can be named as the sponsor.

  • Screening questions need to be designed carefully to capture a broad reflection of Jewish households. It is important to recognize that Jewish identity encompasses not only religion, but also ethnicity, heritage, culture, and upbringing.

  • Offering the survey in multiple languages should be considered in the context of current events. The decision on which languages to offer for both the screener and main study is typically made based on the population being surveyed. However, it is important also to consider the current political environment and how the general population will react to this choice of languages. They may not understand the rationale and may infer other meanings that can affect response rates.

  • Multiple modes of data collection, used in a sequential manner, are preferable to offering a single mode of response. We found that all age groups participated via the web, but some populations lack access to the internet, are not comfortable responding to surveys online, or do not know how to do so. Nonresponse follow-up can be done by mail and/or telephone. Research has found (Montaquila et al. 2013; Brick et al. 2011) that mail is more effective than telephone for nonresponse follow-up in mail-based ABS studies, but selective telephone follow-up may be included as an additional step to gain cooperation.

  • ABS surveys are highly effective data collection tools (Harter et al. 2016) for religious, or other, communities of all sizes. In general, ABS surveys do not cost more than RDD surveys (see footnote 5), but one must be careful to compare approaches that include the non-affiliated Jewish population, not simply those already connected to the community. ABS data collection can generally be completed more quickly, because the number of telephone interviews completed per week is limited by the number of interviewers. For studies such as those described here, ABS data collection can generally be completed within two months, assuming USPS mail delivery times return to pre-pandemic levels (though in the case of Philadelphia we extended the data collection period to gather a larger number of completed cases).

  • For smaller communities it will be difficult to decide whether to spend the money for an accurate survey, given their generally lower level of resources. If they can team with other religious denominations in the community, the costs can be significantly reduced, making ABS possible.