Script / Documentation
Welcome back to Just Facts Academy, where you learn how to research like a genius.
Remember, applying the 7 Standards of Credibility you learned in the first video series and putting in the effort moves you way ahead of the pack. And now, we’re taking it up a notch!
So let’s add to your lead and shine the spotlight on surveys.
Quick survey: Do you believe surveys are reliable?
Some believe surveys—also known as polls—can never be accurate due to some notable failures.
Others believe any poll that tells them what they want to hear.
And unfortunately, too many trust their personal pollster, aka, gut instincts.
The fact is surveys are indispensable tools for discovering valuable information we can use in practical ways. However, we need to understand their inner workings in order to assess their accuracy.
So, let’s take a closer look at surveys, learn the right questions to ask, and if you stay tuned, we’ll give you free access to a Just Facts Academy secret weapon.
It’s expensive and frequently impossible to collect information from every person in a town, state, nation, or the world so that we can know about their needs, wants, standards of living, health status, political views, and thousands of other insightful measures.
Want to give it a shot? I didn’t think so.
Fortunately, we can obtain such data through scientific surveys, and there’s often no other reasonable way to get it.
In fact, people commonly use data from surveys without even knowing it. Official data on crime,[1] education,[2] employment,[3] the economy,[4] and an enormous array of other statistics are commonly collected through surveys.[5] And this information is widely used by everyone from students and CEOs to farmers and world leaders.[6]
If conducted correctly, surveys can achieve considerable accuracy while polling only a tiny portion of a population. As the book Statistics for K–8 Educators states, “a national random sample of 1,000 people can accurately represent 200 million people.”[7]
I know that may sound crazy, but the textbook Statistics: Concepts and Controversies explains why it’s true:
The variability of a statistic from a random sample does not depend on the size of the population as long as the population is at least 100 times larger than the sample. … Imagine sampling harvested corn by thrusting a scoop into a lot of corn kernels. The scoop doesn’t know whether it is surrounded by a bag of corn or by an entire truckload. As long as the corn is well mixed (so that the scoop selects a random sample), the variability of the result depends only on the size of the scoop.[8]
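If you want to see that scoop principle in action, here is a minimal simulation sketch (our own illustration, not part of the video or the textbook): it draws random samples of 1,000 from populations of very different sizes and shows that the typical error barely changes.

```python
# Minimal simulation sketch (illustrative only): the accuracy of a random sample
# depends on the sample size, not on the size of the population it came from.
import random

def typical_error(population_size, true_share=0.5, sample_size=1000, trials=2000):
    """Draw many random samples and report how far the sample percentage
    usually lands from the true population percentage (95th percentile)."""
    yes_count = int(population_size * true_share)  # the first yes_count "people" say yes
    errors = []
    for _ in range(trials):
        sample = random.sample(range(population_size), sample_size)
        estimate = sum(1 for person in sample if person < yes_count) / sample_size
        errors.append(abs(estimate - true_share))
    errors.sort()
    return errors[int(0.95 * trials)]

if __name__ == "__main__":
    random.seed(0)
    # Whether the "truckload" holds 20 thousand or 200 million people, a random
    # sample of 1,000 typically misses the true 50% share by roughly 3 points.
    for size in (20_000, 2_000_000, 200_000_000):
        print(f"Population {size:>11,}: off by about {typical_error(size):.3f}")
```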
Okay, that makes sense, but then why are polls often wrong?
Here are the usual culprits:
First, they can be way off base if they don’t use random samples of respondents.[9] [10] [11] Remember the kernels example? Oops, wrong kernels. There we go. The scoop is only an accurate representation of the whole if all the kernels are mixed well.
Consider internet, mail, and email surveys. These are frequently suspect because although they may seem random, the people who respond are typically different in important ways from the people who don’t.
As explained in the textbook Mind on Statistics, “Surveys that simply use those who respond voluntarily are sure to be biased in favor of those with strong opinions or with time on their hands.”[12]
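To see how much damage that can do, here is a minimal simulation sketch (our own illustration, with made-up response rates): when the people who choose to respond differ from the people who don’t, collecting more responses doesn’t shrink the bias.

```python
# Minimal simulation sketch (illustrative only) of nonresponse bias: if the
# people who answer differ systematically from the people who don't, the
# survey stays biased no matter how many responses are collected.
import random

def voluntary_response_poll(n_contacted, true_share=0.30,
                            respond_if_agree=0.60, respond_if_disagree=0.20):
    """Each contacted person holds an opinion (1 = agrees, 0 = disagrees), but
    people who agree are assumed to be three times as likely to respond."""
    responses = []
    for _ in range(n_contacted):
        opinion = 1 if random.random() < true_share else 0
        responds = random.random() < (respond_if_agree if opinion else respond_if_disagree)
        if responds:
            responses.append(opinion)
    return sum(responses) / len(responses)

if __name__ == "__main__":
    random.seed(1)
    # True agreement is 30%, yet the poll reports roughly 56% -- and contacting
    # ten or a hundred times as many people does not close the gap.
    for contacted in (1_000, 10_000, 100_000):
        print(contacted, round(voluntary_response_poll(contacted), 2))
```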
Seemingly blind to that reality, a so-called fact checker named PolitiFact repeatedly cites an article in a science journal claiming to “assess the scientific consensus on climate change through an unbiased survey of a large and broad group of Earth scientists.”[13] [14]
However, the article was based on an unweighted internet survey with a 31% response rate and no way of knowing what the other 69% think or why they didn’t respond.[15] [16]
Does that seem broad or unbiased to you?
Bottom line: Even if a poll was conducted by scientists, surveyed scientists, and was published in a science journal, that doesn’t mean it’s a scientific poll.[17]
A book published by Pennsylvania State University Press says it well: “Scientific polls use sampling procedures where random samples are used, that is, where each individual in the group has an equal chance of being selected into the sample, or where some variation on this pattern is used to account” for any differences.[18]
But there is a way to account for such differences. It’s a process called weighting. This involves “adding extra ‘weight’ to the responses of underrepresented groups.”[19]
For example, if 40% of respondents to a poll of registered voters are female, pollsters may place more mathematical weight on those women’s responses, because 53% of registered voters are women.[20] Pollsters also weight responses by race, age, education, income, and many other variables.[21] [22] [23]
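As a rough illustration of the arithmetic (our own sketch with made-up numbers, not any polling firm’s actual procedure), here is how up-weighting an underrepresented group changes a result:

```python
# Minimal weighting sketch (illustrative only): give each group's responses a
# weight equal to its true population share divided by its share of respondents.
def weighted_share(responses, population_shares):
    """responses: list of (group, answer) pairs, answer is 1 for 'yes', 0 for 'no'.
    population_shares: the true share of each group in the target population."""
    counts = {}
    for group, _ in responses:
        counts[group] = counts.get(group, 0) + 1
    total = len(responses)
    weights = {g: population_shares[g] / (counts[g] / total) for g in counts}

    weighted_yes = sum(weights[g] * answer for g, answer in responses)
    weighted_total = sum(weights[g] for g, _ in responses)
    return weighted_yes / weighted_total

if __name__ == "__main__":
    # Hypothetical poll of 10 registered voters: 4 women (3 of 4 say yes),
    # 6 men (2 of 6 say yes). Women are 40% of respondents but 53% of voters.
    poll = [("F", 1), ("F", 1), ("F", 1), ("F", 0),
            ("M", 1), ("M", 1), ("M", 0), ("M", 0), ("M", 0), ("M", 0)]
    print("Unweighted yes share:", sum(a for _, a in poll) / len(poll))              # 0.50
    print("Weighted yes share:", round(weighted_share(poll, {"F": 0.53, "M": 0.47}), 2))  # 0.55
```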
However, weighting does not guarantee that a poll’s results are accurate—particularly if respondents differ in ways that transcend the factors that are weighted.[24] [25] [26]
Here’s another important factor to consider: even scientific surveys with perfectly random samples can still be inaccurate if respondents have reasonable motives to lie. For example, unauthorized immigrants who participate in polls sometimes claim that they are citizens to hide that they are in the U.S. illegally.[27]
Finally, as explained in an academic book about statistics:
The wording of questions is the most important influence on the answers given to a sample survey. Confusing or leading questions can introduce strong bias, and changes in wording can greatly affect a survey’s outcome. … Don’t trust the results of a sample survey until you have read the exact questions asked.[28] [29]
So, let’s recap. Before you blindly trust or dismiss a survey, it’s important to ask these questions:
- How was the sample gathered? Was it truly random and scientific, or were certain people more likely to participate than others?
- Were the results weighted? If they were, could the respondents differ in ways beyond the factors that are weighted?
- Is there a reasonable motive for the surveyed population to lie?
- What is the exact wording of the questions? Were they phrased in a way that could bias the answers?
If the survey doesn’t present the exact questions and any other vital info about how the data was gathered, that’s a big red flag. It’s best to file it in the “Something’s Fishy” folder.
And finally, what’s the margin of error? This is often included in poll results, but there’s more to it than meets the eye. First, this measure of uncertainty only accounts for something called the sampling error. It doesn’t include all the other pitfalls we’ve already discussed.[30] [31] [32]
Without getting into the details, sampling error is related to the number of people in the survey. Generally speaking, the more people surveyed, the lower the sampling error.[33] A common rule of thumb is to divide 1 by the square root of the number of people surveyed, which works out to about 3% for a sample of 1,000.[34]
But be aware, this is just a rough high-end estimate for the margin of sampling error. It could actually be much smaller in certain cases.[34] If you really need to be precise, have we got something for you.
Are you ready? And now presenting the math-teacher-approved, Ph.D.-verified Just Facts Academy Margin of Error Calculator. Using this Excel file, you can determine a more precise sampling margin of error to judge the reliability of other surveys or create your own.
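If you’re following along in text, here is a minimal sketch of the math behind such a calculator (our own illustration based on the formulas quoted in footnote 34, not the contents of the JFA Excel file):

```python
# Minimal sketch (illustrative only) of the two standard 95% margin-of-error
# formulas: the conservative 1/sqrt(n) rule of thumb and the more precise
# version that uses the sample proportion.
from math import sqrt

def conservative_moe(sample_size):
    """Rough high-end estimate: 1 / sqrt(n). For n = 1,000 this is about 3.2%."""
    return 1 / sqrt(sample_size)

def precise_moe(sample_proportion, sample_size):
    """More precise 95% margin of sampling error: 2 * sqrt(p * (1 - p) / n)."""
    p = sample_proportion
    return 2 * sqrt(p * (1 - p) / sample_size)

if __name__ == "__main__":
    n = 1000
    print(f"Conservative:         {conservative_moe(n):.3f}")   # ~0.032
    print(f"Precise at p = 0.5:   {precise_moe(0.5, n):.3f}")   # ~0.032
    print(f"Precise at p = 0.1:   {precise_moe(0.1, n):.3f}")   # ~0.019
```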
Head over to the URL on the screen and download it today.
Armed with your JFA Margin of Error Calculator and equipped with the right knowledge and questions, you can now sort out surveys and probe polls like a genius.
Footnotes
[1] Report: “Criminal Victimization, 2021.” By Alexandra Thompson and Susannah N. Tapp. U.S. Department of Justice, Bureau of Justice Statistics. September 2022. <bjs.ojp.gov>
Page 17:
Survey coverage
The Bureau of Justice Statistics’ (BJS) National Crime Victimization Survey (NCVS) is an annual data collection carried out by the U.S. Census Bureau. The NCVS is a self-report survey that is administered annually from January 1 to December 31. Annual NCVS estimates are based on the number and characteristics of crimes that respondents experienced during the prior 6 months, excluding the month in which they were interviewed. Therefore, the 2021 survey covers crimes experienced from July 1, 2020 to November 30, 2021, with March 15, 2021 as the middle of the reference period. Crimes are classified by the year of the survey and not by the year of the crime.
The NCVS is administered to persons age 12 or older from a nationally representative sample of U.S. households. It collects information on nonfatal personal crimes (rape or sexual assault, robbery, aggravated assault, simple assault, and personal larceny (purse snatching and pocket picking)) and household property crimes (burglary or trespassing, motor vehicle theft, and other types of theft).
[2] Webpage: “Elementary/Secondary Surveys & Programs.” U.S. Department of Education, National Center for Education Statistics. Accessed May 26, 2023 at <nces.ed.gov>
The ED School Climate Surveys (EDSCLS) are a suite of survey instruments for schools, districts, and states by the U.S. Department of Education’s (ED) National Center for Education Statistics (NCES). … The EDSCLS is conducting a National Benchmark Study collecting data from a nationally representative sample of schools across the United States. …
The School Survey on Crime and Safety (SSOCS) collects information on crime and safety from U.S. public school principals. SSOCS was administered in the spring of 2000 and again in the spring of 2004. SSOCS is a nationally representative, cross-sectional survey of 3,000 public elementary and secondary schools. Data are collected on such topics as frequency and types of crimes at school, frequency and types of disciplinary actions at school, perceptions of other disciplinary problems, and descriptions of school policies and programs concerning crime and safety.
[3] Webpage: “Labor Force Statistics from the Current Population Survey: Labor Force Characteristics.” Bureau of Labor Statistics. Last modified February 3, 2023. <www.bls.gov>
The Bureau of Labor Statistics (BLS) has two monthly surveys that measure employment levels and trends: the Current Population Survey (CPS), also known as the household survey, and the Current Employment Statistics (CES) survey, also known as the payroll or establishment survey.
Both surveys are needed for a complete picture of the labor market.
The payroll survey (CES) is designed to measure employment, hours, and earnings in the nonfarm sector, with industry and geographic detail. The survey is best known for providing a highly reliable gauge of monthly change in nonfarm payroll employment. A representative sample of businesses in the U.S. provides the data for the payroll survey.
The household survey (CPS) is designed to measure the labor force status of the civilian noninstitutional population with demographic detail. The national unemployment rate is the best-known statistic produced from the household survey. The survey also provides a measure of employed people, one that includes agricultural workers and the self-employed. A representative sample of U.S. households provides the information for the household survey.
[4] Presentation: “The Building Blocks of Gross Domestic Product.” By John Sperry (Survey Statistician, U.S. Census Bureau) and Marissa Crawford (Economist, U.S. Bureau of Economic Analysis). Census Bureau, March 30, 2016. <www2.census.gov>
Annual Survey of Manufactures …
Monthly and Annual Retail Trade Surveys …
Monthly and Annual Wholesale Trade Surveys …
Quarterly Services Survey ….
Annual Capital Expenditure Survey …
Housing Surveys conducted by Census for Housing and Urban Development …
Bureau of Labor Statistics employment occupational survey …
R&D Surveys conducted by Census for the National Science Foundation …
Government Finances series of annual surveys in non-Census years …
Annual surveys of pensions, employment …
Office of Travel and Tourism Industries survey of international air travelers
[5] Report: “American Community Survey Information Guide.” U.S. Census Bureau, October 2017. <www.census.gov>
Page 1:
This information guide provides an overview of the U.S. Census Bureau’s American Community Survey (ACS). The ACS is a nationwide survey that collects and produces information on social, economic, housing, and demographic characteristics about our nation’s population every year. This information provides an important tool for communities to use to see how they are changing. When people fill out the ACS form, they are helping to ensure that decisions about the future of their community can be made using the best data available. Decision-makers require a clear picture of their population so that scarce resources can be allocated efficiently and effectively.
Every year, the Census Bureau contacts over 3.5 million households across the country to participate in the ACS. To help those responding to the ACS, this information guide contains information on the survey aspects that affect the American public the most: ACS collection procedures, questions asked in the ACS, uses and importance of each question, and tools to access ACS estimates.
Page 2:
Each completed survey is important because it is a building block used to create statistics about communities in America. The information, collected from all over the United States by the ACS and throughout Puerto Rico by the Puerto Rico Community Survey (PRCS), serve as an impartial measuring stick that is used as the basis for decisions that affect nearly every aspect of our lives. People who receive the ACS have the responsibility of responding so that the statistical portrait of their community is as complete and accurate as possible. Every ACS survey is an opportunity for a respondent to help affect what their community receives.
Page 10:
ACS Subjects and Data Products
Population
Age
Ancestry
Citizenship Status
Commuting (Journey to Work) and Place of Work
Disability Status
Educational Attainment and School Enrollment
Employment Status
Fertility
Grandparents as Caregivers
Health Insurance Coverage
Hispanic or Latino Origin
Income and Earnings
Industry, Occupation, and Class of Worker
Language Spoken at Home
Marital History, Marital Status
Migration/Residence 1 Year Ago
Period of Military Service
Place of Birth
Poverty Status
Race
Relationship to Householder
Sex
Undergraduate Field of Degree
VA Service-Connected Disability Status
Veteran Status
Work Status Last Year
Year of Entry
Housing
Acreage and Agricultural Sales
Bedrooms
Computer and Internet Use
Food Stamps/Supplemental Nutrition Assistance Program (SNAP)
House Heating Fuel
Kitchen Facilities
Occupancy/Vacancy Status
Occupants Per Room
Plumbing Facilities
Rent
Rooms
Selected Monthly Owner Costs
Telephone Service Available
Tenure (Owner/Renter)
Units in Structure
Value of Home
Vehicles Available
Year Householder Moved Into Unit
Year Structure Built
[6] Report: “American Community Survey Information Guide.” U.S. Census Bureau, October 2017. <www.census.gov>
Page 1:
This information guide provides an overview of the U.S. Census Bureau’s American Community Survey (ACS). The ACS is a nationwide survey that collects and produces information on social, economic, housing, and demographic characteristics about our nation’s population every year. This information provides an important tool for communities to use to see how they are changing.
Pages 3–4:
Who Uses the ACS and Why?
Federal Agencies:
Throughout the federal government, agencies use ACS estimates to inform public policymakers, distribute funds, and assess programs. For example, the U.S. Department of Justice, the U.S. Department of Labor, and the U.S. Equal Employment Opportunity Commission use ACS estimates to enforce employment antidiscrimination laws. The U.S. Department of Veterans Affairs uses ACS estimates to evaluate the need for health care, education, and employment programs for those who have served in the military; and the U.S. Department of Education uses ACS estimates to develop adult education and literacy programs.
State and Local Agencies:
Information from the ACS is critical to state and local agencies. Planners and policymakers use the up-to-date estimates to evaluate the need for new roads, hospitals, schools, senior services, and other basic services. In addition, ACS data provide local communities with important information about their citizens, such as educational attainment, work commuting patterns, and languages spoken.
Nongovernmental Organizations:
ACS estimates are available to the public and are routinely used by researchers, nonprofit organizations, and community groups. These groups produce reports, research papers, business plans, case studies, datasets, and software packages. Some of these activities are designed to inform the public, some are designed to further business ventures, and some are used to apply for funding in the form of grants and donations for community projects.
Emergency Planners:
Emergency planners use ACS estimates to find local statistics critical to emergency planning, preparedness, and recovery efforts. When severe weather threatens or a natural disaster has occurred, ACS estimates provide important characteristics about the displaced population such as size, age, disability status, and the characteristics of housing that may be damaged or destroyed.
American Indians and Alaska Natives:
ACS estimates are available for tribal planners and administrators, as well as national organizations serving American Indians and Alaska Natives, to use in planning for future economic development, housing needs, and access to health and educational services. In combination with information from tribal administrative records, ACS estimates complete the portrait of the community and provide an enhanced view of a community’s current and future needs.
Businesses:
Businesses use ACS estimates to inform important strategic decisionmaking. ACS statistics can be used as a component of market research. They can provide information about concentrations of potential employees with a specific education or occupation, communities that could be good places to build offices or facilities, and information about people that might need their products or services. For example, someone scouting a new location for an assisted-living center might look for an area with a large proportion of seniors and a large proportion of people employed in nursing occupations.
Educators:
ACS estimates are available for educators to teach concepts and skills, such as statistical literacy, social studies, geography, and mathematics. Because the ACS is updated annually, it provides timely information for students every year.
Journalists:
Journalists use ACS estimates to highlight and investigate the issues that are important to each community. Articles frequently appear, across the country, on topics such as commuting and transportation, unemployment and earnings, education, and homeownership. Additionally, the wealth of ACS statistics allows journalists to paint a portrait of small communities as they respond to changes in population, employment, and housing needs.
Public:
People use ACS estimates to answer questions they have about their own community and other communities. If a person wants to see how they compare with their neighbors or find a new place to live, they can look to the ACS to provide a wealth of information. The ACS provides useful statistics about the median income of an area, the median age of the residents, the median house value, and monthly household expenses. The ACS is a good source of information on commute to work times and types of transportation used by the community. These statistics, and many more, are available to the public for communities across the United States.
[7] Book: Statistics for K–8 Educators. By Robert Rosenfeld. Routledge, 2013.
Page 93:
It is remarkable that the margin of error when you estimate a percentage depends only on p and n. This explains why a national random sample of 1,000 people can accurately represent 200 million people. The p from the sample of 1,000 people is not likely to be more than 3% off from what you would find if you did include all 200 million. The size of the population, 200 million, does not appear in the formula at all. In that same vein, whether your population is 20 thousand or 20 million people, if you want to be confident that your survey is accurate within 3% you will need to sample about 1,000 people. Most critically, this conclusion assumes that you have really picked the sample randomly from the population, and not systematically included or excluded any groups or individuals.
In general, larger random samples will produce smaller margins of error. However, in the real world of research where a study takes time and costs money, at a certain point you just can’t afford to increase the sample size. Your study will take too long or you may decide the increase in precision isn’t worth the expense. For instance, if you increase the sample size from 1,000 to 4,000 the margin of error will drop from about 3% to about 2%, but you might quadruple the cost of your survey.
[8] Textbook: Statistics: Concepts and Controversies (6th edition). By David S. Moore and William I. Notz. W. H. Freeman and Company, 2006.
Pages 42–43:
The variability of a statistic from a random sample does not depend on the size of the population as long as the population is at least 100 times larger than the sample. …
Why does the size of the population have little influence on the behavior of statistics from random samples? Imagine sampling harvested corn by thrusting a scoop into a lot of corn kernels. The scoop doesn’t know whether it is surrounded by a bag of corn or by an entire truckload. As long as the corn is well mixed (so that the scoop selects a random sample), the variability of the result depends only on the size of the scoop.
This is good news for national sample surveys like the Gallup Poll. A random sample of size 1,000 or 2,500 has small variability because the sample size is large.
[9] Book: Statistics for K–8 Educators. By Robert Rosenfeld. Routledge, 2013.
Page 93:
In that same vein, whether your population is 20 thousand or 20 million people, if you want to be confident that your survey is accurate within 3% you will need to sample about 1,000 people. Most critically, this conclusion assumes that you have really picked the sample randomly from the population, and not systematically included or excluded any groups or individuals.
[10] Textbook: Statistics: Concepts and Controversies (6th edition). By David S. Moore and William I. Notz. W. H. Freeman and Company, 2006.
Pages 42–43:
The variability of a statistic from a random sample does not depend on the size of the population as long as the population is at least 100 times larger than the sample. …
This is good news for national sample surveys like the Gallup Poll. A random sample of size 1,000 or 2,500 has small variability because the sample size is large. But remember that even a very large voluntary response sample or convenience sample is worthless because of bias. Taking a larger sample doesn’t fix bias.
[11] Book: Appeal to Popular Opinion. By Douglas N. Walton. Pennsylvania State University Press, 1999.
Page 258:
Scientific polls use sampling procedures where random samples are used, that is, where each individual in the group has an equal chance of being selected into the sample, or where some variation on this pattern is used to account for variations in the population that need to be reflected in the sample (Campbell 1974). Scientific popular-opinion polls also compute numerical margins of error indicating the extent to which results can be expected to vary. However, there is a problem here posed by the increasingly common use of unscientific polls, often called “self-selected” opinion polls, or “SLOP” polls (Gooderham 1994). This is the kind of poll the media loves to take to get a gauge of the public pulse, where readers, for example, are asked to tick off a “yes,” “no,” or “not sure” box in response to a number of questions published in a newspaper or a magazine, and the respondent is then requested to e-mail or fax responses back to the publication. These SLOP polls ignore the probability sampling requirements used in a scientific poll.
[12] Textbook: Mind on Statistics (4th edition). By Jessica M. Utts and Robert F. Heckard. Brooks/Cole Cengage Learning, 2012.
Pages 164–165:
Surveys that simply use those who respond voluntarily are sure to be biased in favor of those with strong opinions or with time on their hands. …
According to a poll taken among scientists and reported in the prestigious journal Science … scientists don’t have much faith in either the public or the media. … It isn’t until the end of the article that we learn who responded: “The study reported a 34% response rate among scientists, and the typical respondent was a white, male physical scientist over the age of 50 doing basic research.” … With only about a third of those contacted responding, it is inappropriate to generalize these findings and conclude that most scientists have so little faith in the public and the media.
[13] Search: “Examining the Scientific Consensus on Climate Change” site:politifact.com. Google, May 26, 2023. <www.google.com>
About 8 results (0.26 seconds)
Santorum: UN climate head debunked widely cited 97 …
politifact.com
Sep 2, 2015 — EOS, ‘Examining the Scientific Consensus on Climate Change,’ Jan. 20, 2009. Proceedings of the National Academy of Sciences of the United …
Do scientists disagree about global warming?
politifact.com
Aug 14, 2011 — Eos, “Examining the Scientific Consensus on Climate Change,” Jan. 20, 2009. Oregon Petition Wall Street Journal, “My Nobel Moment,” Nov.
Don Beyer says 97 percent of scientists believe humans …
politifact.com
Apr 4, 2016 — 3, 2004. EOS, “Examining the scientific consensus on climate change,” Jan. 20, 2009. Energy Policy, “Quantifying the consensus on anthropogenic …
Rick Perry says more and more scientists are questioning …
politifact.com
Aug 22, 2011 — Eos, “Examining the Scientific Consensus on Climate Change,” Jan. 20, 2009. Oregon Petition Wall Street Journal, “My Nobel Moment,” Nov.
Wayne Smith says science has not shown greenhouse …
politifact.com
May 2, 2013 — Draft article, “Examining the Scientific Consensus on Climate Change,” Peter T. Doran and Maggie Kendall Zimmerman, Earth and Environmental …
GOP candidate Kevin Coughlin says there’s not much …
politifact.com
Aug 31, 2011 — Eos, “Examining the Scientific Consensus on Climate Change,” Jan. 20, 2009. Global Warming Petition Project. Read About Our Process.
Santorum cites flawed climate change figure, and …
politifact.com
Sep 1, 2015 — EOS, ‘Examining the Scientific Consensus on Climate Change,’ Jan. 20, 2009. Climatic Change, ‘Climate change: a profile of US climate …
Global warming is a hoax, says Louisiana congressional …
politifact.com
Aug 1, 2014 — Eos, Transactions American Geophysical Union, “Examining the Scientific Consensus on Climate Change,” Jan. 20, 2009.
[14] Article: “Examining the Scientific Consensus on Climate Change.” By Peter T. Doran and Maggie Kendall Zimmerman. Eos, June 3, 2011. Pages 22–23. <agupubs.onlinelibrary.wiley.com>
Page 22: “The objective of our study presented here is to assess the scientific consensus on climate change through an unbiased survey of a large and broad group of Earth scientists.”
[15] Article: “Examining the Scientific Consensus on Climate Change.” By Peter T. Doran and Maggie Kendall Zimmerman. Eos, June 3, 2011. Pages 22–23. <agupubs.onlinelibrary.wiley.com>
Page 22: “With 3,146 individuals completing the survey, the participant response rate for the survey was 30.7%. This is a typical response rate for Web-based surveys [Cook et al., 2000; Kaplowitz et al., 2004].”
[16] Book: Designing and Conducting Survey Research: A Comprehensive Guide (4th edition). By Louis M. Rea and Richard A. Parker. Jossey-Bass, 2014.
Pages 195–196:
Nonresponse Bias
In statistical primary data gathering, where people are involved in answering questions, the response rate to the process will rarely be 100 percent. There are almost always some members of the initial sample who cannot be contacted, refuse to participate, or for some other reason fail to complete the data collection process.
Nonresponse bias is the departure of the sample statistics from their true population values owing to the absence of responses from some portion of the population that differs systematically from those portions of the population that did respond.
To the extent that responses are not received from 100 percent of the sample, questions arise about using the results from the ultimate sample to provide a reliable estimate of the true population. This concern results from the fact that if the sampled cases from which data are not received are different in some important way from those who responded, then the estimates from the sample can be potentially biased. A low cooperation or response rate can do more damage in rendering a survey’s results questionable than does a small sample, because there may be no valid way to scientifically infer the characteristics of the population represented by the nonrespondents.
In order to make proper use of inferential statistical techniques (generalizing findings from a sample to the general population from which the sample was drawn) and have an acceptable degree of confidence in the representativeness of the data obtained, the sample itself must be random in both selection method and the actual distribution of participant characteristics. If those who do not respond differ in a significant way from those who do respond, then the sample becomes nonrandom even if it was randomly selected. A significant nonresponse bias can convert a randomly selected sample into one that is nonrandom in actual participation and respondent distribution. Even if a researcher chooses a truly random target sample, the final sample cannot be assumed to be representative (as randomness assumes) unless those who respond are random.
Random (probability) sampling entails letting chance determine the selection of members of the population for the sample. However, to the extent that randomly selected respondents do not participate in a manner consistent with that randomness, the randomness of those who ultimately respond grows more and more suspect and problematic, and possible nonresponse bias grows.
[17] Textbook: Mind on Statistics (4th edition). By Jessica M. Utts and Robert F. Heckard. Brooks/Cole Cengage Learning, 2012.
Pages 164–165:
Surveys that simply use those who respond voluntarily are sure to be biased in favor of those with strong opinions or with time on their hands. …
According to a poll taken among scientists and reported in the prestigious journal Science … scientists don’t have much faith in either the public or the media. … It isn’t until the end of the article that we learn who responded: “The study reported a 34% response rate among scientists, and the typical respondent was a white, male physical scientist over the age of 50 doing basic research.” … With only about a third of those contacted responding, it is inappropriate to generalize these findings and conclude that most scientists have so little faith in the public and the media.
[18] Book: Appeal to Popular Opinion. By Douglas N. Walton. Pennsylvania State University Press, 1999.
Page 258:
Scientific polls use sampling procedures where random samples are used, that is, where each individual in the group has an equal chance of being selected into the sample, or where some variation on this pattern is used to account for variations in the population that need to be reflected in the sample (Campbell 1974). Scientific popular-opinion polls also compute numerical margins of error indicating the extent to which results can be expected to vary. However, there is a problem here posed by the increasingly common use of unscientific polls, often called “self-selected” opinion polls, or “SLOP” polls (Gooderham 1994). This is the kind of poll the media loves to take to get a gauge of the public pulse, where readers, for example, are asked to tick off a “yes,” “no,” or “not sure” box in response to a number of questions published in a newspaper or a magazine, and the respondent is then requested to e-mail or fax responses back to the publication. These SLOP polls ignore the probability sampling requirements used in a scientific poll.
[19] Textbook: American Government and Politics Today: Essentials (2017–2018 [19th] edition). By Barbara A. Bardes, Mack C. Shelley, and Steffen W. Schmidt. Cengage Learning, 2018.
Page 165:
Polling firms address this problem of obtaining a true random sample by weighting their samples. That is, they correct for differences between the sample and the public by adding extra “weight” to the responses of underrepresented groups. For example, 20 percent of the respondents in a survey might state that they are evangelical Christians. Based on other sources of information, however, the poll taker believes that the true share of evangelicals in the target population is 25 percent. Therefore, the responses of the evangelical respondents receive extra weight so that they make up 25 percent of the reported result.
[20] Article: “In Year of Record Midterm Turnout, Women Continued to Vote at Higher Rates Than Men.” By Hannah Hartig. Pew Research, May 3, 2019. <www.pewresearch.org>
“In 2018, women made up about the same share of the electorate as they did in the previous five midterms; 53% of voters were women and 47% were men.”
[21] Paper: “Nonvoluntary Sexual Activity Among Adolescents.” By Kristin Anderson Moore & others. Family Planning Perspectives, May–June 1989. Pages 110–114. <www.jstor.org>
Page 111: “The figures in this chart are based upon raw numbers. Since the survey had an overrepresentation of black people as compared to the general U.S. population, the figures in the previous footnote are weighted to compensate for this overrepresentation.”
[22] Article: “Why Americans Use Social Media.” By Aaron Smith. Pew Research, November 14, 2011. <www.pewresearch.org>
Weighting is generally used in survey analysis to compensate for sample designs and patterns of non-response that might bias results. A two-stage weighting procedure was used to weight this dual-frame sample. The first-stage weight is the product of two adjustments made to the data – a Probability of Selection Adjustment (PSA) and a Phone Use Adjustment (PUA). The PSA corrects for the fact that respondents in the landline sample have different probabilities of being sampled depending on how many adults live in the household. The PUA corrects for the overlapping landline and cellular sample frames.
The second stage of weighting balances sample demographics to population parameters. The sample is balanced by form to match national population parameters for sex, age, education, race, Hispanic origin, region (U.S. Census definitions), population density, and telephone usage. The White, non-Hispanic subgroup is also balanced on age, education and region. The basic weighting parameters came from a special analysis of the Census Bureau’s 2010 Annual Social and Economic Supplement (ASEC) that included all households in the continental United States. The population density parameter was derived from Census 2000 data. The cell phone usage parameter came from an analysis of the January-June 2010 National Health Interview Survey.
[23] Report: “The National Intimate Partner and Sexual Violence Survey: 2010 Summary Report.” U.S. Centers for Disease Control and Prevention, National Center for Injury Prevention and Control, November 2011. <www.cdc.gov>
Page 104:
To generate estimates representative of the U.S. adult population, weights reflecting sampling features, non-response, coverage, and sampling variability were developed for analyses. There are several main weight components contributing to the final sampling weights: selection, multiplicity, non-response, and post-stratification. The selection weight accounts for different sampling rates across states, the varying selection probabilities in the landline and in the cell phone frames, the within household probability of selection, and the subsampling of non-respondents in Phase Two of data collection. The multiplicity weight component takes into consideration that some sample members had both landline and cell phone services, thereby having multiple chances of entering the survey. The non-response weight accounts for the variation in response rates within the selected sample. Finally, the post-stratification weight adjusts the product of the selection, multiplicity, and non-response weights to match the population distribution on main demographic characteristics. This is accomplished using benchmark counts from census projections to correct for both coverage and non-response, which allows the landline and cell phone samples to be merged together.
Two main sets of weights were computed for the analysis of NISVS data. Applying the same principles in constructing the various weight components, one set of weights were computed for all partial and complete interviews, while another set of weights were computed for the complete interviews only. An interview is defined as “complete” if the respondent completed the screening, demographic, general health questions, and all questions on all five sets of violence victimization, as applicable. An interview is defined as “partial” if the respondent completed the screening, demographic, and general health questions and at least all questions on the first set of violence victimization (psychological aggression).
[24] “Report on the Economic Well-Being of U.S. Households in 2015.” Board of Governors of the Federal Reserve System, May 2016. <www.federalreserve.gov>
Page 70:
Although weights allow the sample population to match the U.S. population based on observable characteristics, similar to all survey methods, it remains possible that non-coverage or non-response results in differences between the sample population and the U.S. population that are not corrected using weights.
[25] Book: Designing and Conducting Survey Research: A Comprehensive Guide (4th edition). By Louis M. Rea and Richard A. Parker. Jossey-Bass, 2014.
Page 197:
Statistical adjustment, or weighting by observable variables, is one of the most common approaches used to address survey nonresponse. Weighting the data can help the researcher present results that are representative of the target population, where it can be assumed that no or little nonresponse bias exists in variables other than those being weighted.
[26] Textbook: American Government and Politics Today: Essentials (2017–2018 [19th] edition). By Barbara A. Bardes, Mack C. Shelley, and Steffen W. Schmidt. Cengage Learning, 2018.
Page 165:
It is relatively easy to correct a sample for well-known demographic characteristics, such as education, gender, race, ethnicity, religion, and geography. It is much harder—and more risky—to adjust for political ideology, partisan preference, or the likelihood of voting. The formulas that firms use to weight their responses are typically trade secrets and are not disclosed to the public or the press.
[27] Paper: “How Well Does the American Community Survey Count Naturalized Citizens?” By Jennifer Van Hook and James D. Bachmeier. Demographic Research, July 2, 2013. <www.demographic-research.org>
Page 2: “In the United States, data on naturalization and citizenship largely come from Census Bureau surveys, such as the Current Population Survey (CPS), the long form of the decennial Census (2000 and earlier), and the American Community Survey (ACS).”
Page 3:
There are good reasons to suspect that citizenship is inaccurately estimated in Census data. During the late 1990s, Passel and Clark (1997) compared the number of persons that are reported as naturalized in the 1990 Census and the 1996 Current Population Survey (CPS) with the number of naturalized citizens based on administrative data from the Immigration and Naturalization Service (INS). They found the Census/CPS estimates to be much higher than the INS-based estimates for two groups. Among new arrivals (those in the U.S. fewer than five years) from all national origins, about 75% of those who were reported as naturalized were probably not. Among longer-resident Mexican and Central American immigrants, about one-third of those who were reported as naturalized were probably not.
Page 5:
To assess the current level of citizenship reporting error, we estimated the number of naturalized citizens in mid-year 2010 by age group, sex, region of origin, and duration of residence based on the number of Office of Immigration Statistics (OIS) naturalization records. We then compared the OIS-based estimates with the corresponding numbers in the 2010 American Community Survey (ACS) (also a mid-year estimate). The difference between the two provides an indication of over- or under-representation of naturalized citizenship in the ACS.
Page 17:
Table 2 reports the naturalization estimates by sex, region of birth, and duration of U.S. residence. For both men and women from all origin regions, the estimated number of naturalized citizens in the ACS is substantially and significantly higher than the OIS-based estimates among immigrants with fewer than five years in the U.S. For example, the number of naturalized Mexican men with fewer than five years of U.S. residence is nearly 27 times higher (2587%) in the ACS than the OIS estimates. Another way to express this is that among the 16 thousand reporting as citizens in the ACS, only about 600 (or about 4 percent) are likely to actually be naturalized citizens. Among those in the U.S. for five or more years, the OIS-ACS gap is much lower in relative terms, and concentrated among Mexican men.
Page 19:
In Table 3, OIS and ACS estimates are presented for Mexican and non-Mexican men and women by age group by varying rates of emigration. We note that the OIS estimates do not always decline as emigration increases from the “low” to the “moderate” to the “high” series because of age crossovers in various emigration estimates. Regardless of assumptions about emigration, ACS estimates are especially high relative to the OIS-based estimates among Mexican men of all age groups and Mexican women aged 40 and older. The same pattern does not hold among non-Mexicans, among whom the discrepancy remains relatively low across all age groups.
[28] Book: The Practice of Statistics (4th edition). By Daren S. Starnes, Daniel S. Yates, and David S. Moore. W. H. Freeman and Company, 2010.
Page 224:
The wording of questions is the most important influence on the answers given to a sample survey. Confusing or leading questions can introduce strong bias, and changes in wording can greatly affect a survey’s outcome. …
Don’t trust the results of a sample survey until you have read the exact questions asked. The amount of nonresponse and the date of the survey are also important. Good statistical design is part, but only a part, of a trustworthy survey.
[29] Article: “Abortion Poll: Not Clear-Cut; Wording of a Question Makes a Big Difference.” By E.J. Dionne, Jr. New York Times, August 18, 1980. Page A15. <www.nytimes.com>
What do the American people think about abortion? It depends on how you ask them the question. …
Do you think there should be an amendment to the Constitution prohibiting abortions, or shouldn’t there be such an amendment? … Yes [=] 29% … No [=] 67% …
Do you believe there should be an amendment to the constitution protecting the life of the unborn child, or shouldn’t there be such an amendment? Yes [=] 50% … No [=] 39% …
[30] Article: “Why Americans Use Social Media.” By Aaron Smith. Pew Research, November 14, 2011. <www.pewresearch.org>
“In addition to sampling error, question wording and practical difficulties in conducting telephone surveys may introduce some error or bias into the findings of opinion polls.”
[31] Textbook: Statistics: Concepts and Controversies (6th edition). By David S. Moore and William I. Notz. W. H. Freeman and Company, 2006.
Page 42: “It is usual to report the margin of error for 95% confidence. If a news report gives a margin of error but leaves out the confidence level, it’s pretty safe to assume 95% confidence.”
[32] Book: Statistics for K–8 Educators. By Robert Rosenfeld. Routledge, 2013.
Page 91:
Why 95%? Why not some other percentage? This value gives a level of confidence that has been found convenient and practical for summarizing survey results. There is nothing inherently special about it. If you are willing to change from 95% to some other level of confidence, and consequently change the chances that your poll results are off from the truth, you will therefore change the resulting margin of error. At present, 95% is just the level that is commonly used in a great variety of polls and research projects.
[33] Book: Statistics for K–8 Educators. By Robert Rosenfeld. Routledge, 2013.
Page 92:
In general, larger random samples will produce smaller margins of error. However, in the real world of research where a study takes time and costs money, at a certain point you just can’t afford to increase the sample size. Your study will take too long or you may decide the increase in precision isn’t worth the expense. For instance, if you increase the sample size from 1,000 to 4,000 the margin of error will drop from about 3% to about 2%, but you might quadruple the cost of your survey.
[34] Textbook: Mind on Statistics (4th edition). By Jessica M. Utts and Robert F. Heckard. Brooks/Cole Cengage Learning, 2012.
Page 390:
Why Polling Organizations Report a Conservative Margin of Error
The formula 1/√n provides a conservative approximation of the margin of error that usually overestimates the actual size of the 95% margin of error, and thus leads to an underestimate of how confident we can be that our interval estimate covers the truth. This conservative number is used by polling organizations so that they can report a single margin of error for all questions on a survey. The more precise formula involves the sample proportion, so with that formula the margin of error might change from one question to the next within the same survey. The formula 1/√n provides a margin of error that works (conservatively) for all questions based on the same sample size, even if the sample proportions differ from one question to the next.
The Gallup Poll Margin of Error for n = 1,000. The Gallup Organization often uses samples of about 1,000 randomly selected Americans. For n = 1,000, the conservative estimate of margin of error is 1/√1,000 = .032, or about 3.2%. If the proportion for which the margin of error is used is close to .5, then the reported (conservative) margin of error of .032 will be a good approximation to the real, more accurate margin of error. However, if the proportion is much smaller or larger than .5, then using .032 as the margin of error will produce an interval that is wider than necessary for the desired 95% confidence. For instance, if p is .1, then a more precise estimate of the margin of error is 2 × √(.1 × .9 / 1,000) = .019. You can see that using .032 instead of .019 would give you an interval that is much wider than needed.