Records from assignment 2

The assignment was:

I found all the responses below to be acceptable. There are obvious variations in quality and completeness, but I do not have what I consider to be any objective way to rank the quality. My comments on this page, in red, are intended only to provoke thought, and are not evaluations.


Kyle Braud
Tuesday, 15 Sep 1998

The Wall Street Journal published an article, "Wonder Years", in the March 98 Classroom Edition, which explained how teenagers in the nineties differ from teenagers in the past. A poll conducted by the Rand Youth Poll supplemented the article. The poll used actual and estimated data to provide statistics that show the total amount of money spent by teenagers in a year. The poll did this for each year from 1953 to 1997. The quantitative variable discussed is the amount of money spent by all teenagers in a year. Another quantitative variable discussed is the total teenage population for each year. These two variables are combined on a chart that indicates a relationship between total teenage spending and total population. As the population rises, teenage spending rises; however, this information is not the purpose of the statistic.

The poll reports data about teenage spending without informing the reader of the methods used to accumulate the data. The poll's methods are unclear, which leaves the reader questioning the accuracy of the data. Some information is revealed, but not much. The money spent by teenagers comes from parents and jobs. The money could have been spent on anything a teenager might buy, which could be a pack of gum or a new car. No other information is revealed. Consequently, many questions arise. How many teens were polled? Were teens polled, or were parents of teens polled? Was the poll conducted by telephone, was participation voluntary, was the sample random, and were the questions biased? What was the margin of error, and how confident is the Rand Youth Poll in the statistics? This lack of information casts doubt on the statistic, because many important questions are left unanswered. The average amount of money spent by teenagers lacks significance not only because the poll lacks important information about its methods, but also because total teen spending is not important information. The poll may entertain an interested reader who enjoys meaningless statistics. The poll predicts that teenagers will set a new teenage spending record as we approach the year 2000, which may be true, even though the poll has not revealed its method for the prediction.

You raise reasonable questions, especially relating to the vagueness of the term "teen spending" and the lack of information on how the data were acquired. The point that you (and some other students) make---that many polls are reported for their entertainment value---is valuable. Your tone seems rather harsh; I don't know if this was intentional.


Mary Walter Zachary
Tue, 22 Sep 1998

I found an article in American Demographics that reported a 1995 poll on the percentage of men and women who thought certain gestures were romantic. Red roses were the most romantic gesture, with 52% of men and 51% of women saying that sending or receiving red roses was romantic. The least romantic was sending or receiving candy, with 21% of men and 21% of women. Taking special weekend trips, candlelight dinners out, and love letters were in the middle. According to the poll, men have become even less romantic than in the past, and even fewer men think of themselves as romantic.

The poll is interesting to look at, but it is impossible to attach any meaning to it. The definition of "romantic" varies from person to person. Also, a person's willingness to describe himself or herself as romantic may not actually reflect the way he or she behaves. The poll merely reveals the proportion of people willing to respond to the poll in a certain way.

Reasonable criticisms.


Martha Kathryn Hulgan
Wed, 23 Sep 1998

"LSU's Party-school ranking Slips"

This article is based on a survey of 56,000 college students by the Princeton Review. It mentions different rankings for LSU concerning various aspects of college life. For instance, LSU is #1 for "students who almost never study" and #22 on the party school list. The article says the survey is unscientific and based on student opinion. About 150-200 students from each of 311 schools took part in the survey. It does not say how students were chosen to participate in the survey (whether or not it was random sampling). It is possible that some students who were asked did not wish to be a part of the survey. Maybe those who were in the survey have different opinions from those who, for one reason or another, did not wish to participate. Because the results were reported as rankings, the difference between adjacent ranks could be large or small, so the ranking by itself really does not tell us very much.

I would have liked to see some elaboration on the point you make about the disadvantage of rankings. There's a lot that one can say about this.


Brandon Pleasant
Thursday, 24 Sep 1998

In the wake of the President Clinton/Monica Lewinsky scandal, it is hard to open a newspaper without finding an article involving the latest poll on the public's view of the situation. On Tuesday, August 25, 1998, one such article could be found on page 7A of USA Today. It involves a telephone poll conducted by USA Today, CNN, and Gallup, and it asks questions ranging from "Do you think the personal qualities of 'honest and trustworthy' apply to Clinton?" to "All things considered, are you glad Clinton is president?" In this poll, the question "Do you approve or disapprove of the way President Clinton is handling his job?" was asked of 1,317 adults by phone (Approve: 62%, Disapprove: 35%). Any astute math student will realize that this is only 97%, but the article explains that the totals may not equal 100% because not all the responses were shown. Also found in the fine print is the margin of error of plus or minus 3%. This means that Mr. Clinton's approval rating is between 59% and 65% (or his disapproval rating is between 32% and 38%). I feel that this article is accurate, although my opinion may be biased. Most of the results of this poll show President Clinton in a good light, which I agree with, so I will be prone to feel that the survey process was more accurate than, say, someone who voted for Bob Dole would. Someone who voted for Mr. Dole might argue that the sample was not random enough, or maybe that the sample size was not big enough to get a true feel for what the American people think. No matter how accurate one person might think a poll is, there will always be those unsatisfied with the method or the results.

What you say about the meaning of the margin of error is not correct. The fine print apparently does not mention at what level of confidence the 3% can be taken. One would assume, in a context like this, that it's probably 95%. The meaning is, roughly, that if polls of this size were conducted over and over, about 95% of them would produce an interval that contains the true population value; here that interval runs from 59% to 65%. Another thing with which I do not entirely agree is that polls are always open to objections. There are enough irrational people to provide objections to any proposition, no matter how it's justified, but it is possible to conduct a poll and draw conclusions in such a way that no rational person would argue with them.
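
A quick arithmetic check of the fine print, using the article's figures (1,317 adults, 62% approval) and the usual large-sample formula for a proportion at 95% confidence. This is a rough sketch of my own, not the pollsters' actual calculation, which the article does not describe.

    from math import sqrt

    # Rough check of the reported +/-3% margin of error at 95% confidence.
    # Figures from the article: n = 1,317 adults, 62% approve.
    n = 1317
    p = 0.62
    z = 1.96  # approximate 95% critical value

    moe = z * sqrt(p * (1 - p) / n)
    print(round(moe, 3))                          # about 0.026, roughly 3 points
    print(round(p - moe, 3), round(p + moe, 3))   # about 0.594 to 0.646

    # Pollsters often quote the conservative value computed at p = 0.5,
    # which for n = 1,317 is about 0.027, also "plus or minus 3%".

So the reported figures are at least internally consistent, whatever one thinks of the sampling.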


Evan Kilgore
Thursday, 24 Sep 1998

The question posed by this poll is "Should the federal government support basic scientific research, even if it brings no immediate benefit?" In the findings of this poll, 81% of people say that the government should support the research, while 12% say no and 7% are undecided. I find several problems with this poll. The first issue to be addressed is the definition of "immediate." Often, scientific experiments take several years, or even decades, to conclude. Research done years ago is just now yielding results. Because of this long, drawn-out process, "immediate" might mean something different in the context of scientific research. Another term that should be defined is "basic." What research would be deemed excessive or extreme and therefore undeserving of government support? Also, it is important to note that this poll reflects only the opinions of residents of California, a state known for radical political opinions that could differ greatly from those of the rest of the country. Finally, this poll is published by Research!America, "an alliance for discoveries in health." This is a group that has an obvious connection to the medical field and because of this is most likely in favor of government support for research. It is possible that, because of this connection, the organization could have manipulated the polling procedure in various ways in order to get a pro-research result. There is no mention of the polling methods used. There is no mention of the sample size either.

You note 3 important issues: the meaning of the question in the poll is vague, the source of the poll is not neutral, and the polling procedure is not described.


Brett Landry
Thursday, 24 Sep 1998

This article was more of a survey that appeared on the CNN web site recently. The poll deals with public opinion concerning the bearing that Clinton's admission had upon his ability to function as a world leader. As it states below the poll, the sample group is by no means representative of the population at large. Merely to participate in this poll requires a person to have Internet access, an interest in the news, and still enough fascination with the Clinton-Lewinsky matter to take the time to vote. These circumstances restrict the polling sample a great deal. One might venture to guess that the sample generally shows the opinion of people with at least a passing interest in politics and enough money to own their own PC, or at least to have access to one. The fact that this poll is self-selecting may also skew the results, since it is most likely that only people with a strong opinion on the matter would participate. Determining whether or not these opinions are reflected in the general public would require a totally different, more structured type of poll capable of reaching a sample reflective of the opinions of all Americans.

This is well-written, concise and complete. You avoid the strident tone of some of the other submissions, and by use of qualifying phrases such as "one might venture to guess," you achieve an appropriately objective and detached standpoint.


Brian Smith
Thursday, 24 Sep 1998

This article was based upon a poll conducted by the Atlanta Journal-Constitution and the University of North Carolina. All numerical information presented in this article takes the form of two quantitative variables: either a percentage or a number of people who do or do not believe in a certain item. The population that this poll questioned was made up of Southerners and non-Southerners, and the poll was conducted to determine their religious beliefs and how those beliefs related to their ideas about more speculative notions (UFOs, ESP, possession by the devil). The variables given can be determined by the following: Let C stand for the variable (in percentage points). To compute its value, determine how many people were questioned and how many of those people answered positively to a certain topic. Then, C = 100 x (number of people who answer positively to a question)/(number of people questioned).
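
The formula for C amounts to a one-line function; a minimal sketch (the function name and sample numbers are mine, for illustration only):

    def percent_positive(num_positive, num_questioned):
        # C, in percentage points: share of respondents answering positively.
        return 100 * num_positive / num_questioned

    print(percent_positive(450, 1000))   # 45.0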

While an interesting article, it lacks statistical validity. First of all, no information is given on just what determines whether a person counts as a Southerner or a non-Southerner. Thus the parameters given fall into question, for it is uncertain what population they are supposed to describe. Secondly, the poll's precision is to be questioned, as no information is given on a sampling frame or any population characteristics. Finally, bias could have come into play, since the Atlanta Journal-Constitution is a major newspaper of the South and a majority of the questions asked leaned toward "southern" topics.

I don't understand your last comment.


Laura Rosche
Thursday, 24 Sep 1998

"U.S. Workers long for good old days"
Infoseek news channel, September 1, 1998

SUMMARY: Shell Oil Company conducted a poll in which it asked Americans questions about their work environment.

The poll does not clearly state what questions the poll asked, nor does it give the confidence level. It does, however, give the name of the research firm, the date, the margin of error, and the number of randomly selected adults. But what good are these facts without knowing the purpose of the poll and the types of questions used to fulfill that purpose? According to the findings of this first "Shell Poll", 72% of the workers said they preferred "the security of staying with one employer for a long time and moving up the ladder." It doesn't say whether this was a response to a multiple-choice question or a compilation of answers, but I don't think 72% of the people would have come up with this spontaneously. 25% of the people said they were "very to fairly loyal" to their employers. Job security was apparently the main issue. But if all the questions were centered around job security, then that is what the subjects would have talked about. Who wouldn't prefer job security to the alternative of not working at all? What was the alternative given to the subjects? If the adults were in fact randomly selected, it is possible that the people surveyed had nothing to do with a work environment such as this. Did they just pick Shell Co. employees? Were business owners included? The population that the sample was selected from is not stated. Overall, this article was not fully documented, and seemingly had no purpose. The results seemed typical of what I think of as the American working class---I didn't need an article to spell it out.

When I read the article, I also was struck by how completely it failed to convey any sense of what the data was like, offering interpretations of data without supporting them with meaningful descriptions of the data. Because of this, mentioning the margin of error in the poll seemed ridiculous. Your negative reactions are justified.


Kyle Braud
Friday, 25 Sep 1998

A poll conducted by the National Constitution Center resulted in statistics that were used to measure a parameter of the entire American teenage population. The poll's methods were unclear. The telephone survey reported several statistics that point out that American teenagers do not know enough about the Constitution of the United States. The National Constitution Center asked 600 teenagers, between the ages of 13 and 17, several straightforward and unbiased questions about American government, especially the Constitution. The survey also asked the same teenagers questions about pop culture, in order to compare the teenagers' knowledge of pop culture with their knowledge of the Constitution. One statistic taken from the sample shows that only 41% of the polled teenagers can name the branches of government, but 59% can name the Three Stooges.

The National Constitution Center used its sample of 600 teenagers to judge our entire nation of teenagers. The Center claims a margin of error of only ±4%, which may be correct, but from the information revealed it seems unlikely. The description of the Center's sampling methods lacks important details, and the statistic could vary depending upon the exact sampling method used. As a result of this lack of information, many questions arise. We do not know how each of the 600 units was chosen for the sample. Was it a random sample? Did the surveyors open a phone book and begin calling people? We do not know. The statistics are probably very accurate if a random sample was used, but the poll did not reveal this information. The poll claims a margin of error of ±4%, which leads to the belief that the Center must be 95% confident in its statistics. But how could a telephone poll, even one based on a random sample, come up with a ±4% margin of error? Telephone polls leave out teenagers without phones, and some people may lie about their age. In addition, new problems arise if the sample was not random. Were most of the teenagers 13 or 17? A 12th grader would know more about government than an 8th grader. Were they all selected from the same town (maybe even the same school) or from varied areas around the country in many towns? If all of the teenagers were from one town or city (especially the same school), the statistic would be biased. The poll should also have used a larger sample to gain a more accurate statistic, given that it claimed a margin of error of only ±4%.

The statistics may not be exactly right, but the fact that American teenagers do not know enough about the Constitution is probably correct. The point of the message is clear, even though some of the poll's methods were unclear. American teenagers do not know enough about the Constitution, which is a problem, since the future of America lies in our teenagers' hands. Although the point that the statistics make is probably true, the poll would have been much more effective if it had revealed its sampling and selection methods.

You are raising several issues, but they seem jumbled together. Statisticians make a sharp conceptual distinction between those errors that arise as a consequence of randomness and those errors that are caused by real-world influences, whether known or unknown. You see this kind of distinction in the blurb that the New York Times often publishes along with major surveys---we read one of these blurbs in class---where the notion of margin of error is explained for general readers. Margin of error is not only relative to a level of confidence, but both margin of error and level of confidence together are relative to the assumption that unknown influences are negligible.

 

What you seem to be concerned with is what one might call the "absolute reliability" of the results. And this is reasonable. But the point is, you can reason more effectively about absolute reliability by making the conceptual distinctions I referred to. How much error might have arisen by pure chance, if we assume that other confounding influences are absent? Other than this, what real-world factors might throw the results off? If you were in a decision-making position, and there were risks and payoffs, you'd want to consider both. It would help, though, to be able to divide the thinking.
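
On the pure-chance side, the claimed ±4% is at least arithmetically consistent with a sample of 600, assuming a 95% confidence level and a simple random sample. Here is a rough sketch of my own (not the Center's calculation, which was not published):

    from math import sqrt

    # Is a +/-4% margin of error plausible for n = 600 at 95% confidence,
    # assuming a simple random sample?
    n = 600
    z = 1.96  # approximate 95% critical value

    # Conservative margin of error, computed at p = 0.5 (its maximum)
    print(round(z * sqrt(0.5 * 0.5 / n), 3))     # 0.04

    # At the reported 41% who could name the branches of government
    p = 0.41
    print(round(z * sqrt(p * (1 - p) / n), 3))   # 0.039

Of course, this says nothing about the real-world influences (teenagers without phones, misstated ages, and so on), which is exactly the distinction above.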


Landon McGrew
Friday, 25 Sep 1998

Clinton's Approval Ratings Drop

A poll taken on Tuesday showed that President Bill Clinton's job approval rating has dropped from around 65% to 56%. Clinton's personal approval rating also dropped, from 45% to 26%. Of the 1,000 people polled, 62% disapprove of Clinton's personal life. The poll was conducted as a collaboration between a Republican strategist and Democratic pollsters. A total of 1,000 people participated in the poll. This number is far too few to represent the entire United States. For this poll to be accurate, more people should have been polled. This article also does not tell the reader who participated in the poll. Were the participants upper-class white citizens or lower-class minorities? The statistics could be drastically different depending on this fact. One point that strengthens confidence in the poll is the fact that it was taken as a collaboration between Republicans and Democrats. This makes it less likely that the participants were chosen for the purpose of proving a private agenda. Each side keeps the other side honest in who is chosen to be polled.

Why do you think 1000 is "far too few to represent the entire United States"? I don't agree at all---but then again, I have several different takes on "represent." In some senses, 1000 is more than enough. In others, far too few. So explain the way you understand "represent" to show why you think it's too few.
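
One sense in which 1,000 is more than enough: for a simple random sample, the chance error depends almost entirely on the sample size, not on the size of the population being sampled. A rough illustration of my own, using the standard finite-population correction:

    from math import sqrt

    def margin_of_error(n, N, p=0.5, z=1.96):
        # Approximate 95% margin of error for a simple random sample of
        # size n from a population of size N (finite-population correction).
        fpc = sqrt((N - n) / (N - 1))
        return z * sqrt(p * (1 - p) / n) * fpc

    print(round(margin_of_error(1000, 270_000_000), 3))  # 0.031, the whole U.S.
    print(round(margin_of_error(1000, 10_000), 3))       # 0.029, a town of 10,000

So the worry is less about the count of 1,000 than about who those 1,000 were and how they were chosen, which is the other sense of "represent."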


Michael David Smith
Tuesday, 13 Oct 1998

POLITICS SOUTH OF THE BORDER: Polling Article Analysis

In an Aug. 8 article from the Cable News Network (CNN) web site, the Aug. 2 Mexican gubernatorial elections are discussed along with exit polls taken by two well-respected Mexican media outlets. The real election race was between two front-runners--a candidate from the Partido Revolucionario Institucional (PRI) and one from the Partido Accion Nacional (PAN). The controversy in the election lies with the PRI, which has been notorious for fixing elections in its favor, and has thus ruled Mexico's government for over seventy years. The article focuses on three key races--elections in the states of Aguascalientes, Oaxaca and Veracruz. Televisa and Television Azteca, the two networks that ran the exit polls, came up with conflicting results. In the Oaxaca race, both networks arrived at different margins of victory for the same candidate. Azteca concluded the PRI candidate would win by a narrow margin of six percentage points, while Televisa found the candidate would win by a sizable lead of seventeen points. The drastic difference between the polls demonstrates how inaccurate polling can sometimes be. Perhaps in these polls the samples were not entirely random, and there is a good chance one of the parties may have tampered with the results to influence voter behavior. The PRI has historically had corrupt ties with Mexican television networks, and those networks usually act as a voice for the party's cause. Because of this, the questions could have been asked exclusively of PRI supporters, or the results perhaps fabricated. The motive behind doctoring polls is most likely to affect the outcome of an election. If voters see news coverage that says a certain candidate is more than likely to win, they will probably either vote for that candidate or not vote at all.

Of all the articles gathered, this one seemed most provocative. How the exit polls in the same race could have been SO different is a real mystery. Without knowing how large the samples were, we have no way of knowing the likelihood of the difference occurring by chance. But one would expect a responsible TV network to choose a sample at least large enough to result in a margin of error of 4 or 5 percentage points---i.e., a few hundred. The difference in the two polls is so large that it would appear very unlikely to have arisen by chance from two responsibly conducted exit polls. Could there have been other influences? Yes, one station might have run the poll in a neighborhood that leaned strongly to one party. But again, one would expect a responsible pollster to try to avoid this. I am led to suspect some irresponsibility on the part of one of the stations. I don't have any reason to suspect complicity with one of the parties, or intentional distortions, but I cannot rule this out either. But all I can do is raise questions; to find out what really happened we need information we just don't have.
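
To put rough numbers on "a few hundred," the sketch below solves the usual margin-of-error formula for the required sample size. It then asks how surprising an eleven-point gap between the two networks' estimates of the same margin would be, under the purely hypothetical assumption that each poll sampled about 600 voters; the article gives no sample sizes, so the second calculation is only an illustration.

    from math import sqrt, ceil

    z = 1.96  # approximate 95% critical value

    # Sample size needed for a given margin of error (worst case, p = 0.5)
    def needed_n(moe, p=0.5):
        return ceil((z / moe) ** 2 * p * (1 - p))

    print(needed_n(0.05), needed_n(0.04))   # 385 and 601: "a few hundred"

    # Hypothetical: suppose each exit poll sampled about 600 voters.
    # A rough standard error for one poll's estimate of the winning margin
    # is about 2 * sqrt(p(1-p)/n); the gap between two independent polls'
    # estimates then has standard error about sqrt(2) times that.
    n = 600
    se_one_poll = 2 * sqrt(0.5 * 0.5 / n)
    se_gap = sqrt(2) * se_one_poll
    print(round(se_gap, 3))          # about 0.058
    print(round(0.11 / se_gap, 1))   # the 11-point gap is about 1.9 standard errors

Under that assumption the gap sits right around two standard errors, borderline for chance alone; with the larger samples typical of exit polls it becomes even harder to explain as chance, which is consistent with the suspicion raised above.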


Kat Taylor
Thursday, 24 Sep 1998

"Most Regard Microsoft Favorably"

The New York Times and CBS News conducted a random telephone poll concerning Americans' views of Microsoft. They interviewed 1,126 adults whose telephone numbers were formed by appending random digits to numbers on a list of 42,000 active residential exchanges drawn randomly by computer; this was done in order to reach unlisted numbers as well as listed ones. There is a margin of error of 3 percent either way, but how the margin was calculated was not explained. The data on the pie charts (next page) show percentages that do not add up to 100. This was due to rounding and to those who answered "No opinion." Also, the margin is greater for "subgroups." The article attributes possible errors to differences in the wording and ordering of the questions... a good disclaimer. Nevertheless, the results are believable. One thing I found interesting, however, is that 54 percent said Microsoft uses legal tactics and 39 percent said they don't think it would make a difference if it monopolized the Internet browser business, but when asked whether government investigations should continue, 60 percent said yes. So, while they say they believe a monopoly wouldn't make a difference, there still seems to be some underlying apprehension as to Microsoft's future and how it will affect their own.

Good discussion of what seems to have been a well-conducted poll.
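
The sampling scheme described above (random digits appended to randomly drawn residential exchanges, so that unlisted numbers can be reached) is easy to picture with a toy sketch. This is only an illustration of the general idea; the exchange list here is invented, and the actual Times/CBS procedure is surely more involved.

    import random

    # Toy random-digit dialing: pick an active residential exchange at random,
    # then append four random digits, so unlisted numbers have the same chance
    # of being reached as listed ones. The exchanges below are made up.
    active_exchanges = ["212-555", "504-388", "919-962"]

    def random_phone_number(exchanges):
        exchange = random.choice(exchanges)
        last_four = "".join(str(random.randint(0, 9)) for _ in range(4))
        return exchange + "-" + last_four

    print([random_phone_number(active_exchanges) for _ in range(5)])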


That's all, folks.