September 29, 2016

MysteryPollster Is Back! (Sorta)

And we're back! ...but not really.  

It's been a little over ten years since I forwarded traffic from the MysteryPollster URL to Pollster.com and over six years since we redirected again to HuffPost Pollster. Over the last year, since I joined SurveyMonkey, I neglected to turn off that forwarding.**

However, since the content created here a dozen years ago still has some relevance (thank you, Nate), I decided to "re-open" the main page of this dusty old blog, cobwebs and all. You'll probably find what you're looking for in the FAQ links on the right, though the full archives are also accessible below.

Alas, my day job at SurveyMonkey leaves no time to update this site, but you can still follow "MysteryPollster" (aka me) on Twitter and read the latest analysis from me and my SurveyMonkey colleagues on our Election Tracking Blog.

**Truth is, the underlying pages were there all along and accessible via Google search. But I made them too hard to find.

 

Posted by Mark Blumenthal on September 29, 2016 at 11:58 AM | Permalink | Comments (0)

August 29, 2006

The NYT Reader's Guide to Polls

Jack Rosenthal, a former senior editor of the New York Times, filled in as the guest "Public Editor" this past Sunday and devoted the column to a remarkable "Reader's Guide to Polls." The column (which also includes a kind reference to MP's coverage of the AMA's online Spring Break study) provides a helpful sampler of the various sources of imprecision in public opinion polls.  It is a worthy general primer, but as with any attempt to condense a complex subject into a very small space, a few of the items he covered would probably benefit from a more complete explanation. 

One example involves the reference by Andrew Kohut of the Pew Research Center to some findings from their polls on gay marriage:

The order of questions is another source of potential error. That's illustrated by questions asked by the Pew Research Center. Andrew Kohut, its president, says: "If you first ask people what they think about gay marriage, they are opposed. They vent. And if you then ask what they think about civil unions, a majority support that."

Those intrigued by that particular finding should definitely download the November 2003 Pew report that documented the experiment.  Here's the key passage:

Granting some legal rights to gay couples is somewhat more acceptable than gay marriage, though most Americans (51%) oppose that idea. Public views on giving legal rights to gay and lesbian couples depend a good deal on the context in which the question is asked.  On the survey, half of respondents were asked their views on civil unions after being asked about gay marriage, and half were asked the questions in the reverse order.  When respondents have already had the opportunity to express their opposition to gay marriage on the survey, more feel comfortable with allowing some legal rights as an alternative.  But when respondents are asked about legal rights without this context, they draw a firmer line.

[Pew Research Center chart: support for civil unions by question order]


This context difference has little effect on core support and opposition to gay marriage itself, which is opposed by nearly two-to-one regardless of how the questions are sequenced.  But opponents of gay marriage are much more willing to accept the idea of some legal rights after they have had the opportunity to express their opposition to gay marriage. The percent favoring legal rights rises to 45% in this context, while just 37% favor the idea alone.  Put in other words, opponents of gay marriage are much more likely to accept allowing some legal rights when they have already had the opportunity to express their opposition to gay marriage itself.

Note also that the Pew surveys have shown a modest increase in support for both gay marriage and civil unions in surveys conducted since 2003.

Another topic, brought to my attention by a very alert reader, concerned this passage from the Rosenthal piece: 

Respondents also want to appear to be good citizens. When the Times/CBS News Poll asks voters if they voted in the 2004 presidential election, 73 percent say yes. Shortly after the election, however, the Census Bureau reported that only 64 percent of the eligible voters actually voted.

Ironically, as the reader points out, the Census Bureau's estimate of turnout is itself based on a survey, in this case the Current Population Survey, which is prone to the same sort of measurement error.  Professor Michael McDonald of George Mason University produces his own turnout estimates based on aggregate population and vote statistics.  His estimate of turnout among eligible voters in 2004 was closer to 61%, not 64%.  The Census Bureau explains the imprecision of this particular estimate in a footnote (#2):   

The estimates in this report (which may be shown in text, figures, and tables) are based on responses from a sample of the population and may differ from actual values because of sampling variability or other factors.

The "other factor" in this case is the same phenomenon that results in the over-reporting of voting in the NYT/CBS polling -- respondents wanting to be good citizens.  I suppose this example teaches that vetting survey results for publication is not as easy as we might imagine. 

Speaking of which, the Rosenthal piece also included this news:

The Times recently issued a seven-page paper on polling standards for editors and reporters. "Keeping poorly done survey research out of the paper is just as important as getting good survey research into the paper," the document said.

True enough.  But what standards will the Times now apply?  If that seven-page document has been released into the public domain, I haven't seen it.  ABC News puts its Methodology and Standards guide online.  Why not America's "newspaper of record"?

Posted by Mark Blumenthal on August 29, 2006 at 05:03 PM in Divergent Polls, Measurement Issues | Permalink | Comments (4)

August 22, 2006

Vacation

And finally . . .

As readers may have guessed from the lack of posts the last few days, I am trying to take a much needed vacation break this week.  I interrupted it today to finish up a few items I did not quite get to last week or -- in the case of the push polling post -- that seemed important enough to justify the interruption.  My family probably disagrees.

And speaking of family, I am going to try hard to stay away from the blog for the rest of the week.  I'll be back on Monday, and you will definitely want to stay tuned next week.  Things are about to get a lot more interesting around here.  Sorry to be a tease (again), but stay tuned . . .

Posted by Mark Blumenthal on August 22, 2006 at 01:00 PM in MP Housekeeping | Permalink | Comments (1)

Crosstabs.org

Last week I discovered an interesting new blog devoted to political polling called Crosstabs.org.  Actually, Crosstabs.org is something of a blog within a blog, a site nestled within the conservative site RedState.com.  It combines frequent posts from blogger Gerry Daly -- who used to blog at his own site, Dalythoughts, and comment from time to time here on MP -- with an interesting twist.  The new site will include occasional contributions from five Republican campaign pollsters:  Robert Moran of Strategy One, Bob Ward of Fabrizio, McLaughlin & Associates, Brent McGoldrick of Grassroots Targeting, Bill Cullo of Qorvis Communications and Rob Autry of Public Opinion Strategies.

If imitation is the sincerest form of flattery, then MP certainly welcomes the presence of more professional pollsters in the blogosphere, regardless of their political persuasion.  When it comes to methodology, we all do things a bit differently, and readers will benefit from having more perspectives online.  Take the issue of the "incumbent rule," for example.  In their first week, the pollsters at Crosstabs.org have posted some thoughts worth reviewing here, here and here.

Now, obviously, Crosstabs.org will handicap polls from a conservative perspective (just as Dalythoughts did during the 2004 cycle). Chris Bowers and his colleagues at MyDD and Ruy Teixeira at Donkey Rising have long done the same from the liberal side of the blogosphere.  And while I try to keep the handicapping and commentary as neutral here as I can, there is no doubt that I am a Democratic campaign pollster.  So a little balance is not a bad thing. 

Welcome to the neighborhood, Crosstabs.org!

Posted by Mark Blumenthal on August 22, 2006 at 12:57 PM in Incumbent Rule, Polling & the Blogosphere, Pollsters | Permalink | Comments (4)

So What *Is* A Push Poll?

Over the weekend, Greg Sargent of TPMCafe reported on what he considers "push polling, no question," involving some calls that trash two Democratic candidates for Congress, Kirsten Gillibrand in New York's 20th District and, more recently, Patty Weiss in Arizona's 8th District.

With all due respect to Sargent and his source, Mickey Carroll of the Quinnipiac University Polling Institute, both are using the wrong definition of "push polling." It is certainly more than poll questions that feed "the negative stuff," as Carroll puts it.  A true push poll is not a poll at all.  It is a telemarketing smear masquerading as a poll.

Back in February, in commenting on a different set of calls made, ironically, into the very same New York 20th District, I described real push polling in detail:

Many organizations have posted definitions (AAPOR, NCPP, CMOR, CBS News, Campaigns and Elections, Wikipedia), but the important thing to remember is that a "push poll" is not a poll at all. It's a fraud, an attempt to disseminate information under the guise of a legitimate survey. The proof is in the intent of the person doing it.

To understand what I mean, imagine for a moment that you are an ethically challenged political operative ready to play the hardest of hardball. Perhaps you want to spread an untruth about an opponent or "rumor" so salacious or farfetched that you dare not spread it yourself (such as the classic lie about John McCain's supposed "illegitimate black child"). Or perhaps your opponent has taken a "moderate" position consistent with that of your boss, but likely to inflame the opponent's base (such as a Republican voting to raise taxes or a Democrat supporting "Bush's wiretapping program").

You want to spread the rumor or exploit the issue without leaving fingerprints. So you hire a telemarketer to make phone calls that pretend to be a political poll. You "ask" only a question or two aimed at spreading the rumor (example: "would you be more or less likely to support John McCain if you knew he had fathered an illegitimate child who was black?"). You want to make as many calls as quickly as possible, so you do not bother with the time consuming tasks performed by most real pollsters, such as asking a lot of questions or asking to speak to a specific or random individual within the household.

Again, the proof is in the intent: If the sponsor intends to communicate a message to as many voters as possible rather than measure opinions or test messages among a sample of voters, it qualifies as a "push poll."

We can usually identify a true push poll by a few characteristics that serve as evidence of that intent. "Push pollsters" (and MP hates that term) aim to reach as many voters as possible, so they typically make tens or even hundreds of thousands of calls. Real surveys usually attempt to interview only a few hundred or perhaps a few thousand respondents (though not always). Push polls typically ask just a question or two, while real surveys are almost always much longer and typically conclude with demographic questions about the respondent (such as age, race, education, income). The information presented in a true push poll is usually false or highly distorted, but not always. A call made for the purposes of disseminating information under the guise of a survey is still a fraud - and thus still a "push poll" - even if the facts of the "questions" are technically true or defensible.

So it is not just about questions that "push" respondents one way or another, not just about being negative, not even about lying (although lying on a poll is certainly an ethical transgression).  It is about something that is not really a survey at all.

The calls that the Albany Times Union reported do not fit the definition of push polling.  First, the calls involved more than just a question or two.  They included a series of "fairly innocuous questions," such as "whether the country is headed in the right direction," Bush's job rating and the initial congressional vote.  Second -- and this is a big clue -- one respondent reports that he hung up in anger one night, "only to have a different person call back the next night asking him to finish answering the questions (he did)."  That sort of "call back" is something a real pollster would do but a "push pollster" would never bother with.  Third, the Times Union's reporting plausibly traces the calls to the Tarrance Group, a polling firm that has long conducted legitimate internal polling for Republican campaigns. 

I am in no position to evaluate the substance of the attacks reportedly made in the calls in NY-20 or AZ-08, and I will certainly not try to defend them.  The attacks tested in those surveys may well have been untrue, distorted or unfair.  If so, they deserve the same sort of condemnation we would give if they were delivered in a television or radio ad or in an attack made in a debate.  If the attacker is lying, it is unethical regardless of the mode.  A television advertisement should not lie and neither should a pollster.  But a lie alone does not a "push poll" make.

Is this just a semantic distinction?  I don't think so.  Just about every campaign pollster, Democrat and Republican, uses surveys to test negative messages.  If you think negative ads by Democrats, including these examples, were produced without benefit of survey-based message testing, you're dreaming.  If we choose to define a "push poll" as a survey that merely tests "the negative stuff," then we better be ready to accuse just about every competitive campaign of the same "dirty tricks."

If a pollster lies in a real survey, that's sleazy and wrong.  If candidates distort the truth, let's call them on it.  But if we confuse negative campaigning -- or the survey research that supports it -- with the dirty tricks of true "push polling" then we too are distorting the truth.

Posted by Mark Blumenthal on August 22, 2006 at 12:21 PM in Push "Polls" | Permalink | Comments (5)

Another Online Poll on Online Activity

Last week's "Numbers Guy" column by Carl Bialik of the Wall Street Journal Online looked at another online survey about an online activity, in this case about online alcohol sales to teenagers.  Bialik noticed something in the release that the other outlets should have at least checked.  The "online" sample was not drawn from all teenagers, but rather from teenagers who had volunteered to participate in online surveys.  It was a non-random online survey of an online activity, something with the potential to create a significant bias. 

I will let Bialik take it from there: 

People who agree to participate in online surveys are, by definition, Internet users, something that not all teens are. (Also, people who actually take the time to complete such surveys may be more likely to be active, or heavy, Internet users.) It's safe to say that kids who use the Internet regularly are more likely to shop online than those who don't. Teenage Research Unlimited told me it weighted the survey results to adjust for age, sex, ethnicity and geography of respondents, but had no way to adjust for degree of Internet usage.

Regardless, the survey found that, after weighting, just 2.1% of the 1,001 respondents bought alcohol online -- compared with 56% who had consumed alcohol. Making the questionable assumption that their sample was representative of all Americans aged 14 to 20 with access to the Internet -- and not just those with the time and inclination to participate in online surveys -- the researchers concluded that 551,000 were buying alcohol online.

Bialik goes on to raise even more fundamental problems with the release from the survey sponsor -- the Wine and Spirits Wholesalers of America -- a group that, according to Bialik, "has long fought efforts to expand online sales of alcohol."  For one, their headline claims that "Millions of Kids Buy Internet Alcohol," while even the questionable survey estimate adds up to just 551,000. 
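For readers who want to check the arithmetic, here is a rough sketch in Python. The population base is my own back-of-the-envelope inference from the release's numbers, not a figure reported by the sponsor:

# Back-of-the-envelope check of the WSWA figures (the base is inferred, not reported).
share_buying_online = 0.021                 # 2.1% of weighted respondents
reported_buyers = 551000                    # the release's estimate

implied_base = reported_buyers / share_buying_online
print(round(implied_base))                  # about 26.2 million online 14-to-20-year-olds

# Even taking the survey at face value, 2.1% of that base is roughly 551,000 --
# well short of the "millions" in the headline.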

One point not raised in Bialik's excellent piece is that the survey reports a margin of error ("plus or minus three percentage points").  The margin of error is a measure of random sampling error, which applies only where there is a random sample.  This survey was based on a sample drawn from a volunteer panel, not a random sample.  I have raised this issue before (here and here).   Yes, random sample surveys face challenges of their own, but if a sound statistical basis exists for reporting a "margin of error" for non-probability samples, I am not yet aware of it. 
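For reference, the conventional "plus or minus three points" figure for a sample of about 1,000 comes from the textbook formula for a simple random sample. A minimal sketch follows (95% confidence, worst-case proportion of 50%); the point is that the formula assumes random selection, which a volunteer panel does not provide:

import math

n = 1001                                    # reported number of respondents
p = 0.5                                     # worst-case proportion
moe = 1.96 * math.sqrt(p * (1 - p) / n)     # 95% confidence, simple random sample
print(round(moe * 100, 1))                  # about 3.1 percentage points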

Posted by Mark Blumenthal on August 22, 2006 at 11:45 AM in Internet Polls, Sampling Error | Permalink | Comments (1)

August 21, 2006

CT: Party ID from Quinnipiac Poll

In the busy run-up to a long-needed vacation, I have not had a chance to write about the new Quinnipiac Connecticut poll released last week.  However, I did want to note (and promote) the very helpful comment from reader Alan, which provides some useful information for those scrutinizing the results of that survey by party ID (which seems to be just about everyone):

In Quinnipiac's August 17, 2006 poll in Connecticut, the results are reported by party affiliation, but don't include the % of total respondents in each party, making it difficult to tell how much the Democrat, Republican and Independent voters weigh in the totals.

In case anyone else is looking for this, I asked Quinnipiac if they could provide this information, and this is what they gave me:

-------

Generally speaking, do you consider yourself a Republican, a Democrat, an Independent, or what?

LIKELY VOTERS

Republican 25%
Democrat 36
Independent 34
Other 4
DK/NA 1

Thanks, Alan!
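For anyone wondering how these shares figure into the toplines: the overall result is simply the candidate's support within each party group, weighted by the shares above. A minimal sketch with hypothetical candidate numbers (the real crosstabs are in the Quinnipiac release):

# Party shares among likely voters, per Quinnipiac (via Alan).
shares = {"Republican": 0.25, "Democrat": 0.36, "Independent": 0.34, "Other/DK": 0.05}

# Hypothetical support for one candidate within each group (not Quinnipiac's numbers).
support = {"Republican": 0.20, "Democrat": 0.60, "Independent": 0.45, "Other/DK": 0.40}

topline = sum(shares[g] * support[g] for g in shares)
print(round(topline * 100))   # about 44% with these made-up numbers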

Posted by Mark Blumenthal on August 21, 2006 at 04:37 PM in Polls in the News | Permalink | Comments (0)

August 16, 2006

More on the CT Exit Poll Experiment

Today I want to catch up and fill in a few details provided by our intrepid reader/reporter Melanie about that experimental CBS/New York Times exit poll conducted last week in Connecticut.   As regular readers may recall, Melanie first brought the exit poll to our attention after being interviewed last Tuesday.   I asked her what she remembered about the experience, starting with the interviewer.  Here is her report: 

The poll person was a young guy, maybe 20 - looked like a college student (tall, a little messy, scraggly hair, very diffident). I think he had some sort of ID around his neck or clipped to his pocket, but I didn't notice what it said  (he was in the midst of a discussion with the woman running the voting site about where he could be set up - she said she had just gotten a faxed letter from someone giving this young guy permission to be there, wherever he wanted to be as long as he didn't interrupt anyone, so she told him he could stay out of the sun - he was about 5 feet from the door of the site).

Melanie's experience gives us a unique window into the real-world challenge of trying to select voters randomly as they exit a polling place.  She happened to overhear the interviewer's most important interaction of the day, the one that enabled him to stand just outside the door of the polling place.  Had he been forced to stand farther away, his ability to sample exiting voters randomly would have been severely compromised.  The post-election report provided by the two companies that conducted the 2004 exit polls (Edison Research and Mitofsky International**) showed that errors in John Kerry's favor were more than twice as large (-12.3) when interviewers were forced to stand 100 feet or more from the door of the polling place than when they could stand right outside the door (-5.3, p. 37).
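For readers unfamiliar with those error figures: as I read the Edison/Mitofsky report, they are "within precinct error" numbers, the exit poll's Kerry-minus-Bush margin compared against the official margin in the same precincts, with negative values meaning the exit poll overstated Kerry. A quick sketch with made-up precinct numbers:

# Sketch of a within-precinct error calculation (my reading of the report's measure).
# Negative values mean the exit poll overstated Kerry relative to the official count.
def within_precinct_error(exit_kerry, exit_bush, official_kerry, official_bush):
    return (official_kerry - official_bush) - (exit_kerry - exit_bush)

print(within_precinct_error(58, 42, 52, 48))   # -12: a large error in Kerry's favor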

At first I assumed this conversation occurred first thing in the morning, just as the polling place opened.  But I asked her to clarify and she said:

I got there about 9, which is early for me, but not for them (they had opened at 6 - the folks inside told me about 10% of the possible voters had already been in by then).

So put it all together and notice the reference to a faxed permission letter that the polling place official "had just gotten."  From the description, it appears that when the interviewer first arrived, the polling place officials would not let him stand near the exit door.  So he presumably called his supervisor, they found a way to fax a letter to the polling place official, and just as Melanie was voting, the official relented and allowed the interviewer to stand near the door.  Up until that time, the interviewer had to try to intercept the first 10% of the day's sample from a less advantageous position. 

Second, notice the clear impression that the interviewer's appearance made on Melanie.  He "looked like a college student" with "messy scraggly hair" and a "diffident" attitude, yet she did not notice what appeared on his ID.  Now, dear reader, ask yourself what you might guess about the politics or personality of that interviewer.  How would you react to an approach from such a person?  Is it possible that your choice -- whether you make eye contact, approach with interest or walk briskly in the opposite direction -- might have some relationship to your politics?  The data in the Edison/Mitofsky report from 2004 strongly suggests that it does.

I will let Melanie continue with her description of how she happened to be interviewed: 

He was alone -- he actually didn't approach me -- I saw a couple of other folks doing his poll and when I walked over to see if he'd ask me to do it too he just handed me the device.  (My voting site is pretty small -- just one voting machine -- with about 700 registered democrats -- in a part of a smallish suburb east of New Haven -- we never get exit polls here). 

An exit poll interviewer is usually instructed to select exiting voters at random using a predetermined selection rate.  In other words, they are given a number (usually between 1 and 10) and told to select, say, every third or every fifth voter.  Some are told to select every voter that exits the polling place.  We do not know the interval used here, although with roughly 700 registered voters it was likely greater than one -- that is, not every voter should have been selected.  We will also never know whether Melanie was one of the voters who should have been selected, but it is awfully interesting that in this instance she approached the interviewer rather than the other way around.
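To make the selection procedure concrete, here is a rough sketch of the interviewer's instructions in principle (the interval and the turnout figure are hypothetical). The key point is that the interviewer counts exiting voters and approaches only those who land on the interval, rather than whoever wanders over:

# Rough sketch of systematic exit poll selection.  Hypothetical numbers.
interval = 3                           # e.g., "approach every third exiting voter"
voters_exiting = range(1, 701)         # roughly 700 voters over the day
selected = [v for v in voters_exiting if v % interval == 0]
print(len(selected))                   # about 233 attempted intercepts

# A voter who walks over and volunteers (as Melanie did) is not part of this
# count, which is exactly how self-selection can creep into the sample.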

The data from the Edison/Mitofsky report on the 2004 exit polls showed that errors in Kerry's favor were nearly three times greater where interviewers had to approach every tenth voter (-10.5) than in smaller precincts where interviewers were instructed to interview every voter or every other voter (-3.6, p. 36).   The clear implication is that where there was more potential for the random selection procedure to break down (either because of more chaos at the voting place or just more room for error), Kerry voters were more likely than Bush voters to volunteer to participate. 

So here we have another anecdote that helps convey the most important challenges in conducting an exit poll.  Some have argued that collecting "almost perfect" random samples outside a polling place is easy.  It is not. 

Now let's hear about the most novel aspect of the CBS/New York Times Connecticut experiment: per Melanie's report, they used an "electronic device" to conduct the interviews.  Exit polls have traditionally been conducted with a paper "secret ballot" given to respondents on a clipboard so they can fill it out and drop it into a "ballot box" without revealing the answers to the interviewer.  In Connecticut, CBS and the Times experimented with something new: 

The device was sort of like what the UPS guys use, except horizontal -- about 11" wide and 7" high. There was a LCD screen making up the middle of it, and the questions were in the center of the screen -- you just tapped the answers with a pencil to register your choices. He didn't get involved at all -- just handed it to me and took it back when I was done.  He seemed to have 2 or 3 of them.  He also had a little thing that looked sort of like a Palm pilot or blackberry that (I think) he inserted into the back of the poll device after it was filled out -- I assumed that was the way each set of info was transmitted to the collection folks, but I could be completely wrong about that.

The downside of the paper ballot is that it forces the interviewer to stop intercepting voters several times during the day to tally up responses and call in respondent-level data, reading off each answer to someone on the other end of the line who enters everything into a computer.  The NEP exit polls (and those done by their forerunner, VNS) typically had interviewers call in only half of the respondent-level data in order to cut down on the phone-call bottleneck.  This new technology automates both the tabulation and transmission, so interviewers can cover the polling place constantly all day, and 100% of their collected data gets transmitted immediately. 

How hard was it to use the electronic device?  Melanie continues:

The last question on the poll was about how easy or difficult the text on the screen was to read, so clearly they are experimenting with that part of it too (it was a little tough to read, especially with sunlight reflecting off the LCD screen -- or maybe that's just my aging boomer eyes).  But it was very user-friendly -- I think it was much quicker than a paper ballot would have been.

One last footnote:  In my speculation about the exit poll problems in 2004, I have given great weight -- probably too much, in retrospect -- to the presence of the network logos on the survey questionnaire, the exit poll "ballot box," and on the interviewer name tags.  Note that Melanie did not notice that network identification until she was holding the device in her hands.  I asked her specifically when she first noticed the logos. Her answer:

I noticed the logo after I had started to read the questions on the device -- it was about 2-3" square, at the top left of the device -- he didn't say anything about it and I just happened to notice it after I had read the first question or 2.  It was the usual CBS logo, but it wasn't a particularly bright color -- it sort of blended in to the dark gray or black color of the device, so it wasn't splashy. I assumed that was on purpose, as was his failure to tell me whose poll it was.

Of course, Melanie was just one respondent, a sample size of one.  Then again, as a colleague of mine once told me, the plural of anecdote is data.

**An MP source reports that Edison-Mitofsky, the company that does the pooled exit polls for the NEP consortium, did not conduct the CBS/New York Times exit poll in Connecticut last week. 

Posted by Mark Blumenthal on August 16, 2006 at 06:49 PM in Exit Polls | Permalink | Comments (3)

August 14, 2006

Incumbent Rule Redux

Time to revisit the "incumbent rule," thanks to Mickey Kaus, who highlighted this observation from Michael Barone's column last week in U.S. News & World Report:

It may be time to revise one of the cardinal rules of poll interpretation--that an incumbent is not going to get a higher percentage in an election than he got in the polls. Lieberman was clocked at 41 and 45 percent in recent Quinnipiac polls; he got 48 percent in the primary election. The assumption has been that voters know an incumbent, and any voter who is not for him will vote against him. But the numbers suggest that Lieberman's campaigning over the last weekend may have boosted his numbers-or that the good feelings many Democratic voters have had for him over the years may have overcome their opposition to his stands on Iraq and foreign policy.

I wrote about the incumbent rule quite a bit in the run-up to the 2004 elections (especially here and here), applied it to the polls in Ohio and then considered how the rule came up short (here and here).  Reconsidering the rule has been buried on my MP to-do list for some time, and while I lack the data to provide conclusive answers, today is as good a day as any to think out loud about some of the key issues involved.

The best-known empirical assessment of this "cardinal" rule was written by Chicago pollster Nick Panagakis for the Polling Report in 1989.  He gathered 155 final polls spanning the period from the 1970s to 1988 (though most came from 1986 and 1988) and found that in 82% of the polls, the majority of the undecided broke to the challenger.  Note that this statistic tells us how many polls showed undecideds breaking for challengers, not the proportion of the undecided voters that broke that way.

In September 2004, MyDD's Chris Bowers persuaded Panagakis to share his database and updated it with polls conducted from 1992 to early 2004.  Bowers took the process a step further, calculating the average split of the undecided vote over all the elections.  He noticed something obviously important in retrospect.  The incumbent rule seemed to be weakening (although he had little data from 1996):  80% of the undecided vote broke to challengers in the polls Panagakis collected between 1976 and 1988, but only 60% went to the challenger in the polls Bowers gathered between 1992 and the summer of 2004.   And challengers did worst of all in the polls in 2002 and the spring/summer of 2004 (42% to the incumbent, 58% to the challenger).
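For those who want to see how that "break" is computed, the basic arithmetic compares the final poll to the election result: whatever the challenger gains over his or her poll share, taken as a fraction of the undecided, is the challenger's share of the break (assuming, as these analyses usually do, that decided voters did not switch). A minimal sketch with hypothetical numbers:

# Sketch of the undecided "break" calculation.  Hypothetical figures.
poll_incumbent, poll_challenger = 47, 43        # final poll
result_incumbent, result_challenger = 51, 49    # election result

undecided = 100 - poll_incumbent - poll_challenger                   # 10 points
challenger_share = (result_challenger - poll_challenger) / undecided
print(round(challenger_share * 100))            # 60% of the undecided broke to the challenger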

I have not attempted the same sort of comprehensive review of all of final polls from the fall of 2004, but on the final national presidential surveys an average of roughly 40% of the undecided vote broke toward challenger Kerry.  And the break of undecided voters in battleground states looks closer to 50/50.  "According to the exit polls," as Slate's David Kenner and Will Saletan pointed out, "Bush got 46 percent of those who made up their minds in the last week of the campaign and 44 percent of those who made up their minds in the final three days."

One question I have wondered about is whether the apparent weakening between the 1980s and 1990s could have been an artifact of changes in the nature of pre-election polling or the particular races included in the database.  For example, did the 1990s see more polling in contests for Senate, Governor and local offices and less in presidential races?   Did long-term changes in the timing or volume of pre-election polling affect the statistics? 

The more important question is why undecided voters have stopped breaking toward challengers in the final week of the campaign.  There are many theories. 

  • One possibility is that post-9/11 politics makes voters more reluctant to take a chance on challengers.   Are undecided voters more averse to change given the current emphasis on war and terrorism in our campaigns?   Some of the high-profile Senate and gubernatorial races saw a break favoring incumbents in 2002 (though the incumbents were not exclusively Republican).  Consider also this bit of purely anecdotal evidence from MyDD's Matt Stoller:

I phone-banked a bunch of undecideds who in all likelihood flipped to Lieberman in the waning days of the campaign.  "I hate the war, I hate Bush, but I'm just not sure we can pull out right now" was the way they put it.

  • There is also the alternative theory Barone articulated in his column last week:

The left is noisy, assertive, in your face, quick to declare its passionate support. Voters on the right and in the center may be quieter but then stubbornly resist the instruction of the mainstream media and show up on Election Day and vote Republican, as they did in 2004, or for Lieberman, as some apparently did this week.

  • Or could this change reflect a change in the nature of campaigning?  Negative television ads were a rarity in the 1970s, but have grown increasingly commonplace in the years since.  Has the willingness of incumbents to "go negative" limited the ability of challengers to make the race a referendum on the incumbent and shifted the attention of late-breaking voters to the alleged shortcomings of the challengers? 

Unfortunately, I have no answers tonight.  What is clear is that past trends are not much help in interpreting the pre-election polls of 2006.  How the undecideds will "break" in the final days of the 2006 campaign is anyone's guess.

UPDATE 8/15:   Readers have made a number of points worth reviewing in the comments section about possible shortcomings in the speculation above, as well as with the previous analysis of the incumbent rule.   One thing worth noting is that academic political scientists and survey researchers have devoted little, if any, attention to the incumbent rule.  We certainly have a lot to learn about this "cardinal rule," despite its past popularity with campaign pollsters, including yours truly.   

Posted by Mark Blumenthal on August 14, 2006 at 11:17 PM in Incumbent Rule, The 2006 Race | Permalink | Comments (13)

August 11, 2006

New Polls from Fox, AP

Apologies for slow posting since Wednesday - it's been a busy week.   Taking a step back from the Connecticut primary, two new national polls were released this week, and both show slightly lower job ratings for George Bush than most of the other national surveys released in late July or early August.   Do these indicate a new downturn for Bush?  Our friend Charles Franklin has crunched and graphed the numbers and says no.

The most recent Fox News/Opinion Dynamics poll (story, results) puts the Bush job rating at 36% approve, 56% disapprove.  However, as Franklin points out, the approval rating on this latest survey is exactly the same as their last poll fielded July 11-12, but the disapproval rating is three points higher.   His post goes into more detail on the comparison.

The poll getting more attention today comes from AP/IPSOS and shows a drop in Bush's approval from 36% to 33%, "matching his low in May," as the AP story puts it.  Franklin takes a close look and concludes the latest AP result is a statistical outlier from the most recent poll trend.  That is, other national polls have shown Bush's job rating holding steady or slightly rising, while the AP result is a sharp departure.   He continues:

It is possible that the AP result signals a sharp break from the past. Given the track record of outliers in these data (over 1100 polls in all) that is not likely. Far more likely is that new polls will confirm that the trend has changed by modest amounts, either up or down, and that the next poll will be closer to 38.7% (both above and below) than to 33%.

This is not to say that the trend cannot change. We have seen three very clear examples of reversals in approval trend since January 2005: in November 2005, February 2006 and May 2006. At some point approval may again trend down (or more sharply up, for that matter.) But it would not be a statistically good bet that the AP poll is where approval really stands right now.

Read it all. 

 

Posted by Mark Blumenthal on August 11, 2006 at 05:00 PM in President Bush | Permalink | Comments (7)