
Philippine Election Polls: Accurate Surveys or Bad Forecasts – Remember Truman!

April 27, 2010

Another survey result, another wide lead over whatever. Then there’s this presumptuousness and arrogance that if Noynoy Aquino loses, there must automatically have been massive cheating. Assuming a smooth, quick, and peaceful election – if the Bradley effect is not enough, there’s also the 1948 US presidential election.

It was predicted that Thomas Dewey would defeat Harry S. Truman in the 1948 US presidential election (just like Noynoy defeating Gordon or Villar). Well, I don’t want to spoil the story if you don’t know it yet. For those who do, just read along anyway.

The site About.com provides a good narrative of what happened during the 1948 US presidential election. Here goes:

While completing Roosevelt’s term, Truman was responsible for making the fateful decision to end the war with Japan by dropping atomic bombs on Hiroshima and Nagasaki; creating the Truman Doctrine to give economic aid to Turkey and Greece as part of a containment policy; helping the U.S. make a transition to a peace-time economy; blocking Stalin’s attempts to conquer Europe, by instigating the Berlin airlift; helping create the state of Israel for Holocaust survivors; and fighting for strong changes toward equal rights for all citizens.

Yet the public and newspapers were against Truman. They called him a “little man” and often claimed he was inept. Perhaps the main reason for the dislike of President Truman was that he was very much unlike their beloved Franklin D. Roosevelt. Thus, when Truman was up for election in 1948, many people did not want the “little man” to run.

Don’t Run!

Political campaigns are largely ritualistic…. All the evidence we have accumulated since 1936 tends to indicate that the man in the lead at the beginning of the campaign is the man who is the winner at the end of it…. The winner, it appears, clinches his victory early in the race and before he has uttered a word of campaign oratory.1
— Elmo Roper

For four terms, the Democrats had won the presidency with a “sure thing” – Franklin D. Roosevelt. They wanted another “sure thing” for the presidential election of 1948, especially since the Republicans were going to choose Thomas E. Dewey as their candidate. Dewey was relatively young, seemed well-liked, and had come very close to Roosevelt for the popular vote in the 1944 election.

And though incumbent presidents usually have a strong chance to be re-elected, many Democrats didn’t think Truman could win against Dewey. Though there were serious efforts to get famed General Dwight D. Eisenhower to run, Eisenhower refused. Though many Democrats were not happy, Truman became the official Democratic candidate at the convention.

Give ‘Em Hell Harry vs. The Polls

The polls, reporters, political writers – they all believed Dewey was going to win by a landslide. On September 9, 1948, Elmo Roper was so confident of a Dewey win that he announced there would be no further Roper Polls on this election. Roper said, “My whole inclination is to predict the election of Thomas E. Dewey by a heavy margin and devote my time and efforts to other things.”2

Truman was undaunted. He believed that with a lot of hard work, he could get the votes. Though it is usually the contender and not the incumbent that works hard to win the race, Dewey and the Republicans were so confident they were going to win – barring any major faux pas – that they decided to make an extremely low-key campaign.

Truman’s campaign was based on getting out to the people. While Dewey was aloof and stuffy, Truman was open, friendly, and seemed one with the people. In order to talk to the people, Truman got in his special Pullman car, the Ferdinand Magellan, and traveled the country. In six weeks, Truman traveled approximately 32,000 miles and gave 355 speeches.3

On this “Whistle-Stop Campaign,” Truman would stop at town after town and give a speech, have people ask questions, introduce his family, and shake hands. From his dedication and strong will to fight as an underdog against the Republicans, Harry Truman acquired the slogan, “Give ’em hell, Harry!”

But even with perseverance, hard work, and large crowds, the media still didn’t believe Truman had a fighting chance. While President Truman was still on the road campaigning, Newsweek polled 50 key political journalists to determine which candidate they thought would win. Appearing in the October 11 issue, Newsweek stated the results: all 50 believed Dewey would win.

The Election

By election day, the polls showed that Truman had managed to cut Dewey’s lead, but all media sources still believed Dewey would win by a landslide.

As the reports filtered in that night, Truman was ahead in the popular votes, but the newscasters still believed Truman didn’t have a chance.

By four the next morning, Truman’s success seemed undeniable. At 10:14 a.m., Dewey conceded the election to Truman.

Since the election results were a complete shock to the media, the Chicago Daily Tribune got caught with the headline “DEWEY DEFEATS TRUMAN.” The photograph with Truman holding aloft the paper has become one of the most famous newspaper photos of the century.

Notes

1. Elmo Roper, as quoted in David McCullough, Truman (New York: Simon & Schuster, 1992), 657.
2. Ibid., 657.
3. John Hollister Hedley, Harry S. Truman: The ‘Little’ Man from Missouri (New York: Barron’s Educational Series, 1979), 183.

Can the Philippines 2010 Presidential Elections be a Repeat of the 1948 US Presidential Elections?

There is still a possibility that the Gordon campaign can build up like a steamroller in the next 10 days.

Thus, if Aquino loses because Gordon wins, I seriously disagree that it would mean massive cheating. The calculus just doesn’t add up. For one, Gordon does not have the money, machinery, and wherewithal to cheat. If anything, all the political infomercials aired prior to the campaign period were blatant campaign violations – cheating. Who benefited from the airing of all these ads? Exactly – ABS-CBN 😆 C’mon guys, are we still going to fool each other?

The winds changed overnight in the 1948 US Presidential elections – I don’t see any reason why it can’t happen in the Philippines. Truman won – not because of massive cheating.

It is also a stretch to compare Aquino to Truman on the strength of the “little man” label or the charge of ineptness, given that Truman, unlike Noynoy, had a lengthy list of achievements.

Also, Aquino is more likely to say “let me get a consensus” than “the buck stops here” – the latter phrase is more consistent with Gordon’s persona.

Noynoy Aquino and the LP, like Dewey and the Republicans, are so confident they are going to win – barring any major faux pas – that they are running an extremely low-key campaign and avoiding debates – just like Erap Estrada in a previous presidential election.

10 days till election day (as of writing) – anything can still happen. Pollster.com points out the existence of divergent polls as well:

If a single survey with a sampling error of 3% (based on a 95% confidence level) shows Bush at 49%, we know with 95% confidence that if every voter in the country were interviewed for that survey, Bush’s support would lie somewhere between 46% and 52%. If the same survey has Kerry at 43%, his support could range from 40% to 46%. Thus, the 49% to 43% survey tells us with 95% confidence that the race could be anywhere from a dead heat to a 12-point Bush lead.

There is the 95% – and there’s the 5%. Truman’s narrative belongs to the 5% – and it can be the narrative of the best man for the job, too – Gordon.
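The arithmetic in the Pollster.com quote above is easy to reproduce. A minimal sketch in Python, taking the leader at 49%, the runner-up at 43%, and a ±3-point margin of error on each estimate (all figures from the quoted example):

```python
# Quick check of the quoted example's arithmetic: two candidates,
# each estimate carrying a +/-3 percentage-point margin of error.
leader, trailer, moe = 49, 43, 3  # survey shares and margin, in points

worst_lead = (leader - moe) - (trailer + moe)  # leader's low vs. trailer's high
best_lead = (leader + moe) - (trailer - moe)   # leader's high vs. trailer's low

print(f"lead could be anywhere from {worst_lead} to {best_lead} points")
# → lead could be anywhere from 0 to 12 points
```

So a reported 6-point lead is, at 95% confidence, consistent with anything from a dead heat to a 12-point blowout.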

Understand The Polls Some More

What are the chances that SWS and Pulse Asia can be wrong – or, for that matter, how could SWS, Pulse Asia, and other polling agencies get it wrong? Here are some possibilities:

Potential for inaccuracy

Polls based on samples of populations are subject to sampling error, which reflects the effects of chance and uncertainty in the sampling process. The uncertainty is often expressed as a margin of error. The margin of error is usually defined as the radius of a confidence interval for a particular statistic from a survey. One example is the percent of people who prefer product A versus product B. When a single, global margin of error is reported for a survey, it refers to the maximum margin of error for all reported percentages using the full sample from the survey. If the statistic is a percentage, this maximum margin of error can be calculated as the radius of the confidence interval for a reported percentage of 50%. Others suggest that a poll with a random sample of 1,000 people has a margin of sampling error of about 3% for the estimated percentage of the whole population.

A 3% margin of error means that if the same procedure is used a large number of times, 95% of the time the true population value will fall within the sample estimate plus or minus 3%. The margin of error can be reduced by using a larger sample; however, if a pollster wishes to reduce the margin of error to 1%, they would need a sample of around 10,000 people.[3] In practice, pollsters need to balance the cost of a large sample against the reduction in sampling error, and a sample size of around 500–1,000 is a typical compromise for political polls. (Note that to get complete responses it may be necessary to include thousands of additional participants.)[4]
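The tradeoff described above follows from the standard formula for the margin of error of a proportion. A short sketch, assuming simple random sampling and the worst case p = 0.5:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, for a proportion p
    estimated from a simple random sample of size n."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

for n in (500, 1000, 10000):
    print(f"n = {n:>6}: +/- {margin_of_error(n):.1f} points")
# n = 500 gives roughly +/-4.4 points, n = 1000 roughly +/-3.1,
# and n = 10000 roughly +/-1.0 - cutting the error to a third
# costs ten times the sample.
```

Because the error shrinks only with the square root of n, the jump from a 3% to a 1% margin really does demand a sample roughly ten times larger, which is why pollsters settle for 500–1,000 respondents.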

Another way to reduce the margin of error is to rely on poll averages. This makes the assumption that the procedure is similar enough between many different polls and uses the sample size of each poll to create a polling average.[5] An example of a polling average can be found here: 2008 Presidential Election polling average. Another source of error stems from faulty demographic models by pollsters who weight their samples by particular variables such as party identification in an election. For example, if you assume that the breakdown of the US population by party identification has not changed since the previous presidential election, you may underestimate a victory or a defeat of a particular party candidate that saw a surge or decline in its party registration relative to the previous presidential election cycle.
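A sample-size-weighted polling average of the kind described above can be sketched in a few lines; the poll numbers here are invented purely for illustration:

```python
# Hypothetical polls for one candidate: (reported percent, sample size).
polls = [(48.0, 1000), (51.0, 1500), (47.0, 800)]

# Weight each poll's result by its sample size, so larger (lower-error)
# polls pull the average harder than small ones.
total_n = sum(n for _, n in polls)
average = sum(pct * n for pct, n in polls) / total_n

print(f"weighted average: {average:.1f}% across {total_n} respondents")
# → weighted average: 49.1% across 3300 respondents
```

The combined 3,300 "respondents" carry a smaller sampling error than any single poll – but only under the stated assumption that the polls' procedures are comparable.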

Over time, a number of theories and mechanisms have been offered to explain erroneous polling results. Some of these reflect errors on the part of the pollsters; many of them are statistical in nature. Others blame the respondents for not giving candid answers (e.g., the Bradley effect, the Shy Tory Factor); these can be more controversial.

Nonresponse bias

Since some people do not answer calls from strangers, or refuse to answer the poll, poll samples may not be representative samples from a population. Because of this selection bias, the characteristics of those who agree to be interviewed may be markedly different from those who decline. That is, the actual sample is a biased version of the universe the pollster wants to analyze. In these cases, bias introduces new errors, one way or the other, that are in addition to errors caused by sample size. Error due to bias does not become smaller with larger sample sizes, because taking a larger sample size simply repeats the same mistake on a larger scale. If the people who refuse to answer, or are never reached, have the same characteristics as the people who do answer, then the final results should be unbiased. If the people who do not answer have different opinions then there is bias in the results. In terms of election polls, studies suggest that bias effects are small, but each polling firm has its own formulas on how to adjust weights to minimize selection bias.[6]

Response bias

Survey results may be affected by response bias, where the answers given by respondents do not reflect their true beliefs. This may be deliberately engineered by unscrupulous pollsters in order to generate a certain result or please their clients, but more often is a result of the detailed wording or ordering of questions (see below). Respondents may deliberately try to manipulate the outcome of a poll by, for example, advocating a more extreme position than they actually hold in order to boost their side of the argument, or by giving rapid and ill-considered answers in order to hasten the end of their questioning. Respondents may also feel under social pressure not to give an unpopular answer. For example, respondents might be unwilling to admit to unpopular attitudes like racism or sexism, and thus polls might not reflect the true incidence of these attitudes in the population. In American political parlance, this phenomenon is often referred to as the Bradley effect. If the results of surveys are widely publicized, this effect may be magnified – a phenomenon commonly referred to as the spiral of silence.

Wording of questions

It is well established that the wording of the questions, the order in which they are asked, and the number and form of alternative answers offered can influence the results of polls. For instance, the public is more likely to indicate support for a person who is described by the interviewer as one of the “leading candidates”. That description itself introduces a subtle bias for the named candidate, as does lumping some candidates into an “other” category.[7] Thus comparisons between polls often boil down to the wording of the question. On some issues, question wording can result in quite pronounced differences between surveys.[8][9][10] This can also, however, be a result of legitimately conflicted feelings or evolving attitudes, rather than a poorly constructed survey.[11]

A common technique to control for this bias is to rotate the order in which questions are asked. Many pollsters also split-sample. This involves having two different versions of a question, with each version presented to half the respondents.

The most effective controls, used by attitude researchers, are:

* asking enough questions to allow all aspects of an issue to be covered and to control effects due to the form of the question (such as positive or negative wording), the adequacy of the number being established quantitatively with psychometric measures such as reliability coefficients, and
* analyzing the results with psychometric techniques which synthesize the answers into a few reliable scores and detect ineffective questions.

These controls are not widely used in the polling industry.

Coverage bias

Another source of error is the use of samples that are not representative of the population as a consequence of the methodology used, as was the experience of the Literary Digest in 1936. For example, telephone sampling has a built-in error because in many times and places, those with telephones have generally been richer than those without.

In some places many people have only mobile telephones. Because pollsters cannot call mobile phones (it is unlawful in the United States to make unsolicited calls to phones where the phone’s owner may be charged simply for taking a call), these individuals will never be included in the polling sample. If the subset of the population without cell phones differs markedly from the rest of the population, these differences can skew the results of the poll. Polling organizations have developed many weighting techniques to help overcome these deficiencies, to varying degrees of success. Studies of mobile phone users by the Pew Research Center in the US concluded that “cell-only respondents are different from landline respondents in important ways, (but) they were neither numerous enough nor different enough on the questions we examined to produce a significant change in overall general population survey estimates when included with the landline samples and weighted according to US Census parameters on basic demographic characteristics.”[12]

This issue was first identified in 2004,[13] but came to prominence only during the 2008 US presidential election.[14] In previous elections, the proportion of the general population using cell phones was small, but as this proportion has increased, the worry is that polling only landlines is no longer representative of the general population. In 2003, only 2.9% of households were wireless (cellphone-only), compared to 12.8% in 2006.[15] This results in “coverage error”. Many polling organisations select their sample by dialling random telephone numbers; however, there was a clear tendency for polls which included mobile phones in their sample to show a much larger lead for Obama than polls that did not.[16][17]

The potential sources of bias are:[18]

1. Some households use cellphones only and have no landline. This tends to include minorities and younger voters; and occurs more frequently in metropolitan areas. Men are more likely to be cellphone-only compared to women.
2. Some people may not be contactable by landline from Monday to Friday and may be contactable only by cellphone.
3. Some people use their landlines only to access the Internet, and answer calls only to their cellphones.

Some polling companies have attempted to get around that problem by including a “cellphone supplement”. There are a number of problems with including cellphones in a telephone poll:

1. It is difficult to get co-operation from cellphone users, because in many parts of the US, users are charged for both outgoing and incoming calls. That means that pollsters have had to offer financial compensation to gain co-operation.
2. US federal law prohibits the use of automated dialling devices to call cellphones (Telephone Consumer Protection Act of 1991). Numbers therefore have to be dialled by hand, which is more time-consuming and expensive for pollsters.

An oft-quoted example of opinion polls succumbing to errors was the UK general election of 1992. Despite the polling organizations using different methodologies, virtually all the polls in the lead-up to the vote, and to a lesser extent exit polls taken on voting day, showed a lead for the opposition Labour Party, but the actual vote gave a clear victory to the ruling Conservative Party.

In their deliberations after this embarrassment the pollsters advanced several ideas to account for their errors, including:

Late swing

Voters who changed their minds shortly before voting tended to favour the Conservatives, so the error was not as great as it first appeared.

Nonresponse bias

Conservative voters were less likely to participate in surveys than in the past and were thus under-represented.

The Shy Tory Factor

The Conservatives had suffered a sustained period of unpopularity as a result of economic difficulties and a series of minor scandals, leading to a spiral of silence in which some Conservative supporters were reluctant to disclose their sincere intentions to pollsters.

The relative importance of these factors was, and remains, a matter of controversy, but since then the polling organizations have adjusted their methodologies and have achieved more accurate results in subsequent elections.

For those who want to read up some more on polls, here are some links recommended by Harvard U:


Comments
  1. I watched the Truman movie, very inspiring. Magdilang anghel ka sana…

  2. The better way to gauge whether the survey is genuine is to see for yourself. I was home last March, and the general sentiment then was reflective of the survey results.

  3. The Truman case is a laugh even if you’re not American. Survey people are sometimes so full of themselves that they can actually kid themselves that they’re telling the truth even when they’re lying through their teeth.

  4. ChinoF,

    Being a marketing guy at a major corporation in the US, who has used research firms, I can tell you that professional survey and research firms are serious about their business, their instruments, their credibility, and the objectivity of their results. They provide crucial information that is available no other way. It is a challenging job. They are not full of themselves. Maybe in the Philippines there is a difference, and these marketing firms that couch their findings as research are different. But I can’t let you slander a profession, perhaps just because you don’t like the results they are giving you.

    Joe

  5. Don’t worry, I made sure “sometimes” meant that I’m not hitting every survey firm. Perhaps the partisan interests played into that particular Dewey-Truman survey, and they became too sure of it. Perhaps you’re right, it’s in this country at this time that the survey firms are playing into partisan interests almost openly. I hope that Dewey-Truman situation repeats itself in the May elections.

  6. Pinay Goddess permalink

    “But I can’t let you slander a profession, perhaps just because you don’t like the results they are giving you”.

    You hit the nail right on the head Joe…

  7. Yes, and in the aftermath of the Truman-Dewey saga, there was a specific set of flaws that were identified in the methods and sampling of the surveys that were showing Dewey way ahead; it’s complicated, but the basic problem was that the sample did not have a large enough variety of classes in it, and was skewed towards the people who would have naturally supported Dewey. So the surveys were honestly done, but they were inaccurate.

    Like Joe, I’ve also worked with professional research firms and I can say that they long ago took to heart lessons from episodes like the 1948 polls and avoid making the same mistakes, at least as much as they are able. I have not, however, worked with those firms here in this country, and I can’t vouch for their standards. I’m skeptical, and I get a sense that the surveys are favorable to the people who commission them, but I don’t know that to be a fact. The problem is the local firms are very close-mouthed most of the time about their methodology and who is ordering the surveys, which is partly their right for the sake of protecting their business and intellectual property, but doesn’t help build confidence in their results.

    The real point that everyone should be getting is that, even if a survey is conducted with extreme propriety and according to sound and accepted methodological principles, it is at best a snapshot of a particular moment. In marketing research and planning, the focus that is put on survey results is not on the individual outcomes of any surveys, but on the trend.

    A secondary point that everyone should realize is that the surveys are done from the perspective of utility to the users (those who ask for the survey to be done) rather than the subjects (the people who are asked the questions). They are not, by their basic design, meant to guide people in how they should vote — the fact that voters succumb to a bandwagon effect based on surveys actually makes the surveys less accurate and useful.

  8. Am a stickler for recycled electrons – so, here’s something from http://en.wikipedia.org/wiki/Opinion_poll, accessed 4/28/2010:

    Influence

    By providing information about voting intentions, opinion polls can sometimes influence the behavior of electors, and in his book The Broken Compass, Peter Hitchens asserts that opinion polls are actually a device for influencing public opinion.[27] The various theories about how this happens can be split up into two groups: bandwagon/underdog effects, and strategic (“tactical”) voting.

    A bandwagon effect occurs when the poll prompts voters to back the candidate shown to be winning in the poll. The idea that voters are susceptible to such effects is old, stemming at least from 1884, when the term was reportedly first used in a political cartoon in the magazine Puck.[28] It has remained persistent in spite of a lack of empirical corroboration until the late 20th century. George Gallup spent much effort in vain trying to discredit this theory in his time by presenting empirical research. A recent meta-study of scientific research on this topic indicates that from the 1980s onward the bandwagon effect is found more often by researchers.[29]

    The opposite of the bandwagon effect is the underdog effect. It is often mentioned in the media. This occurs when people vote, out of sympathy, for the party perceived to be “losing” the elections. There is less empirical evidence for the existence of this effect than there is for the existence of the bandwagon effect.[29]

    The second category of theories on how polls directly affect voting is called strategic or tactical voting. This theory is based on the idea that voters view the act of voting as a means of selecting a government. Thus they will sometimes not choose the candidate they prefer on ground of ideology or sympathy, but another, less-preferred, candidate from strategic considerations. An example can be found in the United Kingdom general election, 1997. As he was then a Cabinet Minister, Michael Portillo’s constituency of Enfield Southgate was believed to be a safe seat but opinion polls showed the Labour candidate Stephen Twigg steadily gaining support, which may have prompted undecided voters or supporters of other parties to support Twigg in order to remove Portillo. Another example is the boomerang effect where the likely supporters of the candidate shown to be winning feel that chances are slim and that their vote is not required, thus allowing another candidate to win.

    These effects indicate how opinion polls can directly affect political choices of the electorate. But directly or indirectly, other effects can be surveyed and analyzed across all political parties. Media framing and shifts in party ideology must also be taken into consideration. Opinion polling in some instances is a measure of cognitive bias, which must be considered and handled appropriately in its various applications.

  9. Vox Populi permalink

    Don’t hold your breath. Mark my words. Gordon will lose in the May 2010 presidential election. The Philippines is not USA so the Dewey-Truman thingy will not happen. Majority of the Pinoy electorate are stupid moronic idiots who do not vote with their brains since they don’t have any. Also the corrupt politicians will see to it that they have mentally conditioned the electorate with their rigged surveys, or have bribed enough people to buy their votes, or have rigged the ballots to win the election.

  10. Studies have shown that people respond to surveys using different sets of thinking processes from the ones they use when evaluating real-world options. Why do you think the greater proportion of product launches fail despite the battery of market testing and focus group discussions conducted? It’s because the dynamics of a real marketplace invoke different thought patterns from the ones fired up when one takes a “survey”.

    Same principle here.

  11. I may not like the results, but heck, I have the right to complain, don’t I, especially when those survey results are suspected of being contrived or exaggerated to serve vested interests.

  12. Pinay Goddess permalink

    Granting that all survey firms have hidden agenda to favor a certain candidate or the candidate of the group who commissioned them, how come all survey results of surveys conducted in different periods have almost similar results, with candidates maintaining the same ranking?

    Political survey results are usually used to determine which area a candidate is strong or weak (in terms of voters perception or preference), so their group can improve their campaign strategy. If a candidate is really weak, that will show in all results, regardless of who conducts the survey.

  13. Pinay Goddess permalink

    Granting that all survey firms have hidden agenda to favor a certain candidate or the candidate of the group who commissioned them, how come all survey results of national surveys conducted in different periods have almost similar results, with candidates maintaining the same ranking?

    Political survey results (especially during election time) are used to determine which area a candidate is strong or weak (in terms of voters’ perception or preference), so their group can improve their campaign strategy. If a candidate is really weak, that will show in all results, regardless of who conducts the survey.

  14. Not necessarily true – The SWS and Pulse Asia methodologies and results have been recently contested by another party which had divergent results.

  15. I rather think that the attention given to survey companies this year by you and others will apply good heat. If they understand that the only thing they have to sell going forward is credibility, they will protect it and prove themselves with projections that are accurate within their stated tolerances.

    Nice to have these informative blogs rather than Wowowee chatter about candidates . . . . Those down with Noynoy rants are the same song, same dance, day after day.

    Joe

  16. Pastor Art permalink

    If you vote according to your conscience and not popularity, you would not even look at or rely on those polls. Even Jesus lost to Barabbas in a “snap election.”

    Cory Aquino is not corrupt, but did the Philippines cease to be corrupt? No, because the President wasn’t capable enough to provide people with an alternative to improve the economy, aside from other leadership shortcomings.

    Noynoy may be the sheep, but the wolves underneath him are the Liberal Party and the oligarchs, who do not really want change except a change in who’s in power and in control. They prefer someone they can control. Noynoy, who is not naturally self-driven, is perfect for that.

    May the Lord enlighten you for the sake of our country.
    -PA

  17. Jose Ramon Albert permalink

    Actually, the 1948 polls wrongly predicted Truman’s loss because they had SELECTION BIASES… not because sampling itself is wrong. (If you don’t believe in sampling, then the next time you go to a doctor, ask to have all your blood extracted, rather than just a blood sample!)

    Survey research was then only starting in 1948– they used a sample of telephone users which was not representative of the general population. Telephones were not yet widespread then, and these people tended to be prosperous Republicans who had a tendency to vote for Dewey…

    Reputable survey organizations in the US and in the Philippines actually use random sampling!!! (They involve what we call a two-stage design: selecting sample spots randomly, and randomly selecting households within the selected sample spots)…

    Are surveys 100% precise and 100% accurate? Nope, but they are precise to a margin of error… 2.5 percentage points for a sample of size 1,600 (with 95% confidence)…

    If the “lead” of someone is way beyond 5 percentage points for a sample of 2,000, and this is consistently the result, I would safely conclude that I know who the next President will be….

    Too bad some politicians (and passionate followers of those in the number 3, 4, 5, … slots ) are just hoping against hope that surveys are wrong…

  18. Jose Ramon Albert permalink

    You should go beyond the results and ask HOW THE DATA was generated… It is good to be suspicious, but you should be able to discern…. Some of the “reputable” institutions in the Philippines became reputable because of their track record… Some people like Kit Tatad claim that the Aquinos connived to rig the results. Sure, Pulse Asia is owned by Rapa Lopa, and its incorporators were part of the SWS original incorporators… but actually, SWS is an institution independent of Pulse Asia, and in fairness to Pulse Asia, Rapa has not put his hands on operations over the past two decades of its existence…
    Sometimes the credibility of the results of a survey cannot be divorced from what we think is the truth… But the best way to judge the results is to look at the specific way these organizations have done their surveys, and to look at their track records…

  19. Jose Ramon Albert permalink

    The survey outfit that questioned SWS and Pulse Asia – does it have a track record? The answer is NO… Don’t believe a survey result just because it jibes with what you want reality to look like…

    This survey organization criticized the SWS and Pulse Asia methodologies, but didn’t give specifics on how it did its own survey! And after it came up with its results (and made money from your candidate), it is now no longer conducting surveys… Hmmmm…

  20. Jose Ramon Albert permalink

    None of us has a monopoly on truth, and may the Lord enlighten us all!

  21. Crox permalink

    Stupid article..
