Labor win was always a sure bet


Dr Denis Muller

Senior Lecturer
Centre for Advancing Journalism
University of Melbourne

Spring Carnival 1, State Politics nil.


It is a good bet that the Victorian electorate had long ago made up its mind to get rid of the inert Baillieu-Napthine Government and had turned its collective attention to the more entertaining and unpredictable joys of the track.


Nothing of significance happened to the trend in the public opinion polls between June and election day, except that in July and September the incumbent Government got within a couple of percentage points of Labor, only to see the gap widen again to about four points. In round numbers, that looks to be about where it is going to finish.


We won’t know the final two-party-preferred result for a week or two, but the election-night estimate by the ABC’s Antony Green indicated something in the region of 52% to Labor and 48% to the Coalition. Later estimates make it a little closer, 51.5% to 48.5%, and that looks like the ballpark result.


The opinion polls performed competently on the whole. At no stage did any poll have the Liberal-National Coalition ahead on the two-party-preferred vote, although there were some fairly wild variations in the estimates.


AC Nielsen, and Ipsos, which took over the Fairfax polling from Nielsen, had Labor ahead by an improbable 56%-44% on three occasions between June and November. Newspoll had Labor ahead by a scarcely less improbable 55%-45% in August. Galaxy was steady throughout at 52%-48%. This, in fact, is where nearly all the main polls ended up in the surveys taken in the final week of the campaign. Only Morgan, at 50%-50%, was off the mark.


Here is a summary of the primary vote estimates by the polls in their final surveys, compared with the progress count by the Victorian Electoral Commission (as at 3 December with 55.8% of the vote counted):

[Table: final primary-vote estimates by each poll, compared with the VEC progress count]

There was a tendency to over-estimate the Greens vote; indeed, Ipsos-Fairfax at one point had it at 16%. Except for Morgan, there was also a tendency to under-estimate the Coalition vote, while Galaxy, Newspoll and ReachTEL all got pretty close to the Labor vote.


Given that most of these polls are based on sample sizes of between 1,000 and 1,500, which yield a sampling margin of error of between about plus or minus 2.8% and 3.2%, the results are somewhat mixed.
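The margin of error quoted here follows from the standard formula for a simple random sample of a proportion. A minimal sketch (note that the simple formula at the 95% confidence level gives roughly ±2.5 to ±3.1 points for these sample sizes; published figures are often a little larger to allow for design effects such as weighting):

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error, in percentage points, for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# The worst case (p = 0.5) for the sample sizes mentioned in the text:
for n in (1000, 1500):
    print(f"n={n}: +/-{margin_of_error(n):.1f} points")
```

The worst-case assumption p = 0.5 is conventional for published poll margins, since the error is largest when opinion is evenly split.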


Using the most generous margin (plus or minus 3.2%), this is how the polls performed:


The estimates for Labor are within the margin of error (in Ipsos-Fairfax’s case, barely). The estimates for the Coalition are within the margin for Ipsos-Fairfax and Morgan, but barely for the other three. Only Newspoll and Galaxy got within the margin for the Greens estimate, with Ipsos-Fairfax, Morgan and ReachTEL clearly outside.


The evidently accurate estimates of the two-party-preferred vote will give the pollsters comfort, but behind that good result lie some difficult questions, especially why the Greens vote was so consistently over-estimated. The large variations in the two-party-preferred estimates will also be a matter for reflection, especially by Ipsos, which was conducting its first election polling for Fairfax.


Central to this is the method used to calculate the distribution of preferences. Some polls used historical data from the 2010 election; others used data from their contemporary questionnaires about how voters said they would distribute their preferences. There is always a question about how reliable this second method is, since it is reasonable to assume a lot of voters only really think about that level of detail when they get into the polling booth.
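The historical-data method described above can be sketched very simply: apply an assumed preference-flow percentage to each minor party's primary vote and add the result to each major party's total. The primary votes and flow shares below are invented for illustration only; they are not the actual 2010 figures.

```python
# Hypothetical primary votes (percent) -- illustration only, not real data.
primaries = {"Labor": 38.0, "Coalition": 42.0, "Greens": 11.0, "Others": 9.0}

# Assumed share of each minor party's preferences flowing to Labor;
# the remainder flows to the Coalition. Invented for illustration.
flow_to_labor = {"Greens": 0.80, "Others": 0.50}

def two_party_preferred(primaries: dict, flow_to_labor: dict) -> tuple:
    """Estimate the two-party-preferred split by distributing minor-party
    primary votes according to assumed preference flows."""
    labor = primaries["Labor"]
    coalition = primaries["Coalition"]
    for party, share in flow_to_labor.items():
        labor += primaries[party] * share
        coalition += primaries[party] * (1 - share)
    return labor, coalition

labor_2pp, coalition_2pp = two_party_preferred(primaries, flow_to_labor)
print(f"Labor {labor_2pp:.1f} / Coalition {coalition_2pp:.1f}")
```

The respondent-allocated alternative simply swaps the historical flow shares for shares derived from what surveyed voters say they will do, which is where the reliability question raised above comes in.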


Polling is getting tougher by the day as people abandon fixed-line phones for mobiles, making it harder to assemble a random sample: there is no comprehensive mobile phone directory from which to draw one. On top of that, unsolicited calls are either filtered out electronically or hung up on, and the frequency of market-research surveying drives increasingly high refusal rates. While the polling companies take steps to counter these technological and sociological shifts, the work is more costly, complex and time-consuming than it used to be.


Since these polls are, for the most part, commissioned by large media organisations, and since those organisations are themselves confronting severe financial disruption caused by the digital revolution, keeping up good polling standards is a real challenge.


Yet without good quality media-sponsored polls, voters would be at the mercy of selective leaking of the political parties’ private polling. While this work – like the media polls – is usually done to a high professional standard, the public is unlikely to see the actual questions asked, the sampling details, the timing or the detailed analysis.


This lack of transparency, and the certainty that the parties would put the most favourable possible spin on the data, means that they are no substitute for what the media polls give us: a generally reliable indication of how the parties stand in public estimation.


Any loss of that information would be a subtraction from what the media offer our democracy. 


Dr Denis Muller is a senior research fellow in the Centre for Advancing Journalism at the University of Melbourne and one-time associate of Irving Saulwick in conducting the Saulwick Poll for The Age and The Sydney Morning Herald.