MetaPoll's methodology

MetaPoll's methodology for analysing Federal voting intention is an Australian first.

We bring together a broad range of data sources and apply our proprietary algorithms to make sense of them.


MetaPoll's work is independent and we are not commissioned by any political party.

Some of our beliefs and how they inform our methods

We believe in using the primary vote level data from other firms, not the two-party preferred figure: While one of our key outputs is a two-party preferred (TPP) figure, we do not derive this from TPPs published elsewhere. Rather, we use the primary vote figures published by other firms and perform adjustments at this level. After adjustments have been performed, we undertake our own distribution of preferences for Greens, minor party and independent votes to derive the TPP figure. This allows us to more accurately calculate a TPP number based on current preference flow estimates, not those from the 2013 election. For the upcoming election, we believe this is particularly important for Greens preferences, which you can read more about here.
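As an illustration of this preference distribution step, the sketch below applies assumed preference flow rates to primary vote shares to produce a TPP figure. The flow rates and vote shares are hypothetical placeholders, not MetaPoll's actual estimates.

```python
# Sketch: deriving a two-party preferred (TPP) figure from primary votes.
# All figures below are illustrative placeholders, not MetaPoll's estimates.

# Primary vote shares (fractions summing to ~1)
primaries = {"ALP": 0.36, "Coalition": 0.41, "Greens": 0.12, "Other": 0.11}

# Assumed share of each minor-party/independent vote flowing to the ALP
# after preferences; the remainder flows to the Coalition.
flow_to_alp = {"Greens": 0.80, "Other": 0.50}

alp_tpp = primaries["ALP"]
coalition_tpp = primaries["Coalition"]
for party, share in primaries.items():
    if party in flow_to_alp:
        alp_tpp += share * flow_to_alp[party]
        coalition_tpp += share * (1 - flow_to_alp[party])

print(f"ALP TPP: {alp_tpp:.1%}, Coalition TPP: {coalition_tpp:.1%}")
```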

We believe pollsters get better over time: We correct for pollster bias based on previous election results (both individually and at a cohort level). But we also give the pollsters some credit for improving their methods. They do not operate in a vacuum, and our assumption is that if we or anyone else can see their biases in past elections, then so can they, and they will take steps to correct those. Therefore, we assume a partial correction factor on past biases.
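A minimal sketch of the partial correction idea, assuming a hypothetical historical house bias and an assumed degree of pollster self-correction (both values are illustrative, not MetaPoll's parameters):

```python
# Sketch: partial correction of a pollster's historical house bias.
historical_bias = 1.5          # e.g. overstated ALP primary by 1.5 pts at past elections
assumed_self_correction = 0.5  # assume the pollster has since fixed half of that bias

observed = 37.0                # pollster's latest ALP primary figure
adjusted = observed - historical_bias * (1 - assumed_self_correction)
print(f"Adjusted ALP primary: {adjusted:.2f}")  # 36.25
```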

We believe size matters, but not that much: We give weight for size, but not a linear weight. One of MetaPoll’s philosophies is to avoid publishing numbers based on insufficient underlying sample sizes, which could be misleading when it comes to margins of error. In aggregating various polls, we do give weight for sample size, such that a poll of 4,000 respondents carries more weight than a poll with 1,000 respondents. However, our weighting is not a linear one, i.e. a poll of 4,000 does not have four times the weight of a poll of 1,000. This hybrid approach balances the competing priorities of ensuring that the results of certain pollsters do not overwhelmingly dilute others, and ensuring that superior size is rewarded.
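MetaPoll does not publish the exact weighting function; a square-root weight is one common sublinear choice and is used here purely for illustration of the idea:

```python
import math

# Illustrative sublinear weighting by sample size (square root chosen for the
# sketch only; MetaPoll's actual weighting function is not published).
def sample_weight(n_respondents: int) -> float:
    return math.sqrt(n_respondents)

print(sample_weight(4000) / sample_weight(1000))  # 2.0 -- not 4x the weight
```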

We believe that voter intention remains largely unchanged within our survey periods: At this stage, we do not place a premium on recency within our survey range. To date, MetaPoll’s releases have been based on polling results and betting market data within a fixed time period. Within that time period, we do not weight for recency. For example, if our time period is the previous 30 days, a poll taken 5 days ago is not weighted any higher than a poll taken 25 days ago. This is based primarily on the differing release schedules of the different pollsters, in an attempt to avoid misweighting polls purely on the basis of the alignment (or misalignment) of their release dates with those of MetaPoll. We are experimenting with a half-life method that incorporates data from longer time periods, and will post more on this soon.
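The half-life experiment mentioned above could take a shape like the following, where the half-life value itself is an arbitrary placeholder rather than anything MetaPoll has settled on:

```python
# Sketch: exponential half-life recency weight (half-life value is a placeholder).
def recency_weight(age_days: float, half_life_days: float = 14.0) -> float:
    return 0.5 ** (age_days / half_life_days)

print(recency_weight(5))   # ~0.78 -- recent polls count for more
print(recency_weight(25))  # ~0.29 -- older polls fade rather than drop out
```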

When it comes to our own polls, we believe in quotas over weightings: Our own polls are conducted online using interceptive methods with quotas. Every attempt is made to survey a completely representative cross-section of the Australian population, using strict quotas, to avoid extrapolating from small subsegment sample sizes during later weighting. Inevitably, however, very minor demographic deviations still remain, and we correct for these by weighting on age, gender and location to match the Australian voting population. Corrections are also made to adjust for small known biases that exist in the online population relative to the voting population at large (a leftward bias).
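For those residual demographic deviations, a simple cell-weighting step of the following shape brings the sample back into line with the voting population. The shares below are made-up examples, not real quota or census figures.

```python
# Sketch: correcting small demographic deviations by cell weighting.
# Shares are invented for illustration, not real quota or census figures.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_share     = {"18-34": 0.28, "35-54": 0.36, "55+": 0.36}

# Each respondent in a cell receives that cell's weight.
weights = {cell: population_share[cell] / sample_share[cell] for cell in population_share}
print(weights)
```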

We believe in using the most granular data available: As mentioned above, we use the primary voting intention from other polls as our starting point instead of TPP figures. This principle of favouring more granular data is one we use everywhere. For example, we always use State-level polling data where available, and we always use electorate-by-electorate betting data where it is available.

We assume there are only two serious contenders for each seat: In looking at individual seats, our modelling assumes that there is only a front-runner and a single viable challenger, and that the probability of one of those two candidates winning the seat is 100%. Typically, this is the ALP and the Coalition, with The Greens and Independents showing up in a well-known handful of seats. So the person considered the third-most-likely to win a seat is not taken into account, nor is anyone further down the list. Thus our model discounts the rare outside chance that one of these people may achieve a surprise victory, an assumption that is backed up by past election results.
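One way to express the two-contender assumption: keep only the two most-probable candidates in a seat and renormalise their win probabilities to sum to one. The probabilities below are invented for illustration.

```python
# Sketch: the two-contender assumption for a single seat (invented probabilities).
implied_probs = {"ALP": 0.55, "Coalition": 0.40, "Greens": 0.04, "Independent": 0.01}

# Keep the front-runner and the single viable challenger, discard the rest,
# and renormalise so the two retained probabilities sum to 100%.
top_two = sorted(implied_probs.items(), key=lambda kv: kv[1], reverse=True)[:2]
total = sum(p for _, p in top_two)
seat_model = {name: p / total for name, p in top_two}
print(seat_model)  # {'ALP': ~0.58, 'Coalition': ~0.42}
```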

We don’t take much notice of individual polls in isolation: For purely statistical reasons around margins of error, we take any poll with fewer than 10,000 respondents with a pinch of salt, particularly given that the TPP figure often comes down to hairline variation from 50:50. We treat our own polling in the same way, and at present, with our typical sample size of 2,000, we do not publish it in any form other than as part of our aggregate. This is not done to frustrate those who might like to aggregate or otherwise use our own primary data, but to ensure that we are not contributing to the same problem that we set out to solve with MetaPoll.
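To make the margin-of-error point concrete, the standard worst-case formula for a simple random sample (z * sqrt(0.25 / n) at 95% confidence) gives roughly ±2.2 points for a 2,000-person poll and still about ±1 point at 10,000, which is large relative to hairline TPP differences around 50:50.

```python
import math

# Worst-case 95% margin of error for a simple random sample (p = 0.5).
def margin_of_error(n: int, z: float = 1.96) -> float:
    return z * math.sqrt(0.25 / n)

print(f"n=2,000:  ±{margin_of_error(2000):.1%}")   # ~±2.2 points
print(f"n=10,000: ±{margin_of_error(10000):.1%}")  # ~±1.0 point
```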
