
Property Market Forecast Australia 2026 — Why the National Number Doesn’t Matter

Every bank, every research house, and every property commentator publishes an Australian property market forecast this time of year. ANZ says prices will rise X%. Domain says Y%. CoreLogic points to affordability headwinds. The RBA watches the labour market.

Here is the problem: none of those forecasts tell you which suburb to buy in. And the difference between the right suburb and the wrong one, in the same city, in the same year, is 13.9 percentage points of annual return. A national forecast that lands within 2–3% of the real number is still useless for suburb selection.

We know this because we tried to build a forecaster. A proper one — walk-forward backtested, confidence-interval calibrated, trained on 28,000+ rows of suburb data. It looked extraordinary. Then we did one extra check that most models never do, and it collapsed completely. This is what we learned.

The Tide Problem

When the Australian property market rises, it broadly rises everywhere. When it falls, it broadly falls everywhere. The national market cycle — driven by interest rates, credit conditions, employment, and migration — is a tide that lifts or lowers all boats simultaneously.

This is the reason national forecasts exist and why they are simultaneously right and useless. If the RBA cuts rates and mortgage serviceability improves, the market rises. That is a valid macro call. But it tells you nothing about whether the suburb you are considering will outperform or underperform the suburbs you aren’t.

Our backtesting confirmed this directly. Growth phase — whether the national or city market is rising or falling — does not predict relative suburb outperformance. The tide is the tide. What you need is something that tells you which suburb rides the tide higher than its neighbours.

Pre-2015 median boom size: 1.3%
Post-2020 median boom size: 16.2%

Boom size is era-dependent. The tide determines the size of the wave. Suburb selection determines how high you ride it.

The same market cycle produced a 1.3% median boom pre-2015 and a 16.2% median boom post-2020. The suburbs that boomed in both eras had something in common. The national forecast for either period wouldn’t have told you what it was.

Takeaway

National forecasts are macro calls about the tide. They cannot tell you which suburb will outperform within a given period. That is a different question requiring a different tool.

Detection beats prediction.

BoomAU scores 393 suburbs fortnightly using backtested signals — not national forecasts. Join the wishlist.

The Forecaster We Built — and Why We Deleted It

Our detection formula was working well. It could identify whether a suburb was currently in a boom with 85.7% accuracy across 78 backtested suburbs. But detection is binary — boom or no boom. It does not rank booming suburbs against each other.

So we built a forecaster. The goal was to predict 3-year forward capital growth for every scored suburb — not just detect the current state, but rank which suburb would outperform the others. Six input features: annual growth rate, affordability headroom, boom pattern signal, 5-year momentum, a national regime indicator, and an acceleration ratio. Walk-forward backtest across 28,049 scored rows. No lookahead. The model never saw future data when making past predictions.

The headline result looked exceptional.

Forecaster v1 — initial results

Spearman rank correlation (pooled): 0.42

A pooled IC of 0.42 is remarkably strong. In quantitative equity research, a factor IC of 0.05–0.10 is considered a genuine signal. We had more than four times that. The model appeared to be ranking suburbs with real conviction.

We were about to publish it. Then we ran one more check.

Instead of computing the rank correlation across all rows pooled together, we split it within each scoring date. At any given month in the backtest, how well did the model rank suburbs against each other within that period?

Forecaster v1 — within-date results

Within-date rank correlation: −0.08
80% CI coverage (should be 80%): 46%

Worse than random within any given month.

Negative 0.08. Within any single month, the model was actively mis-ranking suburbs. The pooled 0.42 was a statistical illusion.

What happened: boom years had both higher model predictions and higher realised returns across the board. The pooled correlation was picking up the shared time-period effect — the model was ranking years, not suburbs. It could tell you that 2021 was a better year than 2019 to hold property. It could not tell you whether suburb A would outperform suburb B in either year.
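That failure mode is easy to reproduce. The sketch below is a synthetic illustration only, not our data or model: two "years" share a common level shift, but within each year predictions and returns are independent noise, so the model knows nothing about individual suburbs. The pooled correlation still comes out large.

```python
# Synthetic illustration only -- not BoomAU's data or model. Two "years"
# share a common level shift; within each year, predictions and realised
# returns are independent noise.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
rows = []
for date, level in [("2019", 0.0), ("2021", 10.0)]:  # boom year sits higher
    for _ in range(200):
        rows.append({"date": date,
                     "pred": level + rng.normal(),   # model output
                     "ret": level + rng.normal()})   # realised return
df = pd.DataFrame(rows)

# Pooled: one correlation across all rows, all dates mixed together.
pooled_ic = spearmanr(df["pred"], df["ret"])[0]

# Within-date: correlate only among suburbs scored in the same period.
within_ic = np.mean([spearmanr(g["pred"], g["ret"])[0]
                     for _, g in df.groupby("date")])

print(f"pooled IC:      {pooled_ic:.2f}")  # large, driven by the year effect
print(f"within-date IC: {within_ic:.2f}")  # near zero: no real suburb ranking
```

The pooled number is entirely manufactured by the shared year effect; splitting by date removes it and reveals there was never any cross-suburb signal.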

The confidence intervals confirmed the problem from a different angle. An 80% confidence interval should contain the true outcome 80% of the time. Ours hit 46%. At the top decile — the rows the model was most confident about — coverage dropped to 20%. The model was most wrong precisely when it was most certain.
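The coverage check itself is a few lines. This is an illustrative sketch, not our calibration code: a model that underestimates the true spread of outcomes produces 80% intervals whose empirical coverage lands well below 80%, the same failure pattern described above.

```python
# Illustrative calibration check -- not BoomAU's actual code or data.
# An 80% interval should contain the realised outcome ~80% of the time.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
outcomes = rng.normal(0.0, 2.0, n)  # true spread: sigma = 2

# The hypothetical model claims sigma = 1, so its intervals are too narrow.
z80 = 1.2816  # two-sided 80% normal quantile
lo, hi = -z80 * 1.0, z80 * 1.0

coverage = np.mean((outcomes >= lo) & (outcomes <= hi))
print(f"claimed coverage: 80%, empirical: {coverage:.0%}")  # well under 80%
```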

We deleted the entire forecaster. Every line of code. The pooled IC of 0.42 would have been a compelling marketing number. The within-date IC of −0.08 made it fraud.

Takeaway

Pooled correlation statistics are the most dangerous metric in backtesting. They absorb time-period effects and can make a useless model look like a breakthrough. Always test within-period discrimination before publishing any forecast. The growth phase does not predict relative suburb outperformance — only tide direction.

We only publish what survived backtesting.

The forecaster is gone. What remains is an 85.7%-accurate detection formula and the one ranking signal that held up. Join the wishlist.

What Backtesting Actually Found

After deleting the forecaster, we went back through our full dataset — 12,360 postcode-months — and asked a simpler question: what distinguishes suburbs that outperform their peers within the same period, after you cancel the market tide?

We tested every input we had. Growth rate, acceleration, 5-year momentum, days on market, vacancy rate, rental yield, yield trend. We computed tide-cancelled excess returns: each suburb’s 12-month growth minus the market median growth across all peers in that period. One signal stood out.
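In code, tide cancellation is a single grouped operation: subtract each period's market median from every suburb's growth in that period. Column names here are illustrative, not our actual schema.

```python
# Sketch of "tide cancellation": suburb 12-month growth minus the market
# median growth across all peers scored in the same period.
# Column names are illustrative, not BoomAU's actual schema.
import pandas as pd

df = pd.DataFrame({
    "date":       ["2021-06"] * 3 + ["2022-06"] * 3,
    "suburb":     ["A", "B", "C"] * 2,
    "growth_12m": [22.0, 15.0, 18.0, 2.0, -4.0, 1.0],
})

# Subtract the per-period median so the shared market move cancels out.
df["excess"] = df["growth_12m"] - df.groupby("date")["growth_12m"].transform("median")
print(df)
```

After this step, a boom year and a flat year are directly comparable: what remains is each suburb's performance relative to its peers, which is the quantity the ranking signals have to predict.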

Signal 1: Affordability headroom

How a suburb’s median price compares to its capital city median. Suburbs priced below the city median consistently outperform after cancelling the tide. Suburbs priced above 1.5 times the city median consistently underperform. The effect is monotonic and held across every subsample tested. It is the only cross-suburb ranking signal that survived tide cancellation.

Signal 2: Boom timing via detection

Is the suburb currently in a detected boom? And if so, how early? Detection catches booms 6–12 months after they start, which still captures 60–85% of total gains. Early detection — measured by how much of the affordability gap has already been consumed — identifies the highest-upside entry points.

Every single boom in our 78-suburb backtest was led by a suburb priced well below its capital city median. The $800K price cap in our formula is not arbitrary — it reflects this finding directly. Every suburb above that threshold failed the affordability precondition.

The metrics that did not survive: 5-year momentum (mean reversion dominates — past outperformers tend to underperform going forward), infrastructure spending (affects corridors over decades, not suburbs over investment horizons), population growth (too slow-moving and too coarse), and building approvals (state-level data with weak suburb-level signal).

These are the metrics that national forecasts are built on. They matter for the macro picture. They do not predict which suburb outperforms.

Takeaway

Mean reversion dominates past performance signals. Infrastructure and population data are too coarse for suburb-level timing. Affordability headroom is the one cross-suburb ranking signal that survived. Detection beats prediction because it measures what is actually happening, not what is hypothesised to happen next.

The 13.9-Point Spread

The practical consequence of everything above shows up in the tier discrimination numbers. Our walk-forward backtest across 12,360 postcode-months produced four tiers with perfectly monotonic returns.

Tier     Excess return   Beat market   n
Strong   +7.5pp          71%           2,103
Good     +1.3pp          55%           3,349
Fair     −0.7pp          47%           5,788
Weak     −6.4pp          28%           1,120

Walk-forward backtest, 12,360 postcode-months. No lookahead. Excess return = suburb 12-month growth minus market median growth. Full methodology →

Strong Signal averaged +7.5pp excess return. Weak Signal averaged −6.4pp. That is a 13.9 percentage point spread between the best and worst tier — in the same market, in the same year, across the same national conditions that every property forecast is trying to predict.

The tier discrimination is perfectly monotonic: every step down in tier produces a step down in excess return and a step down in the percentage of months that beat the market median. There is no noise in the ordering. This is what genuine signal looks like — unlike the forecaster, which produced a convincing pooled number and a meaningless within-date number.

The takeaway for 2026 or any other year: the national forecast determines the water level. It does not tell you who swims and who sinks. The 13.9-point spread between Strong Signal and Weak Signal suburbs exists regardless of what the RBA does or what ANZ puts in its annual outlook. Finding the right suburb matters more than timing the national cycle.

How Detection Actually Works

The detection formula — v2.3, the version currently running in production — has five components, all of which earned their place in the backtest.

Component         Weight   What it measures
Momentum          0.30     Price growth acceleration
Growth Strength   0.25     Annual growth scored directly
Tightness         0.20     Days on market + vacancy rate
Sustainability    0.15     Rental yield + vacancy trend
Headroom          0.10     Price relative to capital city median

A suburb only gets scored if it passes all four hard filters first: annual growth above 5%, days on market at or below 45, vacancy rate at or below 2%, and median price at or below $800K. Failing any one of these gates means no score — the suburb is not on the radar regardless of what other metrics say.

The resulting score has four bands: 80 or above is a detected boom. 65–79 is early boom. 50–64 is warming. Below 50 is no signal. At 85.7% accuracy across 78 backtested suburbs — 28 known boomers and 50 controls — with a 20.2-point separation gap between genuine booms and false signals and zero false positives, this is the formula we trust.

Currently, 393 suburbs pass the hard filters and are actively scored each fortnight: 35 under $400K, 149 under $600K, and 204 under $800K.

Takeaway

Detection is not the same as forecasting. It asks “is this suburb booming right now?” rather than “will it boom?” That shift in question is why accuracy jumped from 55% (prediction) to 85.7% (detection) across backtesting. The right question is worth more than a sophisticated model answering the wrong one.

What to Do With This in 2026

Read the national forecasts if you want macro context. The RBA rate path, credit availability, and migration trends all shape the water level. But treat them as background, not as suburb-selection tools. For that you need a different approach.

1. Start with the affordability filter

Look up the capital city median (Domain publishes this quarterly). Every suburb you consider should be priced below that median. If it is above 1.5 times the city median, the backtest says it will underperform its peers after the tide cancels out. This is not a soft preference — it is the only cross-suburb ranking signal that survived the full backtesting process.
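As a quick sketch, the step-1 check reduces to one ratio. The thresholds (below the city median is favourable, above 1.5x is expected to underperform) come from the findings above; the label for the middle zone is my own shorthand, not a published band.

```python
# Step-1 affordability check, sketched. Thresholds are from the article;
# the "neutral zone" label for the middle range is an assumption.

def affordability_headroom(suburb_median: float, city_median: float) -> str:
    ratio = suburb_median / city_median
    if ratio < 1.0:
        return "favourable: below city median"
    if ratio > 1.5:
        return "expected to underperform: above 1.5x city median"
    return "neutral zone"

print(affordability_headroom(650_000, 1_000_000))   # favourable
print(affordability_headroom(1_600_000, 1_000_000)) # expected to underperform
```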

2. Check the detection signals

For any suburb that passes the affordability check: is annual growth above 5%? Is days on market at or below 45? Is vacancy at or below 2%? If all three are true, you are looking at a boom signature. From there, the earlier in that cycle, the more upside remains — detection catches booms 6–12 months after they start, while still capturing 60–85% of total gains.

3. Ignore past outperformers

Mean reversion dominates. Past outperformers tend to underperform going forward. The suburb that boomed 20% last year is not the one to buy today — it has consumed its affordability headroom. The suburb that has not yet moved, but is showing the early detection signals, is where the backtest points.

That is the honest version of a “property market forecast australia 2026” that is actually useful: not a national number, but a suburb-level detection framework built on the two signals that survived backtesting. Infrastructure spending, population growth, building approvals, vendor discounts — none of them made it through. Affordability headroom and boom detection did.

The hard part is doing this at scale. Checking growth rate, DOM, vacancy, and price-to-city-median across thousands of suburbs fortnightly — catching booms within weeks of starting and filtering out noise from thin markets — is what we automate. The full backtest methodology, the 78-suburb validation, and the tier discrimination results are on our proof page. No gating, no email required.

Join the Wishlist

We'll email you when BoomAU launches — starting with the budget range you care about.

Be first in line

  • Fortnightly Strong / Good / Fair / Weak signal labels per suburb
  • Filtered to your budget band
  • Built on a backtest of 12,360 postcode-months