Research & Data
Why Property Experts Disagree — And What the Data Actually Says
One expert says buy near the new train line. Another says watch population inflows. A third says vacancy rate is all that matters. A fourth says ignore all of that and just find rental yield above 5%. They all sound convincing. They all cite real data. And they all contradict each other.
Property experts disagree because they’re weighting different metrics — and the uncomfortable truth is that most of those metrics don’t survive contact with actual suburb-level outcomes. We tested them. Here’s what the data says.
They’re All Measuring Something Real
The frustrating part of conflicting property advice is that most experts aren’t wrong about the underlying logic. Infrastructure does affect property values. Population growth does signal demand. Rental yield does tell you something about cash flow. The experts aren’t lying — they’re just measuring things that are correlated with property growth at a broad level, then assuming that correlation holds at the suburb level over the time horizon that actually matters to an investor.
That assumption is where they diverge. Infrastructure affects whole corridors, over decades. Population growth is an ABS number that takes years to flow through. Building approvals are tracked at the local government level, not the suburb level. Each expert has picked a signal that works at some scale — just not the scale investors need.
The result: two experts can look at the same suburb, cite different figures, and reach opposite conclusions. And both feel justified because neither has tested their metric against real suburb outcomes across a large sample.
The core issue
A metric can be directionally right at a national or city level and still have near-zero predictive value at the suburb level. The question isn’t whether infrastructure matters to Australian property broadly. The question is: does it help you pick which suburb will outperform this cycle?
The Signals That Didn’t Survive
We tested the most commonly cited suburb-selection signals against real outcomes across a 78-suburb backtest — 28 suburbs that genuinely boomed, 50 controls that didn’t. The signals experts argue about most loudly were tested first.
Signals with loud expert advocates — backtest results

| Signal | Verdict | Why it failed |
|---|---|---|
| Infrastructure spending & new transport links | Failed | Affects property values over decades at a corridor level. Doesn’t predict which suburb booms in a 1–3 year investment horizon. |
| Population growth & migration inflows | Failed | Too slow-moving and too coarse. ABS data doesn’t resolve at suburb level with the frequency investors need. |
| Building approvals & new supply pipeline | Failed | Tracked at local government area level. Weak suburb-level signal, inconsistent with actual boom timing. |
Combined into a formula alongside other commonly cited indicators, these signals produced a backtest accuracy of 55%. That’s a coin flip. The formula tried to predict booms before they happened using the metrics experts most commonly recommend — and it couldn’t beat random chance.
This is why experts who specialise in infrastructure corridors will disagree with experts who focus on rental yield — and why both will sometimes look prescient and sometimes look wrong. They are each calibrating to a signal that isn’t actually predictive at the level they’re applying it.
There’s also a subtler problem: timing. Even when a broad signal is eventually correct (infrastructure does bring growth, one day), the gap between announcement and price movement can be a decade. An investor who buys on an infrastructure signal and sells after five years may never see that growth arrive. The expert who recommended it will have moved on to another call.
Why the disagreements persist
None of these experts are running controlled backtests across hundreds of suburbs with known outcomes. They’re using narrative cases — suburbs where their signal happened to align with growth. Those cases exist. They just don’t prove the signal works reliably across the full range of outcomes.
Skip the opinion. Get the backtested formula.
BoomAU scores 393 suburbs fortnightly using only signals that survived backtesting. Join the wishlist.
Two Signals With Real Evidence
After testing every signal experts commonly advocate, two things consistently separated suburbs that outperformed from those that didn’t.
Neither of them is what most experts argue about.
1. Affordability headroom — WORKS

How a suburb’s median price compares to its capital city median. Suburbs priced below the city median consistently outperform after accounting for the broad market tide. Suburbs priced above 1.5× the city median consistently underperform. This is the only cross-suburb ranking signal that survived testing — and every single boom in the 78-suburb backtest was led by a suburb priced well below the city median.
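As a rough sketch of the arithmetic, the headroom ratio and the thresholds described above could be computed like this. The band names and the "below 1.0" cutoff are illustrative assumptions for readability; only the 1.5× underperformance threshold comes from the text:

```python
def affordability_headroom(suburb_median: float, city_median: float) -> float:
    """Ratio of a suburb's median price to its capital-city median.
    Below 1.0 means the suburb is priced under the city median."""
    return suburb_median / city_median

def headroom_band(ratio: float) -> str:
    # Illustrative bands; the exact cutoffs BoomAU uses are an assumption here,
    # apart from the 1.5x underperformance threshold cited in the article.
    if ratio < 1.0:
        return "headroom"    # priced below the city median
    elif ratio <= 1.5:
        return "neutral"
    return "stretched"       # above 1.5x the city median: underperformed in testing

# Example: a $620K suburb in a capital city with a $950K median
ratio = affordability_headroom(620_000, 950_000)
print(round(ratio, 2), headroom_band(ratio))  # 0.65 headroom
```

You can run the same check by hand: divide the suburb median (from Domain or YIP) by the capital-city median and see which side of 1.0 and 1.5 it falls on.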
2. Boom timing via detection — WORKS

Instead of trying to predict a boom before it starts, detect whether a boom is currently underway — and whether you are early enough to capture meaningful gains. A detection formula combining price momentum, market tightness (days on market, vacancy rate), and growth strength achieved 85.7% accuracy with zero false positives across 78 backtested suburbs. Detection typically catches a boom 6–12 months after it begins, still capturing 60–85% of total gains.
These two signals are the reason so many expert opinions conflict. The expert focused on infrastructure is measuring something different and at the wrong scale. The expert focused on population growth is looking at a slow-moving signal that doesn’t resolve at suburb level. Neither of them has zeroed in on affordability headroom or boom detection — and those are the only signals that reliably separated winners from losers in the data.
What Detection Actually Measures
The detection formula has five components. Three of them involve signals that experts do discuss — but only when combined into a detection formula rather than used in isolation as predictors.
| Component | Weight | What it measures |
|---|---|---|
| Momentum | 30% | Price growth acceleration |
| Growth Strength | 25% | Annual growth scored directly |
| Tightness | 20% | Days on market + vacancy rate |
| Sustainability | 15% | Rental yield + vacancy trend |
| Headroom | 10% | Price relative to capital city median |
Notice what’s absent: infrastructure spending, population growth, building approvals. These aren’t excluded out of stubbornness — they were tested and failed. The components above are what remained after that elimination process.
The formula also requires a suburb to pass four hard filters before it can be scored at all: annual growth above 5%, days on market below 45, vacancy below 2%, and median price below $800K. A suburb that doesn’t pass all four isn’t scored — which is why the 393 suburbs currently in the system represent only the fraction of Australian suburbs where genuine boom conditions are plausible right now.
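Putting the hard filters and the published component weights together, the scoring step can be sketched as follows. The four filter thresholds and the five weights are taken from the text; how each component is normalised to a 0–100 score is an assumption here, not BoomAU's actual implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SuburbMetrics:
    annual_growth_pct: float   # e.g. 7.2 means 7.2% annual price growth
    days_on_market: float
    vacancy_pct: float
    median_price: float
    # Component scores, each assumed pre-normalised to a 0-100 scale
    momentum: float
    growth_strength: float
    tightness: float
    sustainability: float
    headroom: float

def passes_hard_filters(m: SuburbMetrics) -> bool:
    """The four gates: fail any one and the suburb is not scored at all."""
    return (m.annual_growth_pct > 5
            and m.days_on_market < 45
            and m.vacancy_pct < 2
            and m.median_price < 800_000)

def detection_score(m: SuburbMetrics) -> Optional[float]:
    """Weighted composite using the published component weights."""
    if not passes_hard_filters(m):
        return None  # filtered out, not scored
    return (0.30 * m.momentum
            + 0.25 * m.growth_strength
            + 0.20 * m.tightness
            + 0.15 * m.sustainability
            + 0.10 * m.headroom)

# A suburb passing all four filters, with hypothetical component scores
m = SuburbMetrics(7.2, 28, 1.4, 615_000,
                  momentum=80, growth_strength=70,
                  tightness=75, sustainability=60, headroom=90)
print(detection_score(m))  # 74.5
```

Note the design: the filters are gates, not weighted inputs, so a suburb with a 3% vacancy rate scores `None` no matter how strong its momentum is.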
The 20.2-point separation gap — the average scoring margin between boom and non-boom suburbs in the backtest — is also important. The formula doesn’t just get the right answer most of the time; it gets it with enough margin that borderline cases are uncommon. When a suburb scores as a boom, it scores decisively.
Why days on market works here but fails in expert predictions
Experts often cite days on market as a leading indicator of where buyers are going next. That’s a prediction use — and it doesn’t work reliably. As a detection component, though, it’s measuring buyer urgency right now: are properties selling fast, confirming active demand? Same metric, different question, completely different predictive power.
393 suburbs. Scored fortnightly. No expert opinion required.
Strong Buy to Pass tiers based on backtested signals — not hotspot lists. Join the wishlist.
What the Tier Data Shows
The clearest rebuttal to conflicting expert opinions isn’t an argument — it’s the tier discrimination data. Across 12,360 postcode-months in a walk-forward backtest, four tiers of suburb scores produced four distinct bands of forward returns.
| Tier | Excess return | Beat market | n |
|---|---|---|---|
| Strong Buy | +7.5pp | 71% | 2,103 |
| Buy | +1.3pp | 55% | 3,349 |
| Watch | −0.7pp | 47% | 5,788 |
| Pass | −6.4pp | 28% | 1,120 |
Walk-forward backtest, 12,360 postcode-months. Excess return = suburb 12-month growth minus market median growth. Full methodology →
Perfectly monotonic. Strong Buy outperforms Buy, which outperforms Watch, which outperforms Pass. The spread between the top and bottom tier is 13.9 percentage points per year. That’s the cost of suburb selection when you use signals that don’t work.
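The monotonicity claim and the 13.9-point spread follow directly from the table above; a few lines verify the arithmetic:

```python
# Excess returns (pp) per tier, from the walk-forward backtest table
tiers = {"Strong Buy": 7.5, "Buy": 1.3, "Watch": -0.7, "Pass": -6.4}

values = list(tiers.values())
# Monotonic: each tier's excess return beats the tier below it
monotonic = all(a > b for a, b in zip(values, values[1:]))

# Top-to-bottom spread: the annual cost of suburb selection done badly
spread = values[0] - values[-1]
print(monotonic, round(spread, 1))  # True 13.9
```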
An investor who follows expert advice calibrated to failed metrics could easily be buying suburbs that score as Pass. An investor who had access to the same data filtered through only the signals that survived — affordability headroom and detection — would be concentrating in Strong Buy suburbs. The difference, sustained across several years, is material.
Why Past Performance Makes It Worse
There’s one more reason experts disagree — and why following high-profile recommendations tends to disappoint. Mean reversion dominates Australian suburb data. The suburbs that outperformed last cycle tend to underperform the next one.
This matters because the most visible property advice is usually built around recent winners. A suburb that surged 40% over the past three years gets named in podcasts and articles. Buyers flow in. But the backtest shows that past outperformers are, on average, the wrong place to be looking next. The headroom has been consumed. The growth gap to the city median has narrowed. The tightness signals that preceded the boom have already been priced in.
The boom era also matters. Pre-2015, the median boom in Australian suburb data produced 1.3% excess returns. Post-2020, that figure was 16.2%. Experts who made their names calling booms in an era of large gains are applying frameworks built on a very different environment. The signals may look the same on paper, but the magnitude of outcome depends on the macro backdrop, not the suburb selection logic.
What this means practically
If an expert is recommending suburbs that have already run hard, they may be right that those suburbs had strong fundamentals — two years ago. Mean reversion means those same fundamentals now argue against them. Look for affordability headroom and fresh detection signals, not last cycle’s winners.
When the Numbers Lie
One reason expert opinions diverge even on individual suburbs: the underlying data can be unreliable, and not every expert checks for it.
Days on market figures become misleading when a suburb sells fewer than around 30 homes per year. With that few transactions, a single fast sale pulls the median DOM to 10 days. A single slow listing pushes it to 150. The figure is an echo of a handful of transactions — not a genuine measure of buyer urgency.
Experts who cite DOM figures for regional or outer-fringe suburbs without acknowledging transaction volume are working from a number that may simply be noise. Below around 15 annual sales, the figure is not usable as a signal at all.
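A quick simulation illustrates why thin sales volumes make a median DOM figure noisy. The distribution here is invented purely for illustration — a hypothetical suburb whose "true" typical DOM is about 45 days — not real market data:

```python
import random
import statistics

random.seed(1)  # reproducible illustration

def simulated_median_dom(n_sales: int, typical: float = 45,
                         spread: float = 30) -> float:
    """Median DOM computed from n_sales hypothetical transactions
    scattered around a 45-day norm (assumed, for illustration only)."""
    sales = [max(5.0, random.gauss(typical, spread)) for _ in range(n_sales)]
    return statistics.median(sales)

# How much the reported median wobbles at different annual sales volumes
for n in (10, 30, 200):
    medians = [simulated_median_dom(n) for _ in range(1_000)]
    print(f"{n:>3} sales/yr: median DOM ranged "
          f"{min(medians):.0f}-{max(medians):.0f} days across 1,000 runs")
```

At 10 sales a year the reported median swings wildly between runs; at 200 it settles near the true value. Same statistic, very different reliability — which is exactly the trap in quoting DOM for low-turnover suburbs.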
This is another source of conflicting advice. Two experts can look at the same suburb’s DOM figure, one of them aware of the transaction volume problem and one of them not, and reach genuinely different conclusions — with both thinking the data supports them.
Who to Trust on Property Advice
The honest answer isn’t a name. It’s a methodology. Ask any property expert — including us — two questions before taking their advice:
1. How large was the backtest, and what was the outcome sample?
Case studies and narrative examples are not backtests. A real backtest defines suburbs with known outcomes in advance, applies the signal, and measures accuracy across the full sample — including the predictions that were wrong. If an expert can’t give you a sample size and an accuracy figure, they’re working from anecdote.
2. Does their signal work at the suburb level, not just the city or national level?
Population growth data, infrastructure spending, and corridor analysis all exist and are meaningful at broader scales. The question is whether they translate into suburb-level predictive power over a 1–3 year investment horizon. Most don’t. The signals that do — affordability headroom and detection — are measurable on free public data sources like SQM Research, CoreLogic via YIP, and Domain.
The signals that survived backtesting are not proprietary knowledge. You can check a suburb’s price against its city median on Domain. You can look up vacancy history at SQM Research for free. You can check annual growth and days on market at YIP. The hard part is doing it consistently, across hundreds of suburbs, every fortnight — catching the booms within weeks of starting before the headroom closes.
That’s what BoomAU automates. We score 393 suburbs fortnightly using only the detection and affordability signals that survived the backtest. The full methodology and the 78-suburb validation results are published on our proof page. No gating, no email required. Check the numbers yourself.
Join the Wishlist
We'll email you when BoomAU launches — starting with the budget range you care about.
Be first in line
- ✓ Fortnightly Strong / Good / Fair / Weak signal labels per suburb
- ✓ Filtered to your budget band
- ✓ Built on a backtest of 12,360 postcode-months