Formula Journal

The 5 Metrics That Actually Predict Suburb Booms — And the Ones That Don’t

Every property podcast, every buyer’s agent webinar, every “top 10 hotspots” list throws around the same metrics: population growth, infrastructure spending, rental yield, days on market, vendor discounts. They sound convincing. Some of them are real signals. Most of them failed when we actually backtested them.

We built five versions of a suburb-scoring formula, backtested each against real Australian outcomes across 12,360 postcode-months, and threw away everything that didn’t survive. This is what we learned.

V1

The Kitchen Sink

We started where everyone starts. The metrics investors talk about at meetups. The numbers buyer’s agents put in their slide decks.

v1 formula — 7 factors

This formula tried to predict booms before they started. Infrastructure projects, population inflows, tightening supply — the theory was that leading indicators could spot a boom 12–18 months out.

Backtest accuracy: 55%
FAILED — COIN FLIP

A coin flip with seven extra steps. Infrastructure spending doesn’t predict suburb-level booms — it affects whole corridors over decades. Population growth is too slow-moving to time an entry. Vendor discount data barely exists at the suburb level for free.

The biggest lesson: prediction is glamorous. But at suburb granularity with publicly available data, it doesn’t work.

Takeaway

Population growth, infrastructure, and building approvals are interesting context. But they have near-zero predictive power at the suburb level over investment-relevant time horizons.

Want to know which metrics survived?

We score 393 suburbs fortnightly using only the signals that passed backtesting. Join the wishlist.

V2

Stop Predicting. Start Detecting.

The pivot came from a simple question: what if we don’t try to predict booms before they happen? What if we just detect them once they’ve started?

At first this felt like cheating. Surely the whole point is to be early? But when we looked at actual boom trajectories across Australian suburbs, a pattern emerged: booms are multi-year events. Catching one 6–12 months after it starts still captures 60–85% of the total gains. With dramatically higher confidence.

We replaced prediction indicators with detection signals — metrics that answer “is this suburb booming right now?” instead of “will it boom?”

v2 formula — 5 factors

Backtest accuracy: 90%
False positives: 0%
Suburbs tested: 10
PASSED

90% accuracy. Zero false positives on 10 known-outcome suburbs. The formula correctly identified Stones Corner (QLD), Armadale (WA), Elizabeth (SA), and flagged controls like Southbank (VIC) and Darwin CBD as stable.

Takeaway

Days on market and vacancy rate do work — but only as part of a detection formula, not a prediction formula. The same metric can be useful or useless depending on what question you’re asking.

V3

Scaling to 78 Suburbs

Ten suburbs isn’t enough to trust. We expanded the backtest to 78 suburbs — 28 that boomed, 50 controls that didn’t — and rebuilt the formula with harder data.

The biggest change: we replaced the Absorption component (which needed monthly sales and listings data that doesn’t exist for free) with Growth Strength — scoring annual growth directly. We also added a hard $800K median price cap. Every single boom in the expanded dataset was led by suburbs priced well below the city median.
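
The price cap works as a hard gate, not a weighted factor. A minimal sketch of that structure, assuming an illustrative component blend — the weights, thresholds inside `growth_strength`, and function names are ours, not the actual formula:

```python
# Hedged sketch: how a hard median-price cap can gate a suburb score.
# The $800K cap comes from the article; everything else is illustrative.

PRICE_CAP = 800_000  # hard median-price cap from the v3 formula

def growth_strength(annual_growth_pct: float) -> float:
    """Score annual growth directly (replaces the Absorption component).
    Illustrative linear ramp: 0.0 at 0% growth, saturating at 10%."""
    return max(0.0, min(annual_growth_pct / 10.0, 1.0))

def v3_score(median_price: float, annual_growth_pct: float,
             other_components: float) -> float:
    """Return 0 outright above the cap, else a blended score."""
    if median_price > PRICE_CAP:
        return 0.0  # hard filter: observed booms started below the cap
    return 0.6 * other_components + 0.4 * growth_strength(annual_growth_pct)
```

The point of the gate (versus a weighted affordability term) is that no amount of strength in other components can rescue an expensive suburb.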

Backtest accuracy: 85.7%
False positives: 0%
Separation gap: 20.2 points
Suburbs tested: 78
PASSED

Accuracy dropped from 90% to 85.7% — but on 8× more data. The 20.2-point separation gap between real booms and false signals means the formula doesn’t just get the right answer. It gets it with conviction.

We also discovered the nastiest silent bug in the formula: missing data defaulting to zero. If a suburb has no days-on-market figure and your parser fills in 0, it scores as the tightest market possible. If median price is missing and defaults to $0, headroom scores perfect. Invisible. Deadly. We caught it only because backtest results on sparse suburbs were suspiciously good.

Takeaway

Affordability is the strongest boom precondition. Every boom in the dataset was in a suburb priced well below its city median. If you only have time to check one thing, check the price gap.

We score 393 suburbs. Fortnightly.

Detection formula + affordability ranking, filtered to your budget band.

V4

The Statistical Illusion

This is the version we’re most glad we killed.

Detection tells you whether a suburb is booming. It doesn’t tell you which booming suburb will outperform the others. So we built a forecaster — a model that predicts 3-year forward capital growth with confidence intervals for every suburb.

Six input features: annual growth rate, affordability headroom, boom pattern signal, 5-year momentum, national regime indicator, and acceleration ratio. Walk-forward backtest across 28,049 scored rows.

The headline number looked fantastic.

Spearman rank correlation (pooled): 0.42

That’s a strong signal. For context, most quantitative equity factors are happy with 0.05–0.10. We had 0.42. The model appeared to rank suburbs reliably.

Then we did something most backtests never do. We split the correlation within each scoring period.

Within-date rank correlation: −0.08
80% confidence interval coverage: 46%
ABANDONED

Negative 0.08. Worse than random.

The pooled 0.42 was a statistical illusion. Boom years had both higher predictions and higher realised returns. The model was ranking time periods, not suburbs. Within any given month, it couldn’t tell you whether Armadale would outperform Elizabeth or vice versa.
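
The illusion is easy to reproduce with synthetic data: give every period a shared "tide" that drives both predictions and realised returns, and pooled rank correlation looks strong while within-period correlation sits near zero. A self-contained sketch (the numbers below are synthetic, not the backtest data):

```python
# Pooled vs within-period Spearman on synthetic data: the model only
# tracks the market tide, yet the pooled statistic looks impressive.
import random
from statistics import mean

def spearman(xs, ys):
    """Spearman rank correlation (no ties expected in this demo)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

random.seed(0)
periods, preds, rets, dates = 20, [], [], []
for t in range(periods):
    tide = random.gauss(0, 5)        # boom year vs flat year
    for _ in range(30):              # 30 suburbs per period
        preds.append(tide + random.gauss(0, 1))  # model tracks the tide
        rets.append(tide + random.gauss(0, 1))   # so do realised returns
        dates.append(t)

pooled = spearman(preds, rets)
within = mean(
    spearman([p for p, d in zip(preds, dates) if d == t],
             [r for r, d in zip(rets, dates) if d == t])
    for t in range(periods)
)
print(f"pooled:      {pooled:+.2f}")   # strong (the illusion)
print(f"within-date: {within:+.2f}")   # near zero (the truth)
```

Splitting by scoring period is the whole trick: it cancels the shared tide and leaves only cross-suburb skill.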

The confidence intervals were just as broken. An 80% interval should contain the real outcome 80% of the time. Ours hit 46%. At the top decile of model confidence — the rows the model was most sure about — coverage dropped to 20%.

We deleted the entire forecaster. Every line of code.

Takeaway

Pooled correlation statistics can be dangerously misleading. Acceleration, momentum, and growth phase — the metrics most investors obsess over — did not predict which suburb would outperform within any given period. The market tide lifts all boats. Always test within-period discrimination, not just pooled.

V5

Two Signals. That’s It.

After the forecaster failed, we went back to first principles. What actually discriminates between suburbs after you cancel the market tide?

We tested every feature we had: growth rate, acceleration ratio, repeat-boomer history, days on market, vacancy, yield, rental trend. We computed tide-cancelled excess returns — each suburb’s growth minus the median growth across all peers in the same period — and checked which features predicted outperformance.
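
The tide-cancellation step itself is short. A sketch, assuming rows of (period, suburb, 12-month growth) — the data shape and figures are illustrative:

```python
# Tide-cancelled excess return: each suburb's growth minus the median
# growth across all peers scored in the same period.
from collections import defaultdict
from statistics import median

def excess_returns(rows):
    """rows: list of (period, suburb, growth_pct) tuples.
    Returns {(period, suburb): growth minus peer-median growth}."""
    by_period = defaultdict(list)
    for period, _, growth in rows:
        by_period[period].append(growth)
    tide = {p: median(g) for p, g in by_period.items()}
    return {(p, s): g - tide[p] for p, s, g in rows}

rows = [
    ("2021-06", "Armadale",  14.0),
    ("2021-06", "Elizabeth", 11.0),
    ("2021-06", "Southbank",  3.0),
]
print(excess_returns(rows))
# Armadale sits +3.0pp above the 11.0 peer median; Southbank −8.0pp
```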

One signal survived.

Affordability headroom

How a suburb’s median price compares to its capital city median. Suburbs priced below the city median consistently outperform after cancelling the tide. Suburbs priced above it consistently underperform. The effect is monotonic and survived every subsample we tested.

Combine that with the detection formula’s timing signal — is this suburb early in a detected boom, measured by how much of its affordability gap has already been consumed — and you get a two-axis system: when to buy (timing) and what to buy (affordability).
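
One way to picture the two-axis read, with hypothetical tier boundaries — the actual cutoffs behind the tier labels aren't published here, so treat this purely as a shape sketch:

```python
# Hypothetical two-axis tiering: affordability (what to buy) crossed
# with boom timing (when to buy). Boundary values are illustrative.

def tier(headroom_pct: float, gap_consumed_pct: float) -> str:
    """headroom_pct: how far the suburb sits below the city median
    (positive = cheaper). gap_consumed_pct: how much of that gap a
    detected boom has already eaten."""
    affordable = headroom_pct > 0
    early = gap_consumed_pct < 50
    if affordable and early:
        return "Strong"
    if affordable:
        return "Good"    # cheap, but the boom is well underway
    if early:
        return "Fair"    # early, but priced above the city median
    return "Weak"

print(tier(headroom_pct=25, gap_consumed_pct=20))   # Strong
print(tier(headroom_pct=-10, gap_consumed_pct=80))  # Weak
```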

Tier     Excess return   Beat market   n
Strong   +7.5pp          71%           2,103
Good     +1.3pp          55%           3,349
Fair     −0.7pp          47%           5,788
Weak     −6.4pp          28%           1,120

Walk-forward backtest, 12,360 postcode-months, 2012–2026. No lookahead. Excess return = suburb 12-month growth minus market median growth. Full methodology →

Perfectly monotonic. Strong Signal outperforms Good Signal outperforms Fair Signal outperforms Weak Signal. The simplest version of the formula produced the strongest backtest.

The Scorecard

Five versions. Here’s what survived backtesting and what didn’t.

Metric                   Verdict                Why
Affordability headroom   WORKS                  Only cross-suburb ranking signal that survived tide cancellation
Boom timing (detection)  WORKS                  85.7% accuracy, 0% false positives on 78-suburb backtest
Days on market           WORKS (in detection)   Effective as tightness signal within the detection formula
Vacancy rate             WORKS (in detection)   Low vacancy confirms demand outstripping supply
Annual growth rate       PARTIAL                Works for detection (is it booming?), fails for ranking (which suburb wins?)
Acceleration ratio       FAILED                 Did not predict which suburbs outperform after cancelling the tide
5-year momentum          FAILED                 Mean reversion dominates — past outperformers tend to underperform going forward
Population growth        FAILED                 Too slow-moving and coarse for suburb-level timing
Infrastructure spend     FAILED                 Affects corridors over decades, not suburbs over investment horizons
Building approvals       FAILED                 State-level data, weak suburb-level signal
Vendor discount          FAILED                 Data barely exists at suburb level from free sources

What You Can Do With This

You don’t need our formula. The two signals that matter are public knowledge:

1. Is the suburb affordable?

Look up the capital city median house price (Domain publishes this quarterly). Compare it to the suburb’s median. If the suburb is priced below the city median, it has headroom. If it’s above, the backtest says it’s less likely to outperform.

2. Is the suburb early in a detected boom?

Check annual growth (YIP / CoreLogic), days on market, and vacancy rate. If growth is above 5%, DOM is under 30 days, and vacancy is below 1.5%, you’re looking at a boom signature. The earlier in that cycle, the more upside remains.
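
Both checks fit in a few lines. A sketch using the thresholds stated above — the figures in the example are made up, so plug in the numbers you look up yourself:

```python
# The two DIY checks: affordability headroom and the boom signature.
# Thresholds (5% growth, 30 days DOM, 1.5% vacancy) are the ones
# stated in the article; example inputs are invented.

def headroom_check(suburb_median: float, city_median: float) -> bool:
    """Check 1: is the suburb priced below the capital-city median?"""
    return suburb_median < city_median

def boom_signature(annual_growth_pct: float, dom_days: float,
                   vacancy_pct: float) -> bool:
    """Check 2: growth above 5%, DOM under 30 days, vacancy below 1.5%."""
    return annual_growth_pct > 5 and dom_days < 30 and vacancy_pct < 1.5

print(headroom_check(suburb_median=620_000, city_median=1_050_000))        # True
print(boom_signature(annual_growth_pct=9.2, dom_days=21, vacancy_pct=0.8))  # True
```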

That’s the honest version. Two signals. Both free to check. The hard part is doing it across 15,000 suburbs fortnightly, catching the booms within weeks of starting, and filtering out the noise from thin markets. That’s what we automate.

Full backtest methodology, the 78-suburb validation, and the walk-forward tier discrimination results are published on our proof page. No gating, no email required. Check the maths yourself.

Join the Wishlist

We’ll email you when BoomAU launches — starting with the budget range you care about.

Be first in line

  • Fortnightly Strong / Good / Fair / Weak signal labels per suburb
  • Filtered to your budget band
  • Built on a backtest of 12,360 postcode-months