Formula Journal
Why Property Hotspot Lists Are Wrong — And What Actually Works
You’ve seen them. “Top 10 suburbs to watch in 2026.” “The next property hotspots revealed.” Every buyer’s agent, every property magazine, every podcast host has a list. They sound data-driven. They cite population growth, infrastructure spending, rental yields. They feel authoritative.
But here’s the question nobody asks: how accurate are they? Where’s the backtest? Where’s the track record? We went looking for the evidence. Then we built our own hotspot formula and tested it. The results were humbling.
Hotspot Lists Are Marketing, Not Analysis
The first thing to understand about property hotspot predictions is their business model. Most hotspot lists exist to sell something — a subscription, a buyer’s agent service, a course, a development package. The list is the lead magnet. The accuracy of the list is irrelevant to the revenue model.
This doesn’t mean the people writing them are dishonest. Many genuinely believe in their methodology. But the incentive structure explains why almost nobody publishes their historical accuracy. If your 2023 hotspot list underperformed the market, you don’t publish a retraction — you publish a 2024 list.
Think about it this way: if a hotspot list were genuinely accurate, the publisher would lead with their track record. “Our 2022 picks averaged 14% growth versus 6% for the broader market.” You’d see that on every landing page. Instead, you see methodology descriptions — impressive-sounding inputs with no output verification.
The test
Next time you encounter a hotspot list, ask one question: “What was this list’s accuracy last year?” If the answer isn’t published — or if the question is deflected into methodology talk — you have your answer.
The Standard Hotspot Formula (55% Accurate)
We know the standard methodology well because we built it ourselves. When we started BoomAU, our first formula used the same inputs everyone uses: population growth, infrastructure spending, building approvals, rental yield, days on market, vacancy rate, and vendor discounts.
The logic is intuitive. More people moving in means more demand. New infrastructure means better amenity. Low vacancy means landlords have pricing power. It all makes sense in a slide deck.
Standard hotspot inputs
- Population growth and interstate migration
- Infrastructure spending (rail, road, hospitals)
- Building approvals and new supply pipeline
- Rental yield and vacancy rates
- Days on market
- Vendor discount from asking price
- Median price trends
Then we did something most hotspot publishers never do. We backtested it.
The result: 55% accuracy. Essentially a coin flip. Seven carefully chosen inputs, months of data collection, a methodology that sounds bulletproof to anyone who hasn’t tested it, and the formula could barely beat random selection.
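To make that concrete, here is a minimal sketch in Python of what a generic formula of this type, and a hit-rate check on it, might look like. This is not our production code: the column names, the equal weighting, and the definition of a “hit” (a top-decile-scored suburb beating the median forward growth of its peers) are illustrative assumptions, and real inputs would need to come from ABS releases, state planning data, and a listings provider.

```python
import pandas as pd

# Hypothetical column names for the seven standard inputs.
FEATURES = [
    "population_growth", "infrastructure_spend", "building_approvals",
    "rental_yield", "days_on_market", "vacancy_rate", "vendor_discount",
]
# Inputs conventionally read as bullish when LOW, so their sign gets flipped.
LOWER_IS_BETTER = ["days_on_market", "vacancy_rate", "vendor_discount"]

def hotspot_score(snapshot: pd.DataFrame) -> pd.Series:
    """Equal-weighted z-score composite: one 'hotspot' number per suburb."""
    z = (snapshot[FEATURES] - snapshot[FEATURES].mean()) / snapshot[FEATURES].std()
    z[LOWER_IS_BETTER] *= -1  # flip so every column points the same way
    return z.mean(axis=1)

def hit_rate(snapshot: pd.DataFrame, forward_growth: pd.Series) -> float:
    """Share of top-decile-scored suburbs that beat median forward growth.

    Both inputs are assumed to be indexed by suburb; forward_growth is the
    realised growth over the following period.
    """
    score = hotspot_score(snapshot)
    picks = score >= score.quantile(0.9)
    return float((forward_growth[picks] > forward_growth.median()).mean())
```

Run that over enough historical snapshots and you get an accuracy figure. The uncomfortable part is what the figure says.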
Tired of coin-flip accuracy?
We score 393 suburbs fortnightly using only the signals that survived backtesting. Join the wishlist.
Why These Inputs Fail at Suburb Level
Each hotspot input fails for a specific reason. Understanding why is more useful than just knowing that they fail.
Infrastructure spending
Infrastructure affects entire corridors over decades, not individual suburbs over investment horizons. The announcement of a new train line doesn’t tell you which of the 15 suburbs along it will outperform. Often, the effect is priced in long before construction starts. And state government infrastructure data is too coarse to be useful at suburb level.
Population growth
Population growth is a multi-decade force. It’s too slow-moving and too diffuse to time a suburb-level entry point. Melbourne has had strong population growth for 20 years. That tells you nothing about whether Sunshine will outperform Footscray this year.
Building approvals
Published at state or LGA level, not suburb level. The signal is too coarse. And high approvals can be bearish (more supply dilutes demand) just as easily as bullish (confirms developer confidence). The direction depends on context that the raw number doesn’t capture.
Vendor discount
In theory, shrinking vendor discounts signal a tightening market. In practice, reliable vendor discount data barely exists at suburb level from publicly available sources. Most hotspot lists cite this metric without acknowledging that their data source covers only a fraction of transactions.
The pattern
These inputs aren’t wrong in the abstract. Population growth does drive property demand. Infrastructure does improve amenity. But at the suburb level, over investment-relevant timeframes, they don’t have enough signal to pick winners. They’re explanatory, not predictive.
The Statistical Illusion Trap
It gets worse. Even when hotspot models appear to produce impressive statistics, the numbers can be deeply misleading.
We learned this the hard way. After our prediction formula failed at 55%, we built a forecaster — a model that tried to rank which suburbs would outperform others. We backtested it across 28,049 scored rows and got what looked like a breakthrough result.
The pooled rank correlation between predicted and realised returns came out at 0.42. For context, most quantitative equity factors are happy with 0.05–0.10; a 0.42 rank correlation is enormous in quantitative research. It looked exceptional, and we were ready to ship it. Then we split the statistic within each scoring period — asking not “does the model rank well across all time?” but “within any given month, does the model rank suburbs correctly?”
Negative. Worse than random. The model couldn’t distinguish which suburb would outperform which within any given period.
The pooled 0.42 was a statistical illusion. Boom years had both higher predictions and higher realised returns. The model was ranking time periods, not suburbs. Within any given month, it couldn’t tell you whether Suburb A would outperform Suburb B. It was useless for the decision that actually matters.
This is the trap. Aggregated statistics pool data across time and space. They can produce genuinely impressive numbers that collapse the moment you ask the question an investor actually needs answered: “Right now, which suburb should I choose?”
Any hotspot model that publishes pooled accuracy without within-period accuracy is likely hiding this problem — or, more charitably, has never checked.
Takeaway
Always ask: does this accuracy statistic hold within a given period, or only when pooled across years? A model that ranks time periods isn’t a suburb-picker — it’s a market-cycle detector pretending to be one.
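The split itself is easy to reproduce. Here is a minimal sketch, assuming a long table with one row per suburb per scoring period and hypothetical column names (“period”, “predicted”, “realised”), and using Spearman as the rank-correlation flavour:

```python
import pandas as pd

def pooled_vs_within_period(df: pd.DataFrame) -> tuple[float, float]:
    """Return (pooled, mean within-period) Spearman rank correlations
    between predicted and realised returns."""
    pooled = df["predicted"].corr(df["realised"], method="spearman")
    within = (
        df.groupby("period")
          .apply(lambda g: g["predicted"].corr(g["realised"], method="spearman"))
          .mean()
    )
    return pooled, within
```

A large pooled figure sitting next to a near-zero or negative within-period figure is exactly the illusion: the model is ranking periods, not suburbs.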
We killed what didn't survive backtesting
Two signals. 85.7% accuracy. 0% false positives. Join the wishlist to see the scores.
The Mean Reversion Problem
Many hotspot lists lean heavily on momentum: suburbs that have been growing strongly are predicted to keep growing. “This suburb grew 12% last year, so it’s a hotspot.”
The problem is mean reversion. In our backtesting, past outperformers tended to underperform going forward. Five-year momentum — which sounds like exactly the kind of robust, long-term signal you’d want — failed as a ranking signal after we cancelled the market tide.
This makes intuitive sense once you think about it. A suburb that has already boomed has consumed its affordability headroom. The gap between its median price and the city median has narrowed. The easy gains are behind it. Momentum-based hotspot lists are often recommending suburbs at the end of their run, not the beginning.
The opposite signal — affordability — turned out to be far more predictive. But recommending cheap suburbs doesn’t sell courses nearly as well as recommending the suburbs everyone is already excited about.
What Actually Works: Detection Over Prediction
After our prediction formula hit 55%, we made a fundamental shift. Instead of trying to predict booms before they start, we built a formula to detect booms after they’ve begun.
At first this felt like giving up. Surely the whole value is in being early? But when we studied actual boom trajectories across Australian suburbs, we found that booms are multi-year events. Detecting one 6–12 months after it starts still captures 60–85% of the total gains — with dramatically higher confidence.
| Approach | Accuracy | False positives | Suburbs tested |
|---|---|---|---|
| Prediction (standard hotspot formula) | 55% | Not measured | — |
| Detection (BoomAU formula) | 85.7% | 0% | 78 |
78-suburb backtest: 28 suburbs that boomed, 50 controls that didn’t. Walk-forward validation, no lookahead bias. Full methodology on the proof page.
85.7% accuracy with zero false positives. The detection formula correctly identified real booms and — crucially — never flagged a stable market as booming. False positives are what cost investors money, and our detection approach produced none across 78 tested suburbs.
Two Signals Survived. That’s It.
We tested every feature we could find: growth rate, acceleration ratio, repeat-boomer history, days on market, vacancy, yield, rental trend, momentum. We computed tide-cancelled excess returns — each suburb’s growth minus the median growth across all peers in the same period — and checked which features predicted outperformance.
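That tide-cancelling step is simple enough to show directly. A minimal sketch, assuming a long table with one row per suburb per period and hypothetical column names:

```python
import pandas as pd

def excess_return(df: pd.DataFrame) -> pd.Series:
    """Tide-cancelled excess return: each suburb's growth minus the median
    growth of all peers in the same period. Assumes columns 'period' and
    'growth'; the index identifies the suburb."""
    return df["growth"] - df.groupby("period")["growth"].transform("median")
```

Any candidate signal (momentum, yield, affordability) then gets checked against this series rather than against raw growth, so a rising market can’t flatter a feature that isn’t actually picking suburbs.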
Two signals survived.
1. Affordability headroom
How a suburb’s median price compares to its capital city median. Suburbs priced below the city median consistently outperform after cancelling the market tide. Suburbs priced above it consistently underperform. The effect is monotonic and survived every subsample we tested.
2. Boom timing (detection signal)
Is the suburb early in a detected boom? Measured by growth acceleration, days on market, and vacancy rate together. Not “will it boom?” but “is it booming now, and how early are we?” Detection, not prediction.
That’s the honest answer. Out of dozens of metrics tested across 12,360 postcode-months, two survived. Everything else either failed outright or produced statistical illusions that collapsed under within-period analysis.
The uncomfortable truth
Most of what the property industry sells as “data-driven analysis” has never been backtested. The metrics that sound the most impressive — infrastructure, population, momentum — are the ones that fail. The metric that actually works — affordability — is the one nobody wants to hear because “buy cheap suburbs” doesn’t sell seminars.
What You Can Do Instead
You don’t need our formula. The two signals that matter are public knowledge. Here’s how to check them yourself:
1. Check affordability headroom
Look up the capital city median house price (Domain publishes this quarterly). Compare it to the suburb’s median. If the suburb is priced below the city median, it has headroom. If it’s above, the backtest says it’s statistically less likely to outperform.
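If you want it as a single number, the headroom check is just a percentage gap. A toy helper, with placeholder figures rather than real prices:

```python
def affordability_headroom(suburb_median: float, city_median: float) -> float:
    """Positive: the suburb sits below the city median (headroom remains).
    Negative: it is already priced above the city median."""
    return (city_median - suburb_median) / city_median

# Placeholder figures only, not real data:
# affordability_headroom(suburb_median=710_000, city_median=1_050_000) -> ~0.32
```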
2. Look for the detection signature
Check annual growth (YIP or CoreLogic), days on market, and vacancy rate. If growth is above 5%, DOM is under 30 days, and vacancy is below 1.5%, you’re looking at a boom signature. The earlier in that cycle, the more upside remains.
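The same signature as a tiny check, using the thresholds quoted above; treat them as rules of thumb rather than precisely tuned parameters:

```python
def boom_signature(annual_growth_pct: float, days_on_market: float,
                   vacancy_pct: float) -> bool:
    """True only when all three conditions hold at once."""
    return annual_growth_pct > 5.0 and days_on_market < 30 and vacancy_pct < 1.5
```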
3. Ignore everything else
Infrastructure announcements, population projections, building approval trends, rental yield comparisons — none of these survived backtesting as suburb-level predictors. They make interesting context. They don’t pick winners.
The hard part isn’t knowing the signals — it’s doing this across thousands of suburbs fortnightly, catching booms within weeks of starting, and filtering out noise from thin markets. That’s what we automate.
Full backtest methodology, the 78-suburb validation dataset, and the walk-forward tier discrimination results are published on our proof page. No gating, no email required. Check the maths yourself.
Join the Wishlist
We'll email you when BoomAU launches — starting with the budget range you care about.
Be first in line
- ✓ Fortnightly Strong / Good / Fair / Weak signal labels per suburb
- ✓ Filtered to your budget band
- ✓ Built on a backtest of 12,360 postcode-months