What Quant Traders Actually Need to Get Right: Trends and Risk
A recent survey paper consolidates the evidence on market trend characteristics and risk management for quantitative strategies. The practical conclusion is straightforward: adaptive trend identification matters, but multi-layered risk controls are what keep you in the game.
Most writing about quantitative trading falls into one of two buckets: either breathless futurism about AI replacing every discretionary trader on the planet, or dry academic formalism that nobody with a live portfolio reads. A recent survey by Wang (2026), presented at the CONF-BPS 2026 Symposium, sits somewhere between these extremes. It consolidates the existing evidence on how market trends behave, how quant strategies identify them, and what a sensible risk management stack looks like. None of it is groundbreaking on its own, but the synthesis is useful, especially for anyone building or running systematic strategies.
Trends are not random walks, but they are not straight lines either
The paper organizes market trend behavior into four characteristics that matter for strategy design, and each of them is well-supported by earlier research.
Persistence. Once a trend establishes itself, it tends to stick around. Equity momentum effects typically last 3 to 12 months, while forex and commodity trends can run longer because macro drivers move slowly (Jegadeesh and Titman, 1993). This is the entire theoretical basis for trend-following strategies. If trends did not persist, momentum would not work.
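A minimal sketch of what this persistence claim implies in practice: the classic Jegadeesh-Titman style "12-1" momentum signal, which ranks assets by their return over the past twelve months while skipping the most recent month (to avoid short-term reversal). The function name and the monthly-prices convention are illustrative, not from the paper.

```python
def momentum_12_1(prices):
    """12-1 momentum: the return from 12 months ago to 1 month ago,
    skipping the most recent month (short-term reversal effect).
    `prices` is a list of monthly closes, most recent last."""
    if len(prices) < 13:
        raise ValueError("need at least 13 monthly prices")
    return prices[-2] / prices[-13] - 1.0

uptrend = [100 + 2 * i for i in range(14)]    # steadily rising monthly closes
downtrend = [100 - 2 * i for i in range(14)]  # steadily falling

print(momentum_12_1(uptrend) > 0)    # positive signal in a persistent uptrend
print(momentum_12_1(downtrend) < 0)  # negative signal in a persistent downtrend
```

If trends did not persist beyond the formation window, a signal like this would carry no information about the next month's return.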
Cyclicality. Trends operate on multiple timescales simultaneously. A multi-month uptrend can contain week-long pullbacks and day-long reversals. The interaction between these cycles is what makes pure single-timeframe signals unreliable. Anyone who has traded a 20-day moving average crossover during a long-term bear market correction knows this firsthand.
Volatility clustering. High-volatility days cluster together, and so do low-volatility days. This pattern, formally described by GARCH models, means that risk is not evenly distributed through time. A strategy that sizes positions identically on a calm Tuesday in August and a chaotic Monday after a central bank surprise is doing it wrong.
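To make the clustering point concrete, here is a pure-Python sketch of a RiskMetrics-style EWMA volatility estimate, a simpler cousin of GARCH that captures the same effect: recent large moves inflate the current risk estimate. The decay parameter 0.94 is the standard RiskMetrics daily value; the sample returns are made up for illustration.

```python
import math

def ewma_volatility(returns, lam=0.94):
    """EWMA variance recursion: sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_t^2.
    Seeded with the first squared return; returns the current vol estimate."""
    sigma2 = returns[0] ** 2
    for r in returns[1:]:
        sigma2 = lam * sigma2 + (1 - lam) * r ** 2
    return math.sqrt(sigma2)

calm = [0.001, -0.002, 0.0015, -0.001, 0.002]
stressed = calm + [0.03, -0.045, 0.05, -0.04]  # a cluster of large moves

print(ewma_volatility(calm))      # low estimate in a quiet regime
print(ewma_volatility(stressed))  # estimate jumps after clustered large moves
```

A sizing rule driven by an estimate like this would automatically cut exposure on the chaotic Monday and restore it as the cluster decays.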
Nonlinearity and structural breaks. Trends do not end gradually. They break. The 2008 financial crisis, the 2020 COVID crash, the 2022 rate shock: each of these produced sudden regime changes where historical relationships stopped working. The paper cites the 2010 Flash Crash as an example of how synchronized algorithmic selling can itself create a break, when the Dow fell 1,000 points in ten minutes before recovering (Kirilenko et al., 2017). Strategies built on the assumption that the recent past predicts the near future are structurally exposed to these events.
Identifying trends: the tools and their limits
Wang groups trend identification into three categories. Technical indicators like moving averages, ADX, and MACD provide clean, interpretable signals but struggle with lag and false signals in choppy markets. Statistical methods, particularly ARIMA and cointegration models, are better at filtering noise but rely on stationarity assumptions that real markets violate regularly. Machine learning approaches, especially LSTM networks and reinforcement learning, handle nonlinearity better but introduce overfitting risk and the "black box" problem: the model works until it does not, and you may not understand why.
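The simplest member of the first category, a moving-average crossover, is enough to show both the appeal and the lag problem the paper flags. This is a generic illustration, not a method from the paper; the window lengths are common defaults.

```python
def sma(prices, window):
    """Simple moving average over the last `window` closes."""
    return sum(prices[-window:]) / window

def trend_signal(prices, fast=20, slow=50):
    """+1 when the fast SMA sits above the slow SMA (uptrend), -1 otherwise.
    The lag is structural: the signal only flips after enough new closes
    have dragged the averages across each other."""
    if len(prices) < slow:
        return 0  # not enough history to judge
    return 1 if sma(prices, fast) > sma(prices, slow) else -1

rising = [100 + 0.5 * i for i in range(60)]
falling = [130 - 0.5 * i for i in range(60)]
print(trend_signal(rising), trend_signal(falling))  # 1 -1
```

In a choppy, sideways market the two averages cross repeatedly, generating the false signals the paper describes; the statistical and ML categories exist largely to filter exactly that noise.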
The paper does not declare a winner among these categories, and that is probably the right call. In practice, most working quant systems combine elements from all three. The trend identification method matters less than the framework around it: how quickly the system recognizes when a signal has stopped working, and what it does next.
Risk management is not optional
The second half of the paper covers risk management, and this is where the practical value sits. Wang describes a multi-layered approach that integrates several standard but often poorly combined tools.
Value at Risk (VaR) estimates the maximum loss at a given confidence level over a holding period. It is the baseline metric every institutional risk desk uses, calculated via historical simulation, variance-covariance, or Monte Carlo methods (Jorion, 2014). Its weakness is well-known: it says nothing about how bad things get once you cross the threshold.
Expected Shortfall (ES) fills that gap by averaging the losses beyond the VaR threshold. If your 99% VaR says you will not lose more than 5% on 99 out of 100 days, ES tells you what the average loss looks like on that hundredth day. For strategies exposed to tail risk, and most momentum strategies are, ES is the more honest number.
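The relationship between the two metrics is easiest to see in the historical-simulation version, which needs nothing beyond sorting the observed returns. The sample data below is fabricated to include one tail event; the function is a textbook sketch, not the paper's implementation.

```python
def historical_var_es(returns, confidence=0.99):
    """Historical-simulation VaR and Expected Shortfall, losses reported
    as positive numbers. ES averages the losses at and beyond the VaR
    cutoff, so ES >= VaR by construction."""
    losses = sorted(-r for r in returns)  # ascending: worst losses last
    cutoff = min(int(confidence * len(losses)), len(losses) - 1)
    var = losses[cutoff]
    tail = losses[cutoff:]
    es = sum(tail) / len(tail)
    return var, es

# 90 ordinary days, 9 moderate down days, 1 tail event
sample = [0.001] * 90 + [-0.01] * 9 + [-0.08]
var, es = historical_var_es(sample, confidence=0.95)
print(var, es)  # ES exceeds VaR because the -8% day dominates the tail average
```

On this sample, the 95% VaR is the moderate 1% loss, while ES is pulled well above it by the single -8% day: exactly the tail information VaR alone discards.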
Stress testing takes both metrics further by simulating specific extreme scenarios. What happens to a trend-following portfolio in a 30% crash? What about a sudden volatility spike with a simultaneous liquidity freeze? These scenarios are not probabilities. They are exercises in understanding what the portfolio cannot survive.
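Mechanically, a scenario test is just deterministic shocks applied to current positions. The book and the shock magnitudes below are invented for illustration; real scenario libraries are calibrated to historical episodes and hypothetical combinations.

```python
def stress_pnl(positions, shocks):
    """Apply a deterministic scenario: per-asset return shock times
    position notional, summed across the book. Assets with no shock
    defined are assumed unchanged."""
    return sum(positions[asset] * shocks.get(asset, 0.0) for asset in positions)

portfolio = {"equities": 1_000_000, "commodities": 400_000, "bonds": 600_000}
# Hypothetical 2008-style scenario: risk assets crash, bonds rally
crash_scenario = {"equities": -0.30, "commodities": -0.25, "bonds": 0.05}

print(stress_pnl(portfolio, crash_scenario))  # scenario P&L of -370,000
```

The output is not a forecast. The question it answers is whether a loss of that size is survivable given the portfolio's capital and leverage.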
Dynamic position sizing adjusts exposure based on current volatility rather than using fixed allocations. When volatility clusters upward, position sizes shrink. When conditions calm down, they expand. Combined with trailing stops and volatility-adjusted exit levels, this prevents a single bad trade from doing outsized damage.
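A volatility-targeting rule is one common way to implement this. The sketch below assumes annualized volatility inputs and adds a leverage cap, a detail worth having because a naive target-over-realized ratio produces runaway exposure in very quiet regimes; the parameter values are illustrative.

```python
def position_size(target_vol, realized_vol, capital, max_leverage=2.0):
    """Scale exposure so the position's volatility matches the target:
    notional = (target_vol / realized_vol) * capital, capped at
    max_leverage so low-vol regimes don't blow up exposure."""
    if realized_vol <= 0:
        return 0.0
    leverage = min(target_vol / realized_vol, max_leverage)
    return leverage * capital

print(position_size(0.10, 0.20, 1_000_000))  # high vol -> half exposure
print(position_size(0.10, 0.04, 1_000_000))  # low vol -> capped at 2x
```

When realized volatility doubles, exposure halves; the portfolio's risk stays roughly constant even as market conditions swing.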
Diversification across strategies and asset classes remains the simplest and most reliable risk reduction tool. Mean-reversion and momentum strategies, for example, tend to have different failure modes: mean-reversion underperforms in trending markets, momentum crashes in reversals. Running both smooths the equity curve in ways that optimizing a single strategy cannot.
The paper's core argument is that none of these tools work well in isolation. A portfolio that uses VaR without stress testing, or position sizing without diversification, has gaps that tend to show up at exactly the worst time.
The challenges nobody has solved yet
Wang identifies three structural problems that the quant industry is still working through.
Alpha decay is the first. The more participants adopt a given strategy, the faster its edge erodes. Classic momentum signals, which were worth substantial excess returns in the 1990s, have been progressively arbitraged away as more capital flows into systematic trend-following.
The data and technology gap is the second. Large institutional players invest heavily in proprietary data sources and infrastructure, creating an uneven playing field. Smaller firms increasingly lack the resources to compete on speed, data quality, or model complexity.
Regulatory pressure is the third. Post-Flash Crash, regulators have tightened oversight of algorithmic trading, requiring greater transparency and more rigorous risk testing. Compliance costs add another burden, particularly for smaller operations.
The paper points to adaptive machine learning, alternative data integration (NLP, satellite imagery), ESG factor inclusion, and AI-driven risk prediction as future directions. These are reasonable bets, though each comes with its own set of unsolved problems. Adaptive ML systems can overfit to recent regimes. Alternative data is expensive and often short-lived as an alpha source. ESG integration is still methodologically immature. And AI risk prediction models inherit all the interpretability problems of the strategies they monitor.
What this means for systematic investing
This paper is a survey, not original research, so it does not produce new evidence. What it does well is consolidate the argument that strategy design and risk management are not separate activities. A momentum strategy without volatility-adjusted sizing is incomplete. A diversified portfolio without stress testing has blind spots. A trend-following system that cannot adapt to structural breaks will eventually blow up.
For anyone running systematic strategies, the practical takeaway is not any single technique from this paper. It is the architecture: layered risk controls, multiple identification methods, and the humility to design for the scenario where your model stops working. The strategies that survive are not necessarily the cleverest. They are the ones that fail gracefully.
References:
Jegadeesh, N. and Titman, S. (1993) 'Returns to Buying Winners and Selling Losers: Implications for Stock Market Efficiency', The Journal of Finance, 48(1), pp. 65-91.
Jorion, P. (2014) Value at Risk: The New Benchmark for Managing Financial Risk. 4th edn. New York: McGraw-Hill Education.
Kirilenko, A. et al. (2017) 'The Flash Crash: The Impact of High Frequency Trading on an Electronic Market', Journal of Financial Economics, 123(1), pp. 1-25.
Wang, Z. (2026) 'Quantitative Trading: Market Trend Analysis and Risk Management Strategies Research', Proceedings of CONF-BPS 2026 Symposium: GenAI, Labour Markets, and the Economics of Human and Financial Capital. DOI: 10.54254/2754-1169/2026.LD32014