
Machine Learning in the financial markets.

Machine learning is the latest fad in quantitative asset management, but it also causes a lot of confusion. Our background article is based on a recent paper on the subject and sheds light - without going into too much detail - on what is behind the term, the challenges involved in applying it to financial market data, and the areas for which it seems best suited.

What is Machine Learning?

Machine Learning is generally understood as the idea that computer programs can improve their performance over time through learning effects. For stock market trading, this means the next step in the evolution of statistical and quantitative methods. Although human influence in the quant business was already diminishing, decision makers continued to play an important role - for example, by specifying the model to be used at the outset of a quantitative analysis.

In machine learning, this human influence is much smaller or, ideally, no longer exists. The algorithms can evaluate many models in parallel and then decide for themselves which framework to use for further analysis.

For many established asset managers, this step seems revolutionary. Accordingly, opinions diverge widely, ranging from skepticism, confusion and a lack of understanding to the view that it enables progressive, better decision making.

A new word for old ideas?

In the paper "Can Machines 'Learn' Finance?", Ronen Israel, Bryan Kelly and Tobias Moskowitz write that this wide range of opinions stems from the fact that machine learning is a rapidly developing but highly technical field of which few market participants have a full understanding. As a result, the term is sometimes used in practice however it happens to suit, for example for marketing purposes.

But how does machine learning differ from previous quantitative concepts? The paper names three criteria that the algorithms fulfill: [1]

Criteria of machine learning:

Application of "large" models with many parameters (features) and/or complex non-linear relationships between inputs and outputs, with the aim of achieving maximum forecast quality even though the market's true pricing model is unknown
Selection of a preferred model from any number of candidate models, using regularization techniques to limit model complexity and cross-validation methods with simulated out-of-sample tests to avoid over-optimization (see the sketch after this list)
Innovative approaches to efficient model optimization that reduce the computational effort in big-data environments, such as stochastic gradient descent, which processes only random subsets of the data at each step without a large loss of accuracy
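
To make these criteria a little more concrete, here is a minimal Python sketch - purely illustrative and not from the paper - that touches on all three points: a flexible model with many features, a regularization penalty chosen via time-series cross-validation, and stochastic gradient descent as the optimizer. The data, feature count and parameter grid are hypothetical.

```python
# Minimal sketch (illustrative, not from the paper): regularized model selection
# with time-series cross-validation and stochastic gradient descent.
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

rng = np.random.default_rng(0)
n_months, n_features = 480, 100                       # hypothetical 40 years of monthly data
X = rng.standard_normal((n_months, n_features))       # stand-in predictor panel
y = 0.02 * X[:, 0] + rng.standard_normal(n_months)    # weak signal buried in noise

# Candidate models differ only in the strength of the L2 penalty (regularization).
search = GridSearchCV(
    estimator=SGDRegressor(penalty="l2", max_iter=2000, tol=1e-4, random_state=0),
    param_grid={"alpha": [1e-4, 1e-3, 1e-2, 1e-1, 1.0]},
    cv=TimeSeriesSplit(n_splits=5),                   # simulated out-of-sample tests
    scoring="neg_mean_squared_error",
)
search.fit(X, y)
print("preferred penalty:", search.best_params_, "CV score:", search.best_score_)
```

With data this noisy, the cross-validation typically favors a heavier penalty, i.e. a deliberately less flexible model.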

The authors write that, ideally, "Goldilocks models" are used in machine learning. These are large enough to reliably identify the real, potentially complex predictive relationships in the data. At the same time, they are not so flexible that they become over-optimized on historical data, which would lead to disappointing out-of-sample performance.

When does Machine Learning work?

According to the paper, the success stories of machine learning to date in areas such as speech recognition, strategic games and robotics come from environments that combine two critical factors: [1]

Big data environment: Machine learning algorithms benefit from a massive flood of training data. For example, AlexNet, a neural network for image recognition, has around 61 million parameters. This makes it possible - assuming sufficient computing power is available - to map highly complex relationships between patterns.
Signal-to-noise ratio: This ratio describes the degree of predictability within a system; the higher the value, the better. In image recognition, for example, high ratios are often achievable, which leads to stable, reliable results from the corresponding algorithms (a small simulated example of this relationship follows this list).
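
The following toy simulation - my own illustration, not from the paper - shows what the signal-to-noise ratio means in practice: even a perfect forecast of the predictable component can only achieve an R² equal to the share of signal variance in total variance, which is tiny for typical return series.

```python
# Toy illustration: the signal-to-noise ratio caps achievable forecast quality.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
signal = 0.01 * rng.standard_normal(n)   # small predictable component
noise = 0.05 * rng.standard_normal(n)    # much larger unpredictable shocks
returns = signal + noise

snr = signal.var() / noise.var()
r2_ceiling = signal.var() / returns.var()   # best possible R^2, even for a perfect model
print(f"signal-to-noise ratio: {snr:.3f}, maximum achievable R^2: {r2_ceiling:.3f}")
```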
It is far easier to beat a grandmaster in chess with Machine Learning than to forecast the prices on the stock exchange even remotely well.
Israel, R. / Kelly, B. / Moskowitz, T. (2020), Can Machines "Learn" Finance?

The problems with financial market data

It is far easier to beat a grandmaster in chess with machine learning than to forecast stock market prices even remotely well. According to the authors of the paper, this is because very different conditions prevail in the financial sector: [1]

Conditions in the financial sector:

Small data environment: The amount of data in the financial markets is only apparently large. In fact, the forecast targets from which algorithms can learn are limited by the small number of reliably measurable variables - namely the observed returns. Monthly data for entire asset classes, for example, yield only a few hundred data points each. This is a tiny data set that allows only models with a handful of variables if they are to have any stability. Even for the analysis of individual stocks, where the available universe can contain hundreds of thousands of data points, it is still a small data set from a machine learning perspective. In addition, the effective number of observations is much smaller due to cross-sectional correlations between individual stocks (a minimal sketch of this effect follows Figure 1). The problem is that we cannot, as in other fields, generate an unlimited amount of new data through experiments to expand the training set. Return forecasting will therefore still be a small-data problem in 100 years' time, no matter how advanced the methods are. One exception is high-frequency trading: due to the extremely short time periods, the data frequency is very high, which allows much more heavily parameterized models. However, the corresponding strategies have very limited capacity, which is why this niche is contested by a few highly specialized players.
Figure 1) Number of principal components needed to explain 80+ percent of return variation
The graph shows the number of principal components necessary to explain at least 80 percent of the return covariance of the monthly size and value portfolios according to Fama/French.
Source: Israel, R. / Kelly, B. / Moskowitz, T. (2020), Can Machines "Learn" Finance?, p. 8
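
As a rough illustration of the calculation behind Figure 1 - my own sketch, using simulated portfolio returns in place of the Fama/French data - one can count how many principal components are needed to explain at least 80 percent of the return variation:

```python
# Sketch of the Figure 1 calculation: how many principal components explain at
# least 80 percent of the return variation of a panel of portfolios?
# Simulated returns stand in for the Fama/French size and value portfolios.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_months, n_portfolios = 600, 25
market = rng.standard_normal((n_months, 1))                      # common factor
returns = 0.8 * market + 0.3 * rng.standard_normal((n_months, n_portfolios))

pca = PCA().fit(returns)
cumulative = np.cumsum(pca.explained_variance_ratio_)
n_components = int(np.searchsorted(cumulative, 0.80)) + 1
print("components needed for 80% of the variation:", n_components)
```

Because the simulated portfolios load on a common factor, only a handful of components are needed - which is exactly the point about the low effective number of independent observations.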

Little signal, a lot of noise: The second important factor looks no better, because price movements in the markets are almost efficient. This is a direct consequence of the continuous incorporation of new information by all the players involved, which greatly reduces (but does not completely erase) the remaining predictability, especially in the short term. The volatility of returns is therefore considerably higher than the risk premiums or expected returns themselves. The high proportion of "noise" arises because, with almost efficient prices, it is mainly unexpected news and shocks that move the market. The authors write that although the signal-to-noise ratio improves over longer time horizons, this in turn reduces the number of available data points and increases the variance of forecast errors. It is therefore a matter of finding a compromise that allows both a certain signal strength and a sufficient number of observations (a rough sketch of this trade-off follows Figure 2).
Figure 2) Trade-off between signal strength and number of observations
In regressions, the classical CAPE (cyclically adjusted price-earnings ratio) was used to explain the returns of the broad, market-capitalization-weighted CRSP data set. The coefficient of determination (bars, right axis) clearly shows that explanatory power increases with longer time horizons. However, the number of observations falls considerably (line, left axis), which leads to a correspondingly higher variance of the forecast errors.
Source: Israel, R. / Kelly, B. / Moskowitz, T. (2020), Can Machines "Learn" Finance?, p. 10
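
The trade-off in Figure 2 can be sketched as follows (illustrative only; `cape` and `monthly_returns` are hypothetical input series the reader would have to supply, e.g. from publicly available Shiller data): for each horizon, forward cumulative returns are regressed on the CAPE, and both the R² and a rough count of non-overlapping observations are recorded.

```python
# Sketch of the Figure 2 trade-off: longer horizons raise R^2 but shrink the
# effective number of observations. `cape` and `monthly_returns` are hypothetical
# aligned monthly series supplied by the reader.
import numpy as np

def horizon_r2(cape: np.ndarray, monthly_returns: np.ndarray, horizon: int):
    """Regress forward cumulative returns over `horizon` months on CAPE."""
    fwd = np.array([
        monthly_returns[t:t + horizon].sum()
        for t in range(len(monthly_returns) - horizon)
    ])
    x = cape[: len(fwd)]
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, fwd, rcond=None)
    resid = fwd - X @ beta
    r2 = 1.0 - resid.var() / fwd.var()
    n_independent = len(fwd) // horizon        # rough count of non-overlapping observations
    return r2, n_independent

# Example usage (once the data series exist):
# for h in (1, 12, 60, 120):
#     print(h, horizon_r2(cape, monthly_returns, h))
```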

Further challenges

But small data and the poor signal-to-noise ratio are not the only problems that machine learning users in the financial sector have to deal with. The study by Israel / Kelly / Moskowitz identifies three additional challenges that make implementation difficult: [1]

Evolution of markets: Capital markets are dynamic and keep evolving as players adapt to new conditions. In particular, signals with a certain predictive power often spread quickly, which usually reduces their informative value permanently or makes it disappear entirely. In addition, technical progress influences very fundamental relationships such as how people interact in the markets. This is why financial markets are more complex than most other application areas for machine learning. The authors illustrate this with an apt comparison to image recognition: "Cats don't start turning into dogs as soon as the algorithm becomes too good at recognizing them in photos".
Unstructured data: Classical quantitative financial market data is available in well-prepared form, but its potential is largely exploited. In contrast, much of the new and potentially interesting alternative data is unstructured, for example a mix of text, graphics and video. In addition, the histories are comparatively short, for example for social media or geolocation data.
Need for interpretation: Some machine learning models are black boxes for which it is difficult to derive any interpretation of the learning mechanisms. However, this understanding is an important requirement in asset management - even when forecast quality is high - in order to be able to clearly communicate risks to clients and regulators. Similar challenges exist in other sensitive areas such as medicine.
Machine Learning has the potential for significant progress in the field of quantitative investments. According to the authors of the study under consideration, however, this is not a real revolution, but rather the consistent further development and automation of already applied statistical and quantitative methods.
Israel, R. / Kelly, B. / Moskowitz, T. (2020), Can Machines "Learn" Finance?

Machine Learning - Possible applications

The paper points out that the black-box problem can be avoided by structural modeling: machine learning is implemented within an overarching, theoretically sound (economic) model accepted by human experts, which allows a certain interpretation of the output. Within this defined framework, the algorithms can then act freely and identify relationships that offer potential added value.
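
One loose reading of this idea - my own sketch, not the authors' implementation - keeps the interpretable, economically motivated structure as a linear factor model and lets the machine-learning component forecast only the factor premia from observable predictors; stock-level expected returns then follow from the given factor exposures. All inputs (`factor_returns`, `predictors`, `betas`) are hypothetical.

```python
# Sketch of "structural modeling": the ML piece acts only inside a pre-specified
# factor framework. Hypothetical inputs: factor_returns (T x K), predictors (T x P)
# known one period in advance, and betas (N stocks x K factor exposures).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def forecast_expected_returns(factor_returns, predictors, betas):
    T, K = factor_returns.shape
    factor_forecasts = np.zeros(K)
    for k in range(K):
        model = GradientBoostingRegressor(max_depth=2, n_estimators=200, random_state=0)
        model.fit(predictors[:-1], factor_returns[1:, k])    # predict next period's premium
        factor_forecasts[k] = model.predict(predictors[-1:])[0]
    # The interpretable structure: expected stock returns are linear in the exposures.
    return betas @ factor_forecasts
```

The economic structure stays transparent (a factor model the human expert has signed off on), while the flexible part of the model is confined to a task whose output can still be inspected factor by factor.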

Another paper, "Empirical Asset Pricing via Machine Learning", shows that machine learning offers added value especially when complex models such as neural networks and regression trees are used. [2] The improvements achieved with machine learning are mainly due to non-linear relationships that simpler models do not capture. At the same time, however, there is a suspicion that it is precisely the complex models that discover predictability associated with high transaction costs - which could also explain why this potential source of returns has not yet been exploited by other market participants.
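
A minimal toy example - my own illustration, not from the cited study - of why such models can help: when the true relationship between a signal and returns is non-linear, a tree ensemble picks it up while a plain linear regression does not.

```python
# Toy example: a non-linear relationship that a tree ensemble captures and a
# linear regression misses. The data is simulated and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, size=(5000, 1))
y = np.sin(3 * X[:, 0]) + 0.5 * rng.standard_normal(5000)   # non-linear signal plus noise

X_train, X_test, y_train, y_test = X[:4000], X[4000:], y[:4000], y[4000:]
for model in (LinearRegression(), RandomForestRegressor(n_estimators=200, random_state=0)):
    model.fit(X_train, y_train)
    r2 = r2_score(y_test, model.predict(X_test))
    print(f"{type(model).__name__}: out-of-sample R^2 = {r2:.3f}")
```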

Another way to use machine learning is to focus on better risk assessment - especially covariances between stocks or factors - instead of returns. In this way, it would also be possible to achieve better risk-adjusted returns out of sample compared to a classic benchmark. Perhaps this is a more realistic application for machine learning, although these improvements are somewhat overshadowed by the "Holy Grail" of a true alpha strategy.
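
As a small, hedged illustration of this risk-focused route - not the paper's method, but a simple regularized estimator in the same spirit - a shrunk covariance matrix (here Ledoit-Wolf shrinkage) can be plugged into a minimum-variance portfolio:

```python
# Sketch: point the statistical machinery at risk (covariances) instead of
# expected returns. `returns` is a hypothetical T x N matrix of asset returns.
import numpy as np
from sklearn.covariance import LedoitWolf

def min_variance_weights(returns: np.ndarray) -> np.ndarray:
    cov = LedoitWolf().fit(returns).covariance_     # shrunk covariance estimate
    inv = np.linalg.inv(cov)
    ones = np.ones(cov.shape[0])
    w = inv @ ones
    return w / w.sum()                              # fully invested, unconstrained

# Example with simulated return data:
rng = np.random.default_rng(4)
print(min_variance_weights(0.02 * rng.standard_normal((500, 20))).round(3))
```

Better covariance estimates feed directly into better risk-adjusted results, without requiring any return forecast at all.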

According to the authors, another possible application is transaction cost management. In this area, there is usually sufficient data available to allow the use of well-calibrated models.
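
A simple, hypothetical example of such a calibration - my own illustration, not from the paper - is fitting a square-root market impact model to recorded execution costs, where the cost in basis points is explained by volatility and the order's participation rate.

```python
# Sketch: calibrate a simple square-root impact model, cost ~= a + b * sigma * sqrt(q),
# from hypothetical execution records (cost in bps, daily volatility, participation q).
import numpy as np

def fit_impact_model(cost_bps, sigma, participation):
    X = np.column_stack([np.ones_like(cost_bps), sigma * np.sqrt(participation)])
    (a, b), *_ = np.linalg.lstsq(X, cost_bps, rcond=None)
    return a, b

# Simulated execution data for illustration:
rng = np.random.default_rng(5)
q = rng.uniform(0.01, 0.2, 2000)           # participation rates
sigma = rng.uniform(0.01, 0.03, 2000)      # daily volatilities
cost = 1.0 + 120.0 * sigma * np.sqrt(q) + rng.standard_normal(2000)
print(fit_impact_model(cost, sigma, q))
```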

Conclusion

Machine Learning has the potential for significant progress in the field of quantitative investments. According to the authors of the study under consideration, however, this is not a real revolution, but rather the consistent further development and automation of already applied statistical and quantitative methods - in the sense of potentially faster, better and more flexible modeling. At the same time, the authors use the small-data environment as well as the signal-to-noise ratio to point out that the expectations for what the method can actually achieve should not be too high.

Structural modeling seems to be a suitable concept for implementing machine learning. Here, the method is implemented as described within a given framework - such as a simple, general factor model - and, if necessary, with the involvement of human expertise. This already indicates that humans will not be completely absent from the quant business in the foreseeable future.

[1] Israel, R. / Kelly, B. / Moskowitz, T. (2020), Can Machines "Learn" Finance?, Yale University and AQR Capital Management
[2] Gu, S. / Kelly, B. / Xiu, D. (2020), Empirical Asset Pricing via Machine Learning, Review of Financial Studies, Vol. 33, No. 5, pp. 2223-2273
