A while back we asked whether emotions matter in algorithmic trading. That article showed how our automated strategies can be imprinted with artifacts of our own human perception. Today we want to present a set of links detailing some of the limits of human perception, as well as the danger of including too much information in your analysis.
Every successful trader knows the KISS principle: building successful strategies is a game of making your models only just smart enough. For example, from a high-level standpoint it might make sense to implement a risk-management algorithm in which your strategy varies the width of its market (the average difference between its bid and ask prices) in response to overall market volatility. If you calibrate this process incorrectly, however, you could end up never getting a trade… or getting WAY too many (when you are making markets, that means the market is picking off your orders, exposing you to damaging trade flow).
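To make the idea concrete, here is a minimal sketch of a volatility-scaled quote width. Everything here is illustrative: the function names, the linear scaling rule, and the specific constants are assumptions for the example, not a production risk model.

```python
# Sketch: widen a market-making quote as realized volatility rises.
# The linear base + multiplier rule and all constants are illustrative.
from statistics import stdev


def quote_width(recent_prices, base_width=0.02, vol_multiplier=4.0,
                max_width=0.50):
    """Return a quote width that grows with recent realized volatility."""
    returns = [b / a - 1.0 for a, b in zip(recent_prices, recent_prices[1:])]
    vol = stdev(returns) if len(returns) > 1 else 0.0
    # Too narrow and volatility spikes pick off our orders; too wide and
    # we never trade. The cap keeps us quoting at all.
    return min(base_width + vol_multiplier * vol, max_width)


def quotes(mid, recent_prices):
    """Return (bid, ask) centered on the mid price."""
    half = quote_width(recent_prices) / 2.0
    return mid - half, mid + half
```

The calibration danger described above lives in `vol_multiplier` and `base_width`: set them poorly and the strategy either never trades or absorbs damaging flow.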
The point is this: when you want to add complexity to your trading strategy, make sure you have a damn good reason to do so!
The more parameters your strategy needs to function, the greater its parameter risk: the risk stemming from incorrectly estimated parameters. This can be caused either by a bad model or by the natural drift of parameters like correlation and volatility over time. Parameter risk should be weighed when selecting one trading model over another; a myopic focus on PnL or Sharpe/Sortino ratios leaves a trader open to being blindsided by a change in conditions.
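A toy illustration of parameter risk, assuming a Kelly-style sizing rule f = mu / sigma². The inputs are invented; the point is only that a modest error in one estimated parameter produces a large swing in the model's output.

```python
# Toy parameter-risk example: a Kelly-style position size is very
# sensitive to the volatility estimate it takes as input.

def kelly_fraction(mu, sigma):
    """Fraction of capital to allocate under a Gaussian-return Kelly rule."""
    return mu / sigma ** 2


true_size = kelly_fraction(mu=0.05, sigma=0.20)   # 1.25
# Underestimate volatility by 25% (0.20 -> 0.15):
misjudged = kelly_fraction(mu=0.05, sigma=0.15)   # ~2.22
# A 25% error in one parameter nearly doubles the recommended position.
```

The more such parameters a model stacks up, the more ways an estimation error like this can compound.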
This brings us to the work of Gerd Gigerenzer, Director at the Max Planck Institute for Human Development and Director of the Harding Center for Risk Literacy in Berlin. His focus is on how humans perceive risk, often highlighting situations in which you can be harmed (or otherwise act in a suboptimal fashion) when you possess too much information.
While many of these links might not appear at first glance to be related to trading, we believe that it couldn’t be more apropos for the current and future designers of automated trading strategies to be in touch with the bounds of human rationality: ultimately these form the bounds of our algorithms.
Humans and animals make inferences about the world under limited time and knowledge. In contrast, many models of rational inference treat the mind as a Laplacean Demon, equipped with unlimited time, knowledge, and computational might… the authors have proposed a family of algorithms based on a simple psychological mechanism: one-reason decision making. These fast and frugal algorithms violate fundamental tenets of classical rationality: They neither look up nor integrate all information. By computer simulation, the authors held a competition between the satisficing “Take The Best” algorithm and various “rational” inference procedures (e.g., multiple regression)… This result is an existence proof that cognitive mechanisms capable of successful performance in the real world do not need to satisfy the classical norms of rational inference.
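The "Take The Best" heuristic mentioned in the abstract is simple enough to sketch in a few lines: to decide which of two objects scores higher on some criterion, check cues in order of validity and let the first cue that discriminates decide. The cue data below (the classic which-German-city-is-larger task) is invented for illustration, including the cue ordering.

```python
# Minimal sketch of Gigerenzer's "Take The Best" heuristic: examine cues
# in order of validity and stop at the first one that discriminates.

def take_the_best(obj_a, obj_b, cues_by_validity):
    """Return the object favored by the first discriminating cue, or None."""
    for cue in cues_by_validity:
        a, b = obj_a.get(cue), obj_b.get(cue)
        if a != b:                      # first discriminating cue decides
            return obj_a if a else obj_b
    return None                         # no cue discriminates: guess


# Which German city is larger? Cues ordered by (assumed) validity.
cues = ["is_capital", "has_top_league_team", "recognized"]
berlin = {"is_capital": True, "has_top_league_team": True, "recognized": True}
bielefeld = {"is_capital": False, "has_top_league_team": False,
             "recognized": True}
# take_the_best(berlin, bielefeld, cues) stops at "is_capital": Berlin wins.
```

Note what the algorithm deliberately ignores: every cue after the first discriminating one. That refusal to integrate all available information is exactly the "fast and frugal" property the paper tests against multiple regression.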
Research on people’s confidence in their general knowledge has to date produced two fairly stable effects, many inconsistent results, and no comprehensive theory. We propose such a comprehensive framework, the theory of probabilistic mental models (PMM theory). The theory (a) explains both the overconfidence effect (mean confidence is higher than percentage of answers correct) and the hard-easy effect (overconfidence increases with item difficulty) reported in the literature and (b) predicts conditions under which both effects appear, disappear, or invert. In addition, (c) it predicts a new phenomenon, the confidence-frequency effect, a systematic difference between a judgment of confidence in a single event (i.e., that any given answer is correct) and a judgment of the frequency of correct answers in the long run. Two experiments are reported that support PMM theory by confirming these predictions, and several apparent anomalies reported in the literature are explained and integrated into the present framework.
Is the mind, by design, predisposed against performing Bayesian inference? Previous research on base rate neglect suggests that the mind lacks the appropriate cognitive algorithms. However, any claim against the existence of an algorithm, Bayesian or otherwise, is impossible to evaluate unless one specifies the information format in which it is designed to operate. The authors show that Bayesian algorithms are computationally simpler in frequency formats than in the probability formats used in previous research. Frequency formats correspond to the sequential way information is acquired in natural sampling, from animal foraging to neural networks. By analyzing several thousand solutions to Bayesian problems, the authors found that when information was presented in frequency formats, statistically naive participants derived up to 50% of all inferences by Bayesian algorithms. Non-Bayesian algorithms included simple versions of Fisherian and Neyman-Pearsonian inference.
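The frequency-format point is easy to see with a worked example. The numbers below (1% base rate, 80% hit rate, 9.6% false-alarm rate) are the classic illustrative values used in this literature, not data of ours; both computations give the same posterior, but the frequency version is the one naive reasoners tend to get right.

```python
# The same Bayesian posterior computed two ways: standard probability
# format vs. Gigerenzer's natural-frequency format.

def posterior_probability_format(base, hit, false_alarm):
    """P(H | positive) via Bayes' rule on probabilities."""
    return (base * hit) / (base * hit + (1 - base) * false_alarm)


def posterior_frequency_format(population=1000):
    """Same answer reasoned through counts of people, not probabilities."""
    with_condition = round(0.01 * population)            # 10 of 1,000
    true_positives = round(0.80 * with_condition)        # 8 test positive
    false_positives = round(0.096 * (population - with_condition))  # ~95
    return true_positives / (true_positives + false_positives)
```

In probability format the answer requires applying Bayes' rule symbolically; in frequency format it is just "8 true positives out of roughly 103 positives," which is why the authors found so many more Bayesian responses with that presentation.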
Michael Covel has two interesting podcast episodes featuring Mr. Gigerenzer that are worth a listen as well.
Want to learn how to mine social data sources like Google Trends, StockTwits, Twitter, and Estimize? Make sure to download our book Intro to Social Data for Traders