Humans behave a lot like collections of filters. We constantly form expectations, then revise them and take actions as new information arrives. We can think of our expectations as a collection of internal beliefs.
This process defines the “main loop” of the Bayesian filter. Without descending into impenetrable formulas, we can think of a filter as a looped process that holds expectations about a signal it is observing. This signal can be any stream of information. When a new update from the stream arrives, the filter compares it against its expectations and adjusts its internal state based on how far off the previous expectation was. Then it waits for new information and repeats the whole process.
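That main loop can be sketched in a few lines. This is a minimal illustration, not any particular filter: the belief is a single number, and the made-up `gain` parameter controls how strongly each surprise corrects the belief.

```python
def run_filter(stream, gain=0.3, belief=0.0):
    """Loop over a stream: compare each observation to the current
    expectation, then correct the belief in proportion to the error."""
    history = []
    for observation in stream:
        error = observation - belief      # how far off was the expectation?
        belief = belief + gain * error    # revise toward the observation
        history.append(belief)
    return history

beliefs = run_filter([1.0, 1.0, 1.0, 1.0])
# starting from 0.0, the belief creeps toward the observed signal of 1.0
```

A larger gain would snap the belief to the signal almost immediately; a smaller one would make it stubborn.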
One of the concepts that characterizes this process is often called the signal-to-noise ratio. While it can be explicitly defined in many ways, the intuition remains the same across filters and humans:
The signal-to-noise ratio (SNR) defines our relative preference for new versus old information. If the SNR is low, we are skeptical about what we observe and place more weight on our internal beliefs, even in the face of contradictory evidence. If the SNR is high, we treat new information as the truth and quickly revise our internal beliefs to match what we observe.
Of course, in practice the SNR can take values along a continuum. To make the analogy complete, I think it's appropriate to think of humans as a collection of SNRs that apply to different contexts, or “channels”. For example, an individual's SNR is often very low on channels like politics or religion. In that case, no matter what information the individual encounters, they are likely to keep believing whatever they believed before.
On the opposite end of the spectrum, we might call someone with a high SNR highly suggestible. At dinner time my SNR on the food channel is very high: any suggestion (e.g. a tasty burger) is likely to align my internal state in that direction. My SNR on the political channel might remain low whilst my ratio on the food or “what-to-do” channel is high, especially if it's the weekend.
I think this abstraction has value in a number of commercial pursuits, and something similar is probably already in use. My immediate thoughts turn to its utility for marketers, but I believe the concept also has a multitude of uses for personal study (e.g. in trading). Advertisements should be more valuable when shown to people with high SNRs on the applicable channels. How to measure the SNR for a given channel is another matter.
I would be very interested in measuring my own SNR space from a strategic perspective. I would imagine the space is dynamic over time: at certain times I am more open to new information; at others I shut out everything from outside. I believe this internal time series would correlate with the external world in actionable ways, especially with PnL.
You could implement this for yourself by tracking all of your digital interactions, but that would require building an entire collection platform and would probably not be worth it unless you shared it with everyone. In principle you could calculate your propensity to read, reshare, click through, or otherwise interact with a new piece of information and use that as a proxy for your SNR space.
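The interaction-rate proxy is just a per-channel engagement fraction: of the items you saw on a channel, how many did you actually read, reshare, or click? The event log and channel tags below are fabricated for illustration.

```python
from collections import defaultdict

# (channel, did_you_interact) pairs, as a stand-in for a real interaction log
events = [
    ("politics", False), ("politics", False), ("politics", True),
    ("food", True), ("food", True), ("food", False), ("food", True),
]

seen = defaultdict(int)
engaged = defaultdict(int)
for channel, interacted in events:
    seen[channel] += 1
    engaged[channel] += interacted

# engagement fraction per channel as a crude SNR proxy
snr_proxy = {ch: engaged[ch] / seen[ch] for ch in seen}
```

Here the food channel comes out far more "open" than politics, matching the intuition that a high interaction rate signals a high SNR on that channel.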
Facebook or Google could probably estimate SNRs right now. The principle remains the same: use your interaction rates and patterns as a proxy for the SNR space. The ubiquitous use of tagging on social media makes it easy to define channels. Do you interact with information spanning many different political viewpoints? Then your political SNR is high and you might be a swing voter. Suddenly your ad-blocker is choking to death on political ads.
Admittedly this is a half-baked idea, but it's something I've been toying with for a while. This is a first attempt at giving the idea shape and form in the external world. I would welcome feedback, comments, extensions, or even outright refutations 😉