[inspired by true events]

We have all seen it happen before: **two different outcomes following from the same input**. Einstein would have you committed for insanity, but traders know this can happen all the time.

It can happen in the physical world too. Consider the following situation: you are driving your automobile, stopped at a red light with a single car ahead of you. The light turns green and the car ahead of you remains stationary. You wait a few seconds for the car to move on its own, then *honk your horn* to get things moving. **Then one of two outcomes occurs:**

- The driver ahead begins to move
- The driver ahead puts the car in park, exits the car, points at you, and challenges you to a fight

**What’s the difference between the two situations?** To answer this we have to look at what other information we have at our disposal to create a more refined view of probability. In the first situation, suppose you are driving a staid blue Toyota sedan. In the second, you are driving a bright red German sports car.

If we were able to look at every horn-honk event that occurred on American motorways, we could generate what is known as the *unconditional* probability that a honk leads to an altercation. Suppose this probability is 0.1%, i.e. 1 out of every 1000 drivers you honk at will try to start a fight. But **we can use the eponymous Bayes’ Theorem to compute what is known as the conditional probability of an altercation:**

**P(A|B) = P(B|A) × P(A) / P(B)**

Where **P(A|B)** means the probability of the occurrence of **A** *given* the co-occurrence of **B**. In our example, **A** refers to the outcome where the driver tries to fight you, while **B** is an extra bit of information, like the fact that we were driving a red car. To calculate this probability we need to look at another conditional probability, **P(B|A)**, and the two unconditional probabilities of outcomes **A** and **B**. Thus, to compute the probability that a fight will occur when you honk your horn while driving your red car, we need to find 1) the probability (given that a fight occurred) that the honking driver’s car was red, 2) the probability that any honk starts a fight, and 3) the probability of any car being red.
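The three ingredients plug straight into the formula. Here is a minimal sketch in Python; the figures for P(B|A) and P(B) are hypothetical (the post only gives the 0.1% base rate), chosen so the arithmetic works out to a round number:

```python
def bayes(p_b_given_a: float, p_a: float, p_b: float) -> float:
    """Bayes' Theorem: P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Hypothetical inputs: 5% of honk-induced fights involved a red-car honker,
# the unconditional fight rate is 0.1%, and 0.5% of cars on the road are red.
p_fight_given_red = bayes(p_b_given_a=0.05, p_a=0.001, p_b=0.005)
print(p_fight_given_red)  # ~0.01, i.e. 1%
```

Note that the scarce-but-telling evidence (a rare car color that shows up disproportionately in fights) is exactly what inflates the conditional probability above the 0.1% base rate.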

Thus conditional probability offers a more focused look at our sample space: we restrict our space to outcomes which occur subject to a certain set of criteria. For example, suppose we calculate that the probability of a honk-induced altercation when you are driving the blue Toyota is 0.05%, i.e. 1 out of every 2000 drivers you honk at from the Toyota will throw their car into park and comically walk toward your vehicle with an angry look in their eyes. **Now suppose that we are able to calculate the probability of an altercation when you’re driving the candy red sports car**: it’s 1%, or 1 out of every 100 people tries to pick a fight when you honk.

As traders we often find ourselves in the role of probabilists: we want to obtain the a priori probability distribution for an event. **We compute conditional probabilities oftentimes without realizing it** or making the term explicit. Some examples include *Insuring tomorrow’s decline, today*, which looked at the distribution of returns conditional on the occurrence of two events in $SPX and $VIX, and *Predicting a stock’s daily trading volume from the market open*, which looked at the distribution of total daily volume conditional on the first period’s trading volume.

Check out the Beta release of SliceMatrix: a unique tool for visualizing the stock market, including views of filtered correlation networks and minimum spanning trees

**Want to learn how to mine social data sources** like Google Trends, StockTwits, Twitter, and Estimize? Make sure to download our book Intro to Social Data for Traders


Lead image licensed under CC BY 2.0 from Calm Vistas
