Particle Swarm Optimization

In general, traders like ecological analogies for the markets. Traders live in a world where they eat what they kill. Strategies live, die and evolve in an almost biological sense. Successful strategies breed like animals as information leaks from desk to desk. Market crashes are described as cleansing fires, clearing out the deadwood and making way for upstarts to take their place.

Thus it should come as no surprise that one of our favorite optimization routines was inspired by watching a flock of birds.

The basic idea is iterative and simple. Suppose you have a strategy with two parameters, such as the increment and max position of the market making strategy from Hello, Market Maker!

You want to offload the process of setting these values to an algorithm because, as a human, your spirit is willing but your brain is spongy and bruised from years of trading the markets. Ideally, you want your algorithm to pick the best strategy. How can we do this?

The first step, of course, is to explicitly define what “best” means. To keep things simple, let’s define the best strategy as the one that makes the most money over our training dataset.

One way to accomplish this is simply to try every possible combination of increment and max position until you find the strategy with the best PnL. This is known as brute-force search, and it is only feasible when your parameter space is small and computation is cheap.
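
As a concrete sketch, brute force is just a pair of nested loops over the candidate values. Here backtest_pnl is a hypothetical stand-in for whatever runs the strategy over the training data and returns its PnL:

    def brute_force_search(backtest_pnl, increments, max_positions):
        # Try every (increment, max_position) pair and keep the best PnL
        best_params, best_pnl = None, float("-inf")
        for increment in increments:
            for max_position in max_positions:
                pnl = backtest_pnl(increment, max_position)
                if pnl > best_pnl:
                    best_params, best_pnl = (increment, max_position), pnl
        return best_params, best_pnl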

In most practical examples, brute force is out of the question: either 1) the parameter space is too large, or 2) the objective function takes too long to evaluate. In our example, calculating the PnL of an individual strategy takes a nontrivial amount of time to complete, so we must search for cleverer optimization techniques.

This is where the birds come into the picture. (Way) Back in 1995, James Kennedy and Russell Eberhart published their original paper on the subject, appropriately titled:

Particle Swarm Optimization

A concept for the optimization of nonlinear functions using particle swarm methodology is introduced. The evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed. Benchmark testing of the paradigm is described, and applications, including nonlinear function optimization and neural network training, are proposed. The relationships between particle swarm optimization and both artificial life and genetic algorithms are described.

Intuitively, imagine our search space as a 2-dimensional grid, where the x-axis represents the first parameter and the y-axis the second. The PSO algorithm begins by initializing a random field of points (known as particles) within this search space. It then evaluates the objective function for each particle and updates each particle’s position in the search space according to a system of transition equations:

  • x_t = x_{t-1} + 2 * rand() * (x_best - x_{t-1}) + 2 * rand() * (x_local - x_{t-1}) , and
  • y_t = y_{t-1} + 2 * rand() * (y_best - y_{t-1}) + 2 * rand() * (y_local - y_{t-1})

where

  • (x_t, y_t) is the particle’s new position, with (x_{t-1}, y_{t-1}) its current position
  • rand() is a uniform random variable between 0 and 1, sampled fresh for each term
  • (x_best, y_best) is the swarm’s shared memory of the best global particle position. In other words, this is the best set of parameters we have encountered so far.
  • (x_local, y_local) is each particle’s own memory of the best position it has personally visited
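
In code, the transition equations are a two-liner per particle. A minimal sketch, using Python’s standard random module (note that rand() is sampled fresh for each term):

    import random

    def move_particle(x, y, x_best, y_best, x_local, y_local):
        # Transition equations: pull the particle towards the global and local bests
        x_new = x + 2 * random.random() * (x_best - x) + 2 * random.random() * (x_local - x)
        y_new = y + 2 * random.random() * (y_best - y) + 2 * random.random() * (y_local - y)
        return x_new, y_new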

Put simply, each iteration we:

  1. Evaluate the objective function for each particle
  2. Find the best particle for the iteration and compare against (x_best, y_best), updating if needed
  3. For each particle, compare the output of the objective function against (x_local, y_local), updating if needed
  4. Move each particle in a random way towards a blend of the global and its own local best positions (see the sketch after this list).
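
Putting the four steps together, a bare-bones PSO loop might look like the sketch below, reusing the move_particle helper from above. Everything here is illustrative: backtest_pnl is again our hypothetical objective function, and the swarm size and iteration count are arbitrary defaults:

    import random

    def pso_maximize(objective, x_range, y_range, n_particles=30, n_iters=50):
        # Initialize a random field of particles within the search space
        particles = [(random.uniform(*x_range), random.uniform(*y_range))
                     for _ in range(n_particles)]
        local_best = list(particles)              # each particle's own best position
        local_vals = [objective(x, y) for x, y in particles]
        g = max(range(n_particles), key=lambda i: local_vals[i])
        global_best, global_val = local_best[g], local_vals[g]  # shared swarm memory

        for _ in range(n_iters):
            for i, (x, y) in enumerate(particles):
                val = objective(x, y)             # step 1: evaluate the objective
                if val > global_val:              # step 2: update the global best
                    global_val, global_best = val, (x, y)
                if val > local_vals[i]:           # step 3: update this particle's best
                    local_vals[i], local_best[i] = val, (x, y)
            for i, (x, y) in enumerate(particles):
                # step 4: move towards a random blend of global and local bests
                particles[i] = move_particle(x, y, *global_best, *local_best[i])
        return global_best, global_val

    # e.g. best_params, best_pnl = pso_maximize(backtest_pnl, (0.01, 1.0), (1, 100))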

We can visualize it like so:

[Animation: the particles scatter across the search space and cluster around the best parameters over successive iterations]

Python has a module for PSO called pyswarm, which provides a simple interface to this optimization technique. As of right now it is single-threaded, so it can take a long time to run to completion.
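
Usage looks roughly like the following. pyswarm’s pso function minimizes, so we negate the PnL; the bounds below are purely illustrative, and backtest_pnl is again our hypothetical objective:

    from pyswarm import pso

    def neg_pnl(params):
        # pso() minimizes, so negate PnL to turn "most money" into a minimum
        increment, max_position = params
        return -backtest_pnl(increment, max_position)

    lb = [0.01, 1]    # lower bounds on (increment, max_position) -- illustrative
    ub = [1.00, 100]  # upper bounds -- illustrative
    xopt, fopt = pso(neg_pnl, lb, ub, swarmsize=50, maxiter=100)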

If there is decent interest in this method of optimization, we will take the time to open-source some Python PSO code that utilizes multiple cores in a cluster environment, which we have found useful for parameter optimization in the real world.








Correction: We made a mistake in our post Hello, Market Maker! We accidentally swapped two prices, which understated the effect of the drawdown. We have since rectified the error and pushed a corrected version of Runover to GitHub.


Lead image licensed from Tony Hisgett under CC BY 2.0
