There is no guarantee that our strategies will have the same performance in the future. Some may perform worse and some may perform better. We use backtests to compare historical strategy performance, but there are no guarantees that this performance will continue in the future. Trading futures is extremely risky. If you trade futures live, be prepared to lose your entire account. We recommend using our strategies in simulated trading until you/we find the holy grail of trade strategy.
This is the 20th post in a weekly series called the Mudder Report. The report is dedicated to tracking our automated strategies at the weekly level. We’re taking a break from the original goal of the report to conduct a backtest audit. To read more about the process and framework for evaluation, click here.
Slippage
In the last Mudder Report I discussed the concept of slippage. If you think of a backtest as a kind of forecast, slippage tells you how close your ‘actuals’ were to the forecast. It is especially prevalent when:
the average number of trades per day is high
the average time in the market is very low (less than 15 minutes)
there's a high degree of volatility
you are trading more than two or three contracts per trade
the average net income per trade is less than $20
Some traders think all slippage is bad, but in the same way that a forecast variance can be good or bad, so can slippage. Favorable slippage is when the actual trade performed better than the forecast; likewise, unfavorable slippage is when the actual trade performed worse.
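To make the forecast analogy concrete, here is a minimal sketch of per-trade slippage, treating the backtest result as the forecast and the live fill as the actual. The function name and dollar figures are illustrative, not from the report; positive values are favorable, negative values unfavorable.

```python
def trade_slippage(backtest_pnl: float, actual_pnl: float) -> float:
    """Return slippage in dollars: actual result minus backtest forecast.

    Positive = favorable (actual beat the forecast).
    Negative = unfavorable (actual underperformed the forecast).
    """
    return actual_pnl - backtest_pnl

print(trade_slippage(25.0, 32.5))   # favorable: 7.5
print(trade_slippage(25.0, 12.0))   # unfavorable: -13.0
```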
Some of you have told me that I should account for slippage in our strategies by adding 2 to 4 ticks to every backtest, but we believe this is the wrong approach because every strategy is different. Adding 2 to 4 ticks of slippage to each strategy may give you a margin of error to operate within, but it doesn't really tell you anything about how the strategy will perform live.
Our goal over the next two months is to conduct an audit on each strategy. Each strategy has its own ‘variance-to-actual’ or slippage. At the end of this audit each strategy will have its own backtest risk score and that score will be a function of slippage. The lower the score, the better.
The best score is 1, which means actual trades are always the same as or better than the backtest. By contrast, a strategy with a score of 10 means the backtest is misleading and actual results showed substantial unfavorable slippage. For example, a strategy with a 1.50 profit factor in the backtest and a 0.86 in actual live trading would have a score of 10.
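The report does not publish the exact scoring formula, but the scale above can be sketched as a mapping from profit-factor degradation onto 1–10. The function, its threshold, and the linear scaling are all assumptions chosen so that matching-or-better results score 1 and the 1.50 → 0.86 example scores 10.

```python
def backtest_risk_score(backtest_pf: float, live_pf: float,
                        worst_case_drop: float = 0.4) -> int:
    """Hypothetical 1-10 backtest risk score from profit-factor slippage.

    Assumption: a relative drop of 40% or more in profit factor earns
    the maximum score of 10; smaller drops scale linearly from 1.
    """
    if live_pf >= backtest_pf:
        return 1  # actuals matched or beat the backtest: minimal risk
    relative_drop = (backtest_pf - live_pf) / backtest_pf
    # Scale the drop linearly onto 1..10 and cap at 10.
    return 1 + round(9 * min(relative_drop / worst_case_drop, 1.0))

print(backtest_risk_score(1.50, 1.55))  # favorable slippage -> 1
print(backtest_risk_score(1.50, 0.86))  # the example from the text -> 10
```

Under these assumptions the score degrades smoothly between the two endpoints, so moderate slippage lands in the middle of the range rather than at either extreme.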
Our hope is that adding another dimension to the testing process will allow us to narrow our focus on those strategies with the highest potential. In the same way that provenance can tell you if you’re looking at a true work of art or a forgery, these audits can tell us if the intrinsic value of our strategies is real or fake.
Click here for a link to all strategies.
What’s the best way to approach this going forward?
The audit is based on the degree of positive or negative slippage for each strategy. We are currently in the process of running our automated trade strategies on three different platforms:
the NT8 backtest engine
a simulated live brokerage account run on a virtual server located in Chicago
Collective2, a third party website for copy-trading
The smaller the price discrepancy between all three, the lower the backtest risk and therefore the score. Likewise, the more favorably the Collective2 and simulated live results compare to the backtest, the lower the backtest risk and therefore the score. In other words, a score of 1 is indicative of a strategy with minimal or favorable backtest risk, while a score of 10 is indicative of a strategy with highly unfavorable backtest risk.
I provided an overview of our audit process in the last Mudder Report, so I won't review that process here, but I hope to share the results of the full audit in the May update that goes out to everyone. Meanwhile, these are the results for the first eight backtest audits: