Important: There is no guarantee that these strategies will have the same performance in the future. I use backtests to compare historical strategy performance. Backtests are based on historical data, not real-time data, so the results shared are hypothetical, not real. Even with forward tests, there is no guarantee that performance will continue in the future. Trading futures is extremely risky. If you trade futures live, be prepared to lose your entire account. I recommend using these strategies in simulated trading until you/we find the holy grail of trading strategies.
Housekeeping:
First, I want to say thank you to everyone for all the well wishes. Due to the delay, all subscribers have received one month (not two weeks, as previously stated) added to their subscriptions. If you did not receive the comped subscription, please let me know.
Second, this will be the last strategy published through ATS. The focus going forward will be on research and improving the strategies that have already been published. That research will be shared through ATS Research and applied to strategies within ATS. To read more about the transition, click here.
Please let me know if you have any questions.
Gentlemen should not waste their time on trivial games -- they should study Go.
-- Confucius, The Analects, ca. 500 B.C.E.
The first game mastered by AI was tic-tac-toe in 1952. Checkers followed in 1994. Then, in 1997, Deep Blue beat Garry Kasparov at chess. Next in line was the game of Go.
The game of Go originated in China over 2,500 years ago. The rules of the game are deceptively simple: place black or white stones on the board to capture the opponent's stones or surround empty space to create territory.
Described as requiring more intuition and “feel” than chess, creating an algorithm that could beat a human at Go was considered the pinnacle, or ‘holy grail’, of AI research. With more possible board configurations than atoms in the universe, the game could not be brute-forced; the challenge demanded a fundamentally different approach.
As the team at Google DeepMind described it: “Traditional AI methods—which construct a search tree over all possible positions—don’t have a chance in Go. So when we set out to crack Go, we took a different approach. We built a system, AlphaGo, that combines an advanced tree search with deep neural networks. These neural networks take a description of the Go board as an input and process it through 12 different network layers containing millions of neuron-like connections.”
The team trained the system on 30 million moves from games played by human experts until it could predict the human move 57% of the time. The next step was training it against itself, in a trial-and-error process known as ‘reinforcement learning’.
In DeepMind’s words: “We created AlphaGo, an AI system that combines deep neural networks with advanced search algorithms. One neural network — known as the ‘policy network’ — selects the next move to play. The other neural network — the ‘value network’ — predicts the winner of the game.”
Both networks evolve continuously based on the last move played. In the same way that conditionals define the ‘policy framework’ of an automated trading strategy, and statistics define the value of those conditionals, AlphaGo pairs a conditional set built on ‘deep’ knowledge with a prediction model that evaluates and applies that knowledge.
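To make that analogy concrete, here is a minimal NinjaScript (C#) sketch, entirely hypothetical and not Strategy 82: the conditionals play the role of the ‘policy’ (they choose the next action), while a simple trend-strength statistic plays the role of the ‘value’ check (it decides whether acting is worthwhile at all). The indicators, periods, and threshold are placeholders of my own choosing.

```csharp
// Hypothetical sketch only -- NOT Strategy 82. The indicators, periods,
// and threshold below are placeholders chosen purely for illustration.
using NinjaTrader.Cbi;
using NinjaTrader.NinjaScript;
using NinjaTrader.NinjaScript.Indicators;
using NinjaTrader.NinjaScript.Strategies;

public class PolicyValueSketch : Strategy
{
    protected override void OnStateChange()
    {
        if (State == State.SetDefaults)
        {
            Name      = "PolicyValueSketch";
            Calculate = Calculate.OnBarClose; // evaluate once per closed bar
        }
    }

    protected override void OnBarUpdate()
    {
        if (CurrentBar < 20)
            return; // not enough bars yet for the 20-period average

        // "Value" check: a statistic that scores whether acting is worthwhile.
        // Here, only trade when ADX says the trend is strong.
        bool worthActing = ADX(14)[0] > 25;

        if (!worthActing || Position.MarketPosition != MarketPosition.Flat)
            return;

        // "Policy" conditionals: choose the next move.
        if (Close[0] > SMA(20)[0])
            EnterLong();
        else if (Close[0] < SMA(20)[0])
            EnterShort();
    }
}
```

In AlphaGo terms, the ADX filter is a (very crude) stand-in for the value network and the price-versus-average conditionals stand in for the policy network; the point is the separation of ‘what to play’ from ‘is it worth playing’.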
Finally, AlphaGo was ready for a live game against a human. The team at Google invited Fan Hui, the reigning three-time European Go champion, to Google’s London office for a challenge match. Here’s a video from that event:
The two played five games, and AlphaGo won all five. The results were subsequently published in Nature.
Next up was the legendary match in Seoul against Lee Sedol, the top Go player in the world at the time. Another five games were played, but this time AlphaGo did not win them all. As the story goes, Sedol played “Move 78” (aka “God’s Touch”) in game four. The move had a 1 in 10,000 chance of being played, and it allowed Sedol to win the game. But that’s not the full story. What’s often left out is “Move 37” in game two.
“Move 37” was played by AlphaGo in game two, and it, too, had a 1 in 10,000 chance of being played. The move was described as not only pivotal, but creative. Could it be that AlphaGo’s Move 37 in game two inspired Sedol’s “God’s Touch” in game four?
I believe the answer is yes, because playing against AI has improved my own chess rating. Every morning I play an AI called ‘Magnus’, modeled on the Norwegian chess Grandmaster Magnus Carlsen, whose peak rating of 2882 is the highest in history.
I hate Magnus (the bot, not the person). It is impossible to beat and says the most insufferable things, but my gameplay improves every time I play against it. I wonder if we can use the same logic to help us on our hunt. Perhaps NinjaTrader’s generative AI tool (AI Generate) can help us Frankenstein our very own holy grail?
To that end, I’ve been forward testing AI-generated strategies since January 1, 2024. Looking at which strategies have performed best, there IS a discernible pattern: the best performers all use one tool/technique that I’ve never used in any of the 80+ strategies published prior to this one. I wondered if I could use the same technique to improve every strategy in the ATS portfolio. I was certainly going to use it to create Strategy 82.
These are the backtest/optimization results of Strategy 82 over the last year:
My apologies for the small font; please click on the picture to enlarge it. Note to subscribers: please scroll down to view the parameters used to achieve these results.
This Strategy 82 portfolio has a total profit factor of 5.12 and a net profit of $112K on 371 trades. But what I like most about this strategy is the ultra-low MAE (maximum adverse excursion) across instruments, on both short and long positions. Low MAE has become a desirable attribute after a few bad trades in the Live Test. Those ‘bad trades’ were caused by cancelled orders due to overlapping trade signals on the same instrument (NQ). As much as I like NQ for its volatility, it can’t be used to run multiple strategies. And as much as I like volatility, I’m not a fan of whipsaw prices, unless I can take advantage of them.
The challenge is finding a strategy that can perform well across multiple instruments AND help reduce this whipsaw effect by taking profits often. What I’ve noticed is that certain AI-generated strategies do this far better than others. These strategies all have one thing in common, and I used that attribute to create Strategy 82.
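As an aside for readers who script their own strategies: ‘taking profits often’ can be expressed in NinjaScript with a tight profit target paired with a protective stop, which is also what keeps MAE small. The sketch below is illustrative only; the tick values are placeholders of my own, not Strategy 82’s parameters.

```csharp
// Illustrative only -- these tick values are placeholders, NOT the
// parameters behind Strategy 82's results. This override goes inside
// a NinjaScript strategy class.
protected override void OnStateChange()
{
    if (State == State.Configure)
    {
        // Tight target: bank profits quickly so whipsaw works for us...
        SetProfitTarget(CalculationMode.Ticks, 20);

        // ...and a hard stop that caps how far any trade can run against
        // us, which is what keeps maximum adverse excursion (MAE) low.
        SetStopLoss(CalculationMode.Ticks, 30);
    }
}
```

NinjaTrader submits the target and stop automatically whenever the strategy opens a position, so every trade exits either at a small, frequent profit or at a bounded loss.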
Strategy 82 Description, Command Structure & Download (C#): Inspired by AI
Strategy 82 uses an indicator AND an attribute that I’ve never used before. They are: