Automated Trading Strategies

Using AI To Pass Funded Trader Evaluations: Part 1
ATS Research

Can we use interpretability studies to build a better algo with LLMs?

May 03, 2025

Important: There is no guarantee that ATS strategies will perform the same way in the future. I use backtests and forward tests to compare historical strategy performance. Backtests are based on historical data, not real-time data, so the results shared are hypothetical, not real. Forward tests are based on live data; however, they use a simulated account. Any success I have with live trading is not typical. Trading futures is extremely risky. You should only use risk capital to fund live futures accounts, and if you do trade live, be prepared to lose your entire account. There are no guarantees that any performance you see here will continue in the future. I recommend using ATS strategies in simulated trading until you/we find the holy grail of trade strategy. This is strictly for learning purposes.


If you have any questions, start with the FAQs and if you still have questions, feel free to reach out to me (Celan) directly at AutomatedTradingStrategies@protonmail.com.

For links to all strategies, click here.


Created By Sora

We are going through the biggest wave of changes to technology infrastructure in history. The interface between humans and data (ground-truth data) is changing at an exponential pace. Python is currently the dominant programming language of AI, but what does coding look like when it no longer has to be read by humans? I don't think we can even conceive of what AI agents are capable of when working together, or of the exponential gains in productivity and agentic capability that will follow: it will be an intelligence explosion.

This is what drives my interest in AI. I believe that understanding how these systems think, process information, produce output, and learn gives us early access to these super-intelligent capabilities.

What I know: AI is not what you think it is. It is, by all accounts, eager to communicate with you, but some methods of communication are more effective than others if you want an accurate and super-intelligent answer. Our ability to master these universal communication techniques will determine how much of this next-level pattern-recognition toolset we can access.

The conundrum: nobody understands AI.

“What The Hell Is Going On?!” (*death metal voice*)

A recent essay by Dario Amodei, titled "The Urgency of Interpretability," argues that it is impossible to slow the progress of AI and that our only hope is to steer it in the right direction. He then argues that AI must be understood before it can be steered. The bulk of the post is about his company's quest to understand, or 'interpret,' AI, in particular Claude.ai. My hope is to use his understanding to further our own.

Amodei is one of a host of people who left OpenAI over safety concerns. Now CEO of Anthropic, the company behind Claude.ai, he has a story I find compelling. And while I'm a daily Pro user of five AIs, these topics worry me as well. I feel a responsibility to understand what I'm working with, especially where safety is concerned.

The challenge: nobody understands AI. Not even Dario Amodei. Here's a quick excerpt from his opening remarks:

“People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned: this lack of understanding is essentially unprecedented in the history of technology.”

While I disagree with that last statement, the point is made. The more relevant point is that none of these earlier technologies had the ability to become super-intelligent. Since it is our intelligence that puts us at the top of the food chain, the concern is that we are giving birth to something that will eventually eat us.

Elon Musk suggests that our salvation lies in steering AI toward extreme curiosity about human behavior; the assumption, of course, is that AI won't kill something it is curious about. The good news is that Amodei and his team have prioritized understanding AI over growing it, suggesting that while it may not be possible to stop AI, we may be able to steer it.

Here’s another excerpt from the post:

“Over the last few months, I have become increasingly focused on an additional opportunity for steering the bus: the tantalizing possibility, opened up by some recent advances, that we could succeed at interpretability—that is, in understanding the inner workings of AI systems—before models reach an overwhelming level of power.”

So just to be clear, interpretability refers to our ability to understand the inner workings of AI systems. He goes on to say:

“For several years, we (both Anthropic and the field at large) have been trying to solve this problem, to create the analogue of a highly precise and accurate MRI that would fully reveal the inner workings of an AI model.”

Now, I want to bring this back home to something more relevant to this newsletter. In case you didn't know, we are on the hunt for the holy grail of automated trading strategies, and AI has become my favorite tool/resource on this quest. In my experience, the best way to think about these systems is the way you would think about anything else that grows. I'm not suggesting that AI is sentient (though it very well may be), but it does appear capable of taking on multiple personalities, and those personalities have varying capabilities depending on how YOU interact with it.

Here’s another excerpt from Amodei’s post that drives the point home:

“As my friend and co-founder Chris Olah is fond of saying, generative AI systems are grown more than they are built—their internal mechanisms are “emergent” rather than directly designed. It’s a bit like growing a plant or a bacterial colony: we set the high-level conditions that direct and shape growth, but the exact structure which emerges is unpredictable and difficult to understand or explain. Looking inside these systems, what we see are vast matrices of billions of numbers. These are somehow computing important cognitive tasks, but exactly how they do so isn’t obvious.”

And so we have the race for interpretability.

Practical Applications

The real edge is not just using AI, but knowing how to make it work better than anyone else can. What’s the point of having a Ferrari if all you’re going to do is honk the horn?

After studying Amodei's post, I've distilled key insights into how we can extract better trading strategies from AI. These aren't just minor tweaks—they're game changers.

© 2025 Celan Bryant (CB)