AI Portfolio Management: Exclusive Interview with Bert Mouler

As a principal researcher and chief designer of the intelligent portfolio management software used to manage funds at Profluent Capital, Bert Mouler has built a business model founded entirely on algorithmic automation. Bert and his team engage in a variety of strategies, including statistical arbitrage in stocks, market making in bitcoin (BTC) and other cryptocurrencies, directional trading in global markets, and much more. Bert is one of the few who has developed a completely integrated, artificially intelligent portfolio management system.

Bert shared some insights on the algorithmic trading field in an interview covering system validation, backtesting, and more. What we gathered is valuable both to experienced algo traders and to those looking to enter the field. Be sure to check out his Twitter page @BMouler and http://www.profluentcapital.com/

Kurt: In terms of validating news, how does the system discern credible information from fake news?

Bert: That’s an interesting topic. What is fake news? Is it news that’s not true or news that doesn’t move the market? There could be a piece of news that’s not true, but it may still move the market. So in my mind, that’s not fake news. As a regular non-trading human I would call it fake news, but since our goal is to make money, it’s real news. It’s a signal that correlates with a future directional movement in the price or volatility of whatever we are trading. In this regard, the machine learning framework that we use has the ability to discern “fake” news.

What is fake news? Is it news that’s not true or news that doesn’t move the market?
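Bert’s framing — a headline matters only if it correlates with a subsequent move — suggests a simple labeling scheme. The sketch below is a hypothetical illustration of that idea, not Profluent’s actual framework: headlines are labeled by the direction of a made-up forward return rather than by their truthfulness, and a basic text classifier learns the association.

```python
# Minimal sketch (not Profluent's framework): treat news as a signal by labeling
# each headline with the subsequent price move, not with whether it is "true".
# Assumes scikit-learn is installed; headlines and returns below are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

headlines = [
    "Company X beats earnings expectations",
    "Regulator opens probe into Company X",
    "Company X announces share buyback",
    "Company X CEO resigns unexpectedly",
]
# Hypothetical forward returns over some horizon after each headline.
forward_returns = [0.021, -0.034, 0.015, -0.027]

# The label is "did the market move up afterwards?" -- truthfulness is irrelevant.
labels = [1 if r > 0 else 0 for r in forward_returns]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(headlines)

model = LogisticRegression()
model.fit(X, labels)

# A new headline is scored only by its association with future direction.
new = vectorizer.transform(["Company X beats revenue forecasts"])
print(model.predict_proba(new))  # probability of a downward / upward move
```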

Kurt: So what process do you go through to validate a trading system?

Bert: Obviously, some of it is proprietary, but I can share some of it. We do everything from adding noise to the data, to Monte Carlo simulation, to testing the systems on different data. Let’s say, in terms of correlation, if you’re doing equities, you could validate on equities in the same sector, or you can just use the S&P or some other similar asset. Basically, that gets you more data. Right? That’s the whole point: more new data, more stuff you haven’t seen. Unfortunately, the hidden problem is that people think these correlations persist indefinitely. Good results on one time series are likely to produce good results on a correlated one even when the algorithm is overfit, and then, in the future, they’re not actually going to produce those results.

To deal with that (and with overfitting in general), we try to minimize the complexity of the algorithms we use. Our best strategies have two parameters and a thousand, or even ten thousand, trades out-of-sample. When you have two parameters and ten thousand trades out-of-sample, it’s very hard to overfit and you have very high statistical significance. Not many people are comfortable doing exactly what we do, which is to say, “if statistically it works, we should trade it”. In my experience, people argue that they should understand what it is that they’re trading, and people believe that they understand, which is a huge problem. In fact, not only do they not understand, but there are shocks from government regulation or new technology that can break down these relationships that everyone “knows” (which, obviously, they don’t).

Let’s say everyone knows that gold and silver are interrelated and they trade the two as a pair against each other. Then silver drops 80% while gold doesn’t move, sending the spread to historic lows. How can that happen? Maybe someone invents a new extraction or purification technology. Maybe a silver meteor arrives from outer space. That would really change the dynamics of the relationship of silver to gold. Right?
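One of the validation ideas Bert mentions — re-running a backtest on noise-perturbed copies of the data — can be sketched in a few lines. Everything below (the toy two-parameter moving-average strategy, the synthetic price path, the noise level) is an illustrative assumption, not Profluent’s method; the point is only that a fragile edge tends to collapse under small perturbations.

```python
# Hedged sketch: Monte Carlo validation by adding noise to the data and re-testing.
import numpy as np

rng = np.random.default_rng(0)

def backtest_ma_cross(prices, fast=10, slow=50):
    """Toy two-parameter strategy: long when the fast MA is above the slow MA.
    Returns an annualized Sharpe-like score of the resulting daily returns."""
    fast_ma = np.convolve(prices, np.ones(fast) / fast, mode="valid")
    slow_ma = np.convolve(prices, np.ones(slow) / slow, mode="valid")
    n = min(len(fast_ma), len(slow_ma))
    position = (fast_ma[-n:] > slow_ma[-n:]).astype(float)[:-1]  # yesterday's signal
    rets = np.diff(np.log(prices[-n:]))
    strat_rets = position * rets
    return strat_rets.mean() / (strat_rets.std() + 1e-12) * np.sqrt(252)

# Synthetic price path standing in for real data.
prices = 100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, 2000)))

base = backtest_ma_cross(prices)

# Monte Carlo over noise: jitter each price by a small random factor and re-test.
noisy_scores = []
for _ in range(200):
    noisy = prices * (1 + rng.normal(0, 0.002, len(prices)))
    noisy_scores.append(backtest_ma_cross(noisy))

print(f"base score {base:.2f}, noisy mean {np.mean(noisy_scores):.2f}, "
      f"noisy 5th pct {np.percentile(noisy_scores, 5):.2f}")
# If the 5th percentile collapses relative to the base score, the edge is fragile.
```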

Kurt: So do you think part of the issue is people trying too hard to understand what really moves these markets and to be right, as opposed to just trusting the validation they see and trading off that?

Bert: Absolutely, there’s no being right. The reason the price fell is that there was more supply than demand; it’s not because “Trump said” and it’s not because “Yellen said”. Thinking anything other than that is a mistake, because there’s no way for you to know the true causal chain as a human being (some would argue it is intrinsically unknowable). You’re making a series of very strong and specific assumptions when you ask questions like “why did this happen to the market price”. You’re assuming access (or the possibility of access) to information you don’t have, which is how you lose money in general. We make our money by being closer to reality than other people. Other people are making extra assumptions, like the US stock market is going to go up forever, or Ethereum and Bitcoin are going to be cointegrated because they’re both cryptos. First of all, everything can and will change; you don’t know that the relationship you’re trading on holds true into the future. There are a million narratives that explain the phenomena you’re seeing, and there is no information or methodology that would enable you to pick one of those narratives and say that it’s really the truth. I mean, it’s literally impossible, because to know why the market price did what it did, you would basically have to know what was in the mind of every market participant.

First of all, everything can and will change; you don’t know that the relationship you’re trading on holds true into the future.

Kurt: Let’s say we have a trader who created an algorithm and they’re happy with the results they got in testing. Is it wise for them to put it to work in the market, given the risk that comes with a lack of diversification?

Bert: There are several parts to it, in my opinion. First off, I want to say that in my experience most people are looking for machine learning methods for alpha generation when instead they should be looking for methods of validation. You can get whatever results you want from a backtest. On the other hand, if you know that the 0.1 Sharpe of your in-sample backtest is definitely going to be the same 0.1 Sharpe out-of-sample, then you’re in a good place. That is much more important than having a really high Sharpe (or whatever your objective function is) on the backtest. If you can combine a hundred or a thousand or ten thousand different low-Sharpe systems, you can get a very high Sharpe portfolio.

You’ve got to think of the inefficiencies; there are all these people like me using advanced technologies, advanced techniques and lots of computing power to search the entire solution space of possible strategies. It is obviously harder to develop strategies with a higher Sharpe (assuming that Sharpe is correlated with predictive power, not just frequency). However, if you do happen to develop and properly validate a high-Sharpe strategy, it is more likely to actually perform well in the future because of the gradual performance degradation that exists in general (as inefficiencies get exploited). This is actually the reason why we don’t really use continuous indicators: we have found performance degradation to be much less significant when discrete patterns are used.

To get back to your question of whether they should unleash the system on the market: basically, it all comes down to risk. Many people say this is risky or this is not risky. Forex is risky, options are risky, futures are risky because they have leverage. The risk is not about the instrument; the risk is about your bet size. If a Forex pair moves up and down a thousand percent but I bet one millionth of a percent of my portfolio on it, there’s not going to be much of an issue. But if it goes up and down one percent and I bet one thousand times my portfolio on it, then it is going to be a huge issue. Also, you can’t make assumptions about correlations and relationships; people think they are going to exist forever, and it doesn’t work like that. We underestimate the severity and the frequency of changes in the fundamental dynamics and structure of our reality.
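The arithmetic behind combining many low-Sharpe systems is worth seeing once: for N roughly uncorrelated strategies each with Sharpe S, an equal-weight portfolio has a Sharpe of roughly S·√N. The simulation below uses made-up, independent return streams to illustrate that relationship; real strategies are correlated, which is exactly the assumption Bert warns about.

```python
# Rough illustration of the "many low-Sharpe systems" point with hypothetical numbers.
import numpy as np

rng = np.random.default_rng(1)
n_strategies, n_days = 100, 2520          # ~10 years of daily returns
daily_sharpe = 0.3 / np.sqrt(252)         # each system has annual Sharpe ~0.3

# Independent strategy return streams, each with the same modest edge.
rets = rng.normal(loc=daily_sharpe * 0.01, scale=0.01, size=(n_days, n_strategies))

single = rets[:, 0]
portfolio = rets.mean(axis=1)             # equal-weight combination

ann = lambda r: r.mean() / r.std() * np.sqrt(252)
print(f"single-system Sharpe ~{ann(single):.2f}, portfolio Sharpe ~{ann(portfolio):.2f}")
# Expected portfolio Sharpe: 0.3 * sqrt(100) = 3.0 -- only if the streams really are uncorrelated.
```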

Kurt: Let’s say we have a trader who is running a single system in the market, and an event such as Brexit occurs. What advice do you have for newer traders who are just starting to run algos in the market?

Bert: Again, this goes back to the validation framework. We don’t trade anything that doesn’t perform on almost all the data that we have, unless we have some really good reason for that not to be the case. Basically, if the trader validated the system on three months of data, I wouldn’t trade that at all in general, but I would definitely be wary of trading it through a crisis or a potential high-volatility moment. Even without that, the liquidity assumptions they’re probably making in the backtest are very likely not to hold during these high-volatility events.

Having said that, we trade through all the crises. We never turn our trades off. In futures, for example, we don’t count a trade executed in the backtest until price crosses one tick. In Forex, it’s one minimal spread plus one standard deviation or something like that. If your execution assumptions are solid — for example, if 80% of the liquidity in the market were removed and the system would still perform OK — then I would keep on trading it. We run systems that have three parameters and they’ve worked for the past twenty years; there’s no reason to turn them off.

If your average trade is, let’s say, one hundred dollars on a one-hundred-thousand-dollar account and you trade at least once a week, then feel free to turn it off. You can make a lot but you can also lose a lot, so you’ll be okay skipping the trade. If you trade once a year, obviously, you can’t skip that trade. It all comes down to the fact that anything can happen, and you must estimate your risk (then double your estimate). You have to get back to the global truth and consider the risk and your worst-case scenario. Don’t rely on a stop, because a stop can get blown through very heavily. A lot of things can happen that are unforeseeable. Basically, the question is: you don’t know what you don’t know, so are you prepared to risk X to make Y?

We run systems that have three parameters and they’ve worked for the past twenty years.
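The conservative fill rule Bert describes can be expressed as a simple filter in a backtest: a limit order only counts as filled once the market trades through the level by a buffer (one tick for futures, spread plus a volatility term for FX). The helper below is a hypothetical sketch of that rule with illustrative numbers, not Profluent’s implementation.

```python
# Hedged sketch of a conservative fill rule for backtests.
import numpy as np

def conservative_fill(order_price, subsequent_lows, buffer):
    """Buy limit order: only count a fill if the market trades below order_price - buffer."""
    return bool(np.any(np.asarray(subsequent_lows) <= order_price - buffer))

# Futures-style rule: require price to cross the level by one tick.
tick = 0.25
print(conservative_fill(100.00, [100.10, 99.90, 99.70], buffer=tick))            # True

# FX-style rule: one minimal spread plus one standard deviation of recent moves.
spread, recent_std = 0.0001, 0.0004
print(conservative_fill(1.1000, [1.0999, 1.0998], buffer=spread + recent_std))   # False
```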

Kurt: What advice would you have for traders like us who are really interested in getting involved in this area (algo and AI)?

Bert: Are you already trading profitably? If you’re not trading profitably, you should invest in CDs and government bonds. If you’re already trading profitably, then start by getting on Coursera and learning about machine learning, and pick up as many books as possible. You need to approach it backwards: see what profitable algo traders are doing and use machine learning to emulate that, because, again, the goal is to make money, not to “use machine learning”. Our algorithms are usually quite simple, and the machine learning framework is used to generate and test as many of these algorithms as possible. That’s the way we approach the problem, though most people do it the opposite way, in my experience. If your goal is money, go where the money is by asking questions like “who is making the money?” and “how do I do that?”, because otherwise you’re basically saying that you’re so smart you can make your own path, that you can invent a path to the money. You need to start by learning to trade profitably and understanding the market before experimenting with machine learning for trading.