Researcher Steven Riddiough on Volume and Alternative Data in FX

Steven Riddiough

Steven Riddiough is a Senior Lecturer (Assistant Professor) in the Department of Finance at the University of Melbourne. He joined the Department in 2015 after completing his PhD at the University of Warwick. His research interests are principally in the areas of international finance, empirical asset pricing, and household finance. His work has been presented at major international conferences in finance, including the WFA, AFA, and EFA, and has been published in the Review of Financial Studies and the IMF Economic Review.

Steven teaches International Finance at the University of Melbourne and has taught at both undergraduate and graduate levels at the University of Warwick and Imperial College London. Before joining the University of Melbourne, Steven worked in the International Finance Division at the Bank of England, acted as a consultant to the Canada Pension Plan Investment Board in London and Toronto, and worked as a Foreign Exchange Research Strategist for the investment bank Credit Suisse.

Liza: Tell us about some of the things you are currently researching.

Steven: I’m currently spending a lot of time thinking about foreign exchange risk premia and the mechanics of FX markets in general. Despite being the biggest financial market, the FX market is still not fully understood from an academic perspective. That’s in large part due to it being primarily an OTC market which limits the availability of really high-quality datasets for researchers to analyze.

One topic I’m particularly interested in is the debate surrounding the impact of post-GFC regulation. Since the FX market is the largest OTC market in the world, any kind of regulation that affects it needs to be really carefully scrutinized for potentially unintended consequences. One question in this respect is whether greater transparency leads to a better functioning market. We tend to think revealing more information, such as information on who instigated a trade, would allow more information to be incorporated into prices, which improves the efficient workings of the market.

But that’s not necessarily the case. The FX market is essentially a big and fragmented network of dealers all providing liquidity. While each one only sees a small piece of the overall trading, that can still be enormously valuable to the dealer. We know from prior research that some trades reveal more information about future price movements than others, so it’s a huge advantage if you’re the only dealer with that information. Liquidity providers could therefore lose a substantial revenue stream if that information asymmetry were removed, which might discourage them from providing liquidity in the future. Overall, it means there is a potential trade-off between an increasingly transparent market and an increasingly liquid market, and it’s issues like this that I find fascinating to study.

Liza: What is unique about volume data in regards to the FX markets?

Steven: Information on volume is used widely across most markets, but FX volume still appears unique in the information it provides. We’ve known for a long time that volume in equity markets, for example, can be used to predict future returns. High volume is usually associated with a reversal in returns, which relates to something known as ‘price pressure’. But it’s different in the FX market. In our recent research, we find higher FX volume actually reduces the likelihood of a return reversal. We rationalize this as being driven by FX markets containing more informed trading and asymmetric information than other markets, which goes back to the earlier point that the structure of the FX market can result in some dealers knowing more about the state of the market than others.
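The contrast Riddiough describes can be illustrated with a toy calculation. The sketch below uses entirely synthetic numbers (it is not the methodology or data from the underlying research): it simply counts how often the next day's return flips sign after high-volume days versus low-volume days.

```python
# Toy illustration of volume-conditioned reversals (synthetic data only;
# this is not the methodology or data from the underlying research).

def reversal_rate(returns, volumes, threshold, high_volume):
    """Share of days whose next-day return flips sign, split by volume."""
    hits = total = 0
    for t in range(len(returns) - 1):
        if (volumes[t] >= threshold) != high_volume:
            continue  # keep only days in the requested volume bucket
        total += 1
        if returns[t] * returns[t + 1] < 0:  # sign flip = reversal
            hits += 1
    return hits / total if total else float("nan")

# Synthetic daily returns (%) and volumes for one hypothetical currency pair
rets = [0.3, 0.2, -0.1, -0.2, 0.1, -0.3, 0.1, 0.2, -0.2, 0.1]
vols = [9, 2, 8, 3, 1, 10, 2, 3, 1, 4]

high_rev = reversal_rate(rets, vols, threshold=5, high_volume=True)
low_rev = reversal_rate(rets, vols, threshold=5, high_volume=False)
```

In this fabricated sample, reversals turn out to be rarer after high-volume days, mimicking the FX pattern described above; the equity pattern would be the opposite.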

Grace: Can you discuss how you use alternative data in your research?

Steven: I recently presented my work on FX volume at an alternative data conference in New York which struck me as ironic because you would think volume is about as unalternative as it gets. But that highlights how nebulous the term ‘alternative’ is when used to describe financial data, as it essentially means data that has not been used before in trying to explain returns. And in that respect FX volume is very ‘alternative’ because up until recently there has been a serious lack of high-quality data on aggregate FX volume.

In my own research the use of alternative data is not something I think about as a starting point. To me, the motivation for obtaining data is in its ability to address an underlying economic question that I consider important. But that’s because I’m an academic financial economist. It’s different from an investor’s perspective, for whom the underlying question is usually the same and some variety of, “Can I predict returns, can I profit from this strategy, can I generate alpha?” and in that respect alternative data can always be used to try and address those questions. But for me the question is sometimes the hardest part, and putting new or interesting data first is a little like putting the cart before the horse.

To me, the motivation for obtaining data is in its ability to address an underlying economic question that I consider important.

Liza: In your paper you mentioned that your results are not related to other commonly traded currency factors and strategies and thus offer substantial diversification benefits. Can you talk more about that and why your results are not related to those factors?

Steven: The majority of factors that we know about in currency markets are longer-term in nature, in that they don’t require high-frequency rebalancing in order to generate economic value. I’m referring here to strategies such as currency carry, value or momentum. Those strategies can be rebalanced monthly and don’t incur particularly high transaction costs.

In contrast, our work on FX volume is at a much higher frequency. It involves trading a reversal strategy daily, which is automatically very different from, say, a currency momentum strategy. At a more theoretical level, the returns to the reversal strategy involve exploiting the compensation required by liquidity providers, and no other well-known currency strategy is doing exactly that. But ultimately, it’s an empirical question of correlation, for which we document an extremely weak relationship between our strategy and other common strategies, leading to such strong diversification gains.
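To make the daily rebalance concrete, here is a minimal sketch of one generic way a cross-sectional reversal portfolio can be formed (an illustrative construction, not the paper's exact methodology): each day, yesterday's losers receive positive weight and yesterday's winners negative weight, with the weights summing to zero.

```python
# Generic daily cross-sectional reversal weights (illustrative only;
# not the exact portfolio construction used in the research discussed).

def reversal_weights(past_returns):
    """Map each currency's previous-day return to a dollar-neutral weight."""
    n = len(past_returns)
    mean = sum(past_returns.values()) / n
    # weight is proportional to the *negative* demeaned past return
    raw = {ccy: -(r - mean) for ccy, r in past_returns.items()}
    scale = sum(abs(w) for w in raw.values()) or 1.0
    return {ccy: w / scale for ccy, w in raw.items()}  # gross exposure = 1

# Hypothetical previous-day returns for four currencies vs USD
yesterday = {"EUR": 0.004, "JPY": -0.006, "GBP": 0.001, "AUD": 0.003}
weights = reversal_weights(yesterday)
# Losers (JPY) get positive weight; winners (EUR, AUD) get negative weight.
```

Rerunning this each trading day is what makes the strategy's turnover, and hence its transaction-cost profile, so different from monthly-rebalanced carry, value, or momentum portfolios.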

Grace: In your paper you use traditional mathematical models, such as a confidence interval and a regression. How do you think these traditional models compare to AI/machine learning algorithms for data analysis?

Steven: The aim with more traditional models is to investigate economic relationships in the data that are founded on economic theory. That’s a worthy endeavor because it can provide insight into cause-and-effect in the world around us.  But these relationships can also change over time and investors don’t have time to wait for academics to catch up – it’s important for them to incorporate these changing relationships immediately.  I think that’s where machine learning and AI have a role to play. The main issue is that machine learning and AI are often used in an atheoretical way, which means we might observe a relationship using those tools but we don’t know why. That might still work out well if it allows a trader to profit but it’s less helpful for academics wanting to say something concrete about an important economic outcome.

Grace: You talk about how exploiting changing relationships will become increasingly necessary. Can you talk more about what you see on the horizon for machine learning and AI?

Steven: It’s an industry at the dawn of its life and it’s undoubtedly going to evolve considerably over the next few years.  I think we’re still trying to understand the best way to implement AI and machine learning in practice. It’s natural that many people at this stage are being employed to get the data aspect of the problem right.  Focusing on making sure that datasets are cleaned and in a usable format for analysis is crucial and at the heart of the industry. But the follow-on stage of understanding whether the data is actually valuable is equally important and likely requires a different set of skills to those possessed by the people cleaning and organizing the data. It’s here where individuals with skills at the intersection of data science and economic science become really important.

Grace: What do you think differentiates low-quality data from high-quality data?

Steven: That’s a very difficult question to answer ex-ante. High-quality data is something that ultimately provides you with (from an investor’s perspective) a source of predictability. But do you know whether a dataset will provide you with predictable information before analyzing it? The answer in most cases will be ‘no’. So high quality versus low quality isn’t necessarily something you can spot immediately, but there might be some immediate red flags. High-quality data will tend to have more information about a broad swathe of the economy and therefore increase the likelihood that some aspect of the dataset is valuable.

But while more information is usually better, it still requires a specialized set of skills to see the economic relationships that might produce value. I think that’s going to be an issue for the users of alternative data in the future: can the right people be brought in to analyze the quality of a dataset? Determining whether data is of high quality or not may even require a specialist consultant for every new dataset considered.

Grace: Where do you go to find alternative data?

Steven: When it comes to research, I try not to put data first. That is potentially one of the biggest mistakes when it comes to the alternative data scene. It’s natural to put the data first because it’s unusual and interesting, but I think it’s important to think through the fundamental economic relationships that you’re interested in, and then try to understand whether there is data available that could potentially allow you to explore those relationships. As a general rule I wouldn’t go out looking for data without first having a reason to look for it.

As a general rule I wouldn’t go out looking for data without first having a reason to look for it.

Liza: What books or resources would you recommend for traders wanting to study volume or alternative data in general?

Steven: When it comes to volume, there’s a long history of research in the area. But for someone particularly interested in FX volume there is much less work, although I’ll unashamedly refer to some of the work I’ve done recently on this topic. Regarding alternative data, big data, and AI, one of the best resources I’ve seen recently is a report written by J.P. Morgan from 2017, which is a great reference tool for anyone interested in alternative datasets and the application of AI in trading.

View Steven Riddiough’s website here.

Start Algo Trading Today!

Get your free token to connect to the REST API and start trading. To generate your token:

  1. Register for a free practice account here.
  2. Log into fxcm.com and click on the account ID in the upper right corner.
  3. Click token management and generate your token.
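Once you have generated a token, requests to a REST API like this typically carry it as a bearer token in the `Authorization` header. The snippet below is a minimal sketch using only Python's standard library; the base URL and the `/candles/EUR/USD` route are illustrative placeholders, so consult FXCM's official API documentation for the actual endpoints before sending any request.

```python
from urllib import request

# Assumptions for illustration: the base URL and route below are
# placeholders, not verified FXCM endpoints.
BASE_URL = "https://api-demo.fxcm.com"
TOKEN = "YOUR_GENERATED_TOKEN"  # paste the token from step 3 here

def authed_request(path: str, token: str) -> request.Request:
    """Build (but do not send) a request carrying the bearer token."""
    return request.Request(
        BASE_URL + path,
        headers={
            "Authorization": "Bearer " + token,  # standard bearer-token scheme
            "Accept": "application/json",
        },
    )

req = authed_request("/candles/EUR/USD", TOKEN)
# Sending would be: request.urlopen(req) -- only do this against the
# documented endpoints with a real token.
```

The request object is built but not sent, so you can inspect the headers and URL before going live against the practice account.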

Risk Warning: The FXCM Group does not guarantee accuracy and will not accept liability for any loss or damage which arise directly or indirectly from use of or reliance on information contained within the webinars. The FXCM Group may provide general commentary which is not intended as investment advice and must not be construed as such. FX/CFD trading carries a risk of losses in excess of your deposited funds and may not be suitable for all investors. Please ensure that you fully understand the risks involved.