Most popular SSRN papers over the last 12 months:
Understanding Modern Portfolio Construction
Abstract: Over the last 75 years there have been great strides in modern finance, portfolio theory and asset allocation strategies. Despite this progress the process of portfolio construction remains grounded in many theoretical concepts that can result in inappropriate or unrealistic frameworks. In this paper we provide an overview of the development of these ideas, construct a general foundation for understanding portfolio construction and produce a framework for simplifying, systematizing and streamlining the process in an attempt to establish a realistic and suitable process for portfolio construction.
'P' Versus 'Q': Differences and Commonalities between the Two Areas of Quantitative Finance
Abstract: There exist two separate branches of finance that require advanced quantitative techniques: the "Q" area of derivatives pricing, whose task is to "extrapolate the present"; and the "P" area of quantitative risk and portfolio management, whose task is to "model the future."
We briefly trace the history of these two branches of quantitative finance, highlighting their different goals and challenges. Then we provide an overview of their areas of intersection: the notion of risk premium; the stochastic processes used, often under different names and assumptions in the Q and in the P world; the numerical methods utilized to simulate those processes; hedging; and statistical arbitrage.
The Siren Song of Factor Timing
Abstract: Everyone seems to want to time factors. Often the first question after an initial discussion of factors is “ok, what’s the current outlook?” And the common answer, “the same as usual,” is often unsatisfying. There is a powerful incentive to oversell timing ability. Factor investing is often done at fees in between active management and cap-weighted indexing, and these fees have been falling over time. Factor timing has the potential of reintroducing a type of skill-based “active management” (as timing is generally thought of this way) back into the equation. I think that siren song should be resisted, even if that verdict is disappointing to some. At least when using the simple “value” of the factors themselves, I find such timing strategies to be very weak historically, and some tests of their long-term power to be exaggerated and/or inapplicable.
The Market for Financial Adviser Misconduct
Abstract: We construct a novel database containing the universe of financial advisers in the United States from 2005 to 2015, representing approximately 10% of employment of the finance and insurance sector. Roughly 7% of advisers have misconduct records. Prior offenders are five times as likely to engage in new misconduct as the average financial adviser. Firms discipline misconduct: approximately half of financial advisers lose their job after misconduct. The labor market partially undoes firm-level discipline: of these advisers, 44% are reemployed in the financial services industry within a year. Reemployment is not costless. Following misconduct, advisers face longer unemployment spells, and move to less reputable firms, with a 10% reduction in compensation. Additionally, firms that hire these advisers also have higher rates of prior misconduct themselves. We find similar results for advisers of dissolved firms, in which all advisers are forced to find new employment independent of past misconduct or performance. Firms that persistently engage in misconduct coexist with firms that have clean records. We show that differences in consumer sophistication may be partially responsible for this phenomenon: misconduct is concentrated in firms with retail customers and in counties with low education, elderly populations, and high incomes. Our findings suggest that some firms "specialize" in misconduct and cater to unsophisticated consumers, while others use their reputation to attract sophisticated consumers.
My Factor Philippic
Abstract: Arnott, Beck, Kalesnik, and West (2016) (ABKW) study smart beta or factor-based strategies and come to the following conclusions: (1) Aside from value, most popular factor strategies currently look expensive. (2) These expensive factor valuations portend lower future returns and a strong possibility of a future “factor crash” in which they go “horribly wrong.” And (3) many of these non-value factors were never real to start with because their historical performance was due to factor richening. That is, researchers mistook the one-time returns from factor richening for truly repeatable “structural alpha.” ABKW’s implied bottom line (their many protestations to only making modest recommendations aside): stick with value, dump the other factors. This essay elaborates on my response in Asness (2016). In summary: (1) I find non-value factor valuations moderately expensive, but not as expensive as ABKW. (2) I argue that ABKW exaggerate the power of factor timing by improperly using long-horizon regression techniques. More proper short-horizon regressions suggest some weak factor timing ability and given this predictability, I construct value-based tactical factor timing strategies to test them. Unfortunately, these strategies add little to portfolios that are already invested in the value factor. It turns out that this “newly” discovered timing tool is, yet again, mostly just a version of regular old value investing. And (3) I examine ABKW’s claim that factor richening drives much of non-value long-term factor performance and find that this very serious allegation about other researchers’ work is totally without merit. Overall, these results suggest that one should be wary of aggressive factor timing. Instead, investors are better off identifying factors they believe in, and staying diversified across them, unless we see far more extreme pricing than we do today.
The Market Portfolio is NOT Efficient: Evidences, Consequences and Easy to Avoid Errors
Abstract: The market portfolio is not an efficient portfolio. There is ample evidence of this: equal-weighted indexes have beaten their market-value-weighted counterparts for many years, and many easy-to-build portfolios (some “smart-beta”, “multifactor”) have beaten market-value-weighted indexes. We document evidence for seven equal-weighted indexes that have had higher returns than the corresponding market-value-weighted index: S&P 500, MSCI Emerging Markets, FTSE 100, MSCI World, MSCI, DAX 30 and IBEX 35.
However, many finance and investment books still recommend diversifying in the same relative proportions as a broad market index such as the Standard & Poor’s 500, and many funds compare their performance with the returns of market-value-weighted indexes.
Without homogeneous expectations, the market portfolio cannot be an efficient portfolio for all investors.
In this document we also cover: a) volatility and beta being bad measures of risk; b) the unhelpfulness of the Sharpe ratio; and c) common (and easy to avoid) errors in portfolio management and corporate finance.
All that Glitters Is Not Gold: Comparing Backtest and Out-of-Sample Performance on a Large Cohort of Trading Algorithms
Abstract: When automated trading strategies are developed and evaluated using backtests on historical pricing data, there exists a tendency to overfit to the past. Using a unique dataset of 888 algorithmic trading strategies developed and backtested on the Quantopian platform with at least 6 months of out-of-sample performance, we study the prevalence and impact of backtest overfitting. Specifically, we find that commonly reported backtest evaluation metrics like the Sharpe ratio offer little value in predicting out-of-sample performance (R² < 0.025). In contrast, higher-order moments, like volatility and maximum drawdown, as well as portfolio construction features, like hedging, show significant predictive value of relevance to quantitative finance practitioners. Moreover, in line with prior theoretical considerations, we find empirical evidence of overfitting: the more backtesting a quant has done for a strategy, the larger the discrepancy between backtest and out-of-sample performance. Finally, we show that by training non-linear machine learning classifiers on a variety of features that describe backtest behavior, out-of-sample performance can be predicted at a much higher accuracy (R² = 0.17) on hold-out data compared to using linear, univariate features. A portfolio constructed on predictions on hold-out data performed significantly better out-of-sample than one constructed from algorithms with the highest backtest Sharpe ratios.
The Enduring Effect of Time-Series Momentum on Stock Returns Over Nearly 100-Years
Abstract: This study documents the significant profitability of “time-series momentum” strategies in individual stocks in the US markets from 1927 to 2014 and in international markets since 1975. Unlike cross-sectional momentum, time-series stock momentum performs well following both up- and down-market states, and it does not suffer from January losses and market crashes. An easily formed dual-momentum strategy, combining time-series and cross-sectional momentum, generates striking returns of 1.88% per month. We test both risk based and behavioral models for the existence and durability of time-series momentum and suggest the latter offers unique insights into its continuing factor dominance.
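As a rough illustration of the signal family the study builds on, a minimal time-series momentum rule goes long an asset when its own trailing return is positive and short when it is negative. This is a sketch only: the 1.88% per month figure above comes from a full dual-momentum portfolio, not from this toy rule, and the prices below are hypothetical.

```python
def ts_momentum_positions(prices, lookback=12):
    """Toy time-series momentum: +1 (long) when the trailing `lookback`-period
    return is positive, -1 (short) otherwise. One position per usable date."""
    positions = []
    for t in range(lookback, len(prices)):
        trailing_ret = prices[t] / prices[t - lookback] - 1.0
        positions.append(1 if trailing_ret > 0 else -1)
    return positions

# Hypothetical monthly closes: a long advance followed by a pullback.
prices = [100, 102, 101, 105, 108, 110, 109, 112, 115, 118, 117, 120,
          124, 122, 119, 115, 110, 108]
print(ts_momentum_positions(prices))  # → [1, 1, 1, 1, 1, -1]
```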
Leverage for the Long Run - A Systematic Approach to Managing Risk and Magnifying Returns in Stocks
Abstract: Using leverage to magnify performance is an idea that has enticed investors and traders throughout history. The critical question of when to employ leverage and when to reduce risk, though, is not often addressed. We establish that volatility is the enemy of leverage and that streaks in performance tend to be beneficial to using margin. The conditions under which higher returns would be achieved from using leverage, then, are low volatility environments that are more likely to experience consecutive positive returns. We find that Moving Averages are an effective way to identify such environments in a systematic fashion. When the broad U.S. equity market is above its Moving Average, stocks tend to exhibit lower than average volatility going forward, higher average daily performance, and longer streaks of positive returns. When below its Moving Average, the opposite tends to be true, as volatility often rises, average daily returns are lower, and streaks in positive returns become less frequent. Armed with this finding, we developed a strategy that employs leverage when the market is above its Moving Average and deleverages (moving to Treasury bills) when the market is below its Moving Average. This strategy shows better absolute and risk-adjusted returns than a comparable buy and hold unleveraged strategy as well as a constant leverage strategy. The results are robust to various leverage amounts, Moving Average time periods, and across multiple economic and financial market cycles.
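The core rule can be sketched in a few lines: hold leveraged equity exposure when the index closes above its moving average, and rotate to Treasury bills when it closes below. The window and leverage values here are illustrative defaults, not the paper's recommendation.

```python
def simple_moving_average(prices, window):
    """Arithmetic moving average of the last `window` closes."""
    return sum(prices[-window:]) / window

def target_exposure(prices, window=200, leverage=2.0):
    """Leverage when the latest close is above the moving average;
    de-lever to T-bills (zero equity exposure) when it is below."""
    return leverage if prices[-1] > simple_moving_average(prices, window) else 0.0

rising = [10, 11, 12, 13, 14]
falling = [14, 13, 12, 11, 10]
print(target_exposure(rising, window=3))   # above its 3-period MA → 2.0
print(target_exposure(falling, window=3))  # below its 3-period MA → 0.0
```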
Classification-Based Financial Markets Prediction Using Deep Neural Networks
Abstract: Deep neural networks (DNNs) are powerful types of artificial neural networks (ANNs) that use several hidden layers. They have recently gained considerable attention in the speech transcription and image recognition communities for their superior predictive properties, including robustness to overfitting. However, their application to algorithmic trading has not been previously researched, partly because of their computational complexity. This paper describes the application of DNNs to predicting financial market movement directions. In particular, we describe the configuration and training approach and then demonstrate their application to backtesting a simple trading strategy over 43 different Commodity and FX future mid-prices at 5-minute intervals. All results in this paper are generated using a C implementation on the Intel Xeon Phi co-processor, which is 11.4x faster than the serial version, and a Python strategy backtesting environment, both of which are available as open source code written by the authors.
Days to Cover and Stock Returns
Abstract: A crowded trade emerges when speculators' positions are large relative to the asset's liquidity, making exit difficult. We study this problem of recent regulatory concern by focusing on short-selling. We show that days to cover (DTC), the ratio of short interest to trading volume, measures the costliness of exiting crowded trades. Crowding is an important concern as short-sellers avoid illiquid stocks and require a premium to enter into such trades. A strategy shorting high DTC stocks and buying low DTC stocks generates a 1.2% monthly return. A comparably large days-to-cover effect exists on the long positions of levered hedge funds.
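The measure itself is a one-liner: days to cover is short interest divided by typical daily trading volume. The numbers below are purely illustrative.

```python
def days_to_cover(short_interest_shares, avg_daily_volume):
    """Days to cover (DTC): the number of days of typical trading volume
    it would take all short sellers to buy back their positions."""
    return short_interest_shares / avg_daily_volume

# Hypothetical stock: 50M shares held short, 10M shares traded per day.
print(days_to_cover(50_000_000, 10_000_000))  # → 5.0
```

A high DTC means a crowded short that is expensive to exit, which is why the paper finds a return premium for entering such trades.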
How to Combine a Billion Alphas
Abstract: We give an explicit algorithm and source code for computing optimal weights for combining a large number N of alphas. This algorithm does not cost O(N^3) or even O(N^2) operations but is much cheaper; in fact, the number of required operations scales linearly with N. We discuss how, in the absence of binary or quasi-binary "clustering" of alphas, which is not observed in practice, the optimization problem simplifies when N is large. Our algorithm does not require computing principal components or inverting large matrices, nor does it require iterations. The number of risk factors it employs, which typically is limited by the number of historical observations, can be sizably enlarged by using position data for the underlying tradables.
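The scaling claim can be illustrated with a generic factor-model sketch (this is not the paper's actual algorithm or source code). If the alpha covariance matrix has the form C = D + F Fᵀ, with D diagonal (specific variances) and F an N×K loadings matrix, the Woodbury identity C⁻¹a = D⁻¹a − D⁻¹F (I + FᵀD⁻¹F)⁻¹ FᵀD⁻¹a lets us compute mean-variance weights proportional to C⁻¹·alpha without ever forming an N×N inverse: only a K×K system is solved, so the cost is linear in N.

```python
def solve(A, b):
    """Tiny Gauss-Jordan solver for the small K x K system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def combine_alphas(alphas, D, F):
    """Weights proportional to C^{-1} * alphas via Woodbury, linear in N."""
    N, K = len(alphas), len(F[0])
    Dinv_a = [a / d for a, d in zip(alphas, D)]
    # S = I + F^T D^{-1} F (K x K) and v = F^T D^{-1} a (length K)
    S = [[(1.0 if i == j else 0.0) +
          sum(F[n][i] * F[n][j] / D[n] for n in range(N))
          for j in range(K)] for i in range(K)]
    v = [sum(F[n][i] * alphas[n] / D[n] for n in range(N)) for i in range(K)]
    y = solve(S, v)  # only a K x K solve, however large N is
    return [Dinv_a[n] - sum(F[n][i] * y[i] for i in range(K)) / D[n]
            for n in range(N)]

alphas = [0.1, 0.2, 0.3]        # hypothetical alpha signals
D = [1.0, 2.0, 0.5]             # hypothetical specific variances
F = [[0.5], [1.0], [0.2]]       # hypothetical one-factor loadings
weights = combine_alphas(alphas, D, F)
print(weights)
```

The per-alpha work is O(K²), so a billion alphas under a modest K remains tractable, which is the spirit of the abstract's linear-in-N claim.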
The Flash Crash: A New Deconstruction
Abstract: On May 6, 2010, in the span of a mere four and half minutes, the Dow Jones Industrial Average lost approximately 1,000 points. In the following fifteen minutes it recovered essentially all of its losses. This “Flash Crash” occurred in the absence of fundamental news that could explain the observed price pattern and is generally viewed as the result of endogenous factors related to the complexity of modern equity market trading. We present the first analysis of the entire order book at millisecond granularity, and not just of executed transactions, in an effort to explore the causes of the Flash Crash. We also examine information flows as reflected in a variety of data feeds provided to market participants during the Flash Crash. While assertions relating to causation of the Flash Crash must be accompanied by significant disclaimers, we suggest that it is highly unlikely that, as alleged by the United States Government, Navinder Sarao’s spoofing orders, even if illegal, could have caused the Flash Crash, or that the crash was a foreseeable consequence of his spoofing activity. Instead, we find that the explanation offered by the joint CFTC-SEC Staff Report, which relies on prevailing market conditions combined with the introduction of a large equity sell order implemented in a particularly dislocating manner, is consistent with the data. We offer a simulation model that formalizes the process by which large sell orders of the sort observed in the CFTC-SEC Staff Report, combined with prevailing market conditions, could generate a Flash Crash in the absence of fundamental information. Our research also documents the emergence of heretofore unobserved anomalies in market data feeds that correlate very closely with the initiation of and recovery from the Flash Crash.
Our analysis of these data feed anomalies is ongoing as we attempt to discern whether they were a symptom of the rapid trading that accompanied the Flash Crash or whether they were causal in the sense that they rationally contributed to traders’ decisions to withdraw liquidity and then restore it after the anomalies were resolved.
Revisiting the Profitability of Market Timing with Moving Averages
Abstract: In a recent empirical study by Glabadanidis ("Market Timing With Moving Averages" (2015), International Review of Finance, Volume 15, Number 13, Pages 387-425; the paper is also available on the SSRN and has been downloaded more than 7,500 times), the author reports striking evidence of extraordinarily good performance of the moving average trading strategy. In this paper we demonstrate that the "too good to be true" reported performance of the moving average strategy is due to simulating the trading with look-ahead bias. We perform the simulations without look-ahead bias and report the true performance of the moving average strategy. We find that at best the performance of the moving average strategy is only marginally better than that of the corresponding buy-and-hold strategy. In statistical terms, the performance of the moving average strategy is indistinguishable from the performance of the buy-and-hold strategy. This paper is supplied with R code that allows every interested reader to reproduce the reported results.
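The bias in question (letting the signal trade at the very close used to compute it, so the signal "predicts" its own day) can be demonstrated on synthetic data: on a pure random walk, where no timing rule should have an edge, the look-ahead version racks up spurious gains while the correctly lagged version does not. This is an illustrative toy, not the paper's R code.

```python
import random

random.seed(7)

# Random-walk prices: an honest timing rule should have no edge here.
prices = [100.0]
for _ in range(2000):
    prices.append(prices[-1] * (1 + random.gauss(0, 0.01)))

def ma(i, window):
    """Moving average of the `window` closes ending at day i (inclusive)."""
    return sum(prices[i - window + 1:i + 1]) / window

window = 50
biased, correct = 0.0, 0.0
for t in range(window, len(prices) - 1):
    ret_t = prices[t] / prices[t - 1] - 1     # return over day t
    ret_next = prices[t + 1] / prices[t] - 1  # return over day t+1
    signal = prices[t] > ma(t, window)        # signal uses the close of day t
    if signal:
        biased += ret_t      # look-ahead: earns the day that set the signal
        correct += ret_next  # realistic: signal applied to the next day
print(f"look-ahead cum. return {biased:.3f}, bias-free {correct:.3f}")
```

The look-ahead sum is systematically positive because the condition "price above its MA" is mechanically correlated with that same day's return, which is exactly the flaw the authors correct.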
How Rigged are Stock Markets? Evidence from Microsecond Timestamps
Abstract: We use new timestamp data from the two Securities Information Processors (SIPs) to examine SIP reporting latencies for quote and trade reports. Reporting latencies average 1.13 milliseconds for quotes and 22.84 milliseconds for trades. Despite these latencies, liquidity-taking orders gain on average $0.0002 per share when priced at the SIP-reported national best bid or offer (NBBO) rather than the NBBO calculated using exchanges’ direct data feeds. Trading surrounding SIP-priced trades shows little evidence that fast traders initiate these liquidity-taking orders to pick-off stale quotes. These findings contradict claims that fast traders systematically exploit traders who transact at the SIP NBBO.
The Wisdom of Twitter Crowds: Predicting Stock Market Reactions to FOMC Meetings via Twitter Feeds
Abstract: With the rise of social media, investors have a new tool to measure sentiment in real time. However, the nature of these sources of data raises serious questions about their quality. Since anyone on social media can participate in a conversation about markets -- whether they are informed or not -- it is possible that this data may have very little information about future asset prices. In this paper, we show that this is not the case by analyzing a recurring event that has a high impact on asset prices: Federal Open Market Committee (FOMC) meetings. We exploit a new dataset of tweets referencing the Federal Reserve and show that the content of tweets can be used to predict future returns, even after controlling for common asset pricing factors. To gauge the economic magnitude of these predictions, we construct a simple hypothetical trading strategy based on this data. We find that a tweet-based asset-allocation strategy outperforms several benchmarks, including a strategy that buys and holds a market index, as well as a comparable dynamic asset allocation strategy that does not use Twitter information.
The Harm in Selecting Funds that Have Recently Outperformed
Abstract: We empirically investigate the investment results of commonly used fund selection strategies that involve redeploying assets from underperforming to outperforming funds. Based on portfolios constructed using U.S. mutual fund data over typical three-year evaluation periods, we find that investors who chose funds with poor recent performance earned higher excess returns than those who chose funds with superior recent performance. Our findings pose a challenge for asset owners: If past performance is used at all in selecting funds, it is the best-performing funds that should be replaced. Realistically, however, a policy of replacing successful funds with poor performers is unlikely to gain widespread acceptance. Instead, the practical implication of our paper is that asset owners should focus on factors other than past performance. We offer alternate criteria for selecting funds.
Risk Everywhere: Modeling and Managing Volatility
Abstract: Based on a unique high-frequency dataset for more than fifty commodities, currencies, equity indices, and fixed income instruments spanning more than two decades, we document strong similarities in realized volatility patterns across assets and asset classes. Exploiting these similarities within and across asset classes in panel-based estimation of new realized volatility models results in superior out-of-sample R²s compared to forecasts from existing models and more conventional procedures that do not incorporate the information in the high-frequency intraday data and/or the commonalities in the volatilities. We present a framework to evaluate the utility gains from the use of risk models, highlighting the interplay between transaction costs, the speed of different risk models, and their practical implementation.
Rentabilidad de los Fondos de Pensiones en España. 2000-2015 (Return of Pension Funds in Spain. 2000-2015)
Abstract: <b>Spanish Abstract (translated):</b> Over the period December 2000 to December 2015, the average annual return of the IBEX 35 was 4.62% and that of 15-year Government Bonds was 5.40%. The average return of pension funds was 1.58%.
Among the 322 pension funds with 15 years of history, only 2 beat the return of the IBEX 35, and only 1 beat the return of the 15-year Government Bonds. 47 funds had a negative average return!
As of December 2015, pension funds had 7.8 million participants and assets of €67,621 million.
Annex 5 presents some data to encourage readers under 50 to start acting now to supplement their Social Security pension.
How would you rate this "picture"?
<b>English Abstract:</b> During the 15-year period 2000-2015, the average return of pension funds in Spain (1.58%) was lower than the return of 15-year Government Bonds (5.40%). Only 1 fund (out of 322) had a higher return than the 15-year Government Bonds. Nevertheless, on December 31, 2015, 7.8 million investors had 67.6 billion euros invested in pension funds.
The Economics of Disclosure and Financial Reporting Regulation: Evidence and Suggestions for Future Research
Abstract: This paper discusses the empirical literature on the economic consequences of disclosure and financial reporting regulation (including IFRS adoption), drawing on U.S. and international evidence. Given the policy relevance of research on regulation, we highlight the challenges with: (i) quantifying regulatory costs and benefits, (ii) measuring disclosure and reporting outcomes, and (iii) drawing causal inferences from regulatory studies. Next, we discuss empirical studies that link disclosure and reporting activities to firm-specific and market-wide economic outcomes. Understanding these links is important when evaluating regulation. We then synthesize the empirical evidence on the economic effects of disclosure regulation and reporting standards, including the evidence on IFRS adoption. Several important conclusions emerge. We generally lack evidence on market-wide effects and externalities from regulation, yet such evidence is central to the economic justification of regulation. Moreover, evidence on causal effects of disclosure and reporting regulation is still relatively rare. We also lack evidence on the real effects of such regulation. These limitations provide many research opportunities. We conclude with several specific suggestions for future research.
Replicating Private Equity with Value Investing, Homemade Leverage, and Hold-to-Maturity Accounting
Abstract: Private equity funds tend to select relatively small firms with low EBITDA multiples. Publicly traded equities with these characteristics have high risk-adjusted returns after controlling for common factors typically associated with value stocks. Hold-to-maturity accounting of portfolio net asset value eliminates the majority of measured risk. A passive portfolio of small, low EBITDA multiple stocks with modest amounts of leverage and hold-to-maturity accounting of net asset value produces an unconditional return distribution that is highly consistent with that of the pre-fee aggregate private equity index. The passive replicating strategy represents an economically large improvement in risk- and liquidity-adjusted returns over direct allocations to private equity funds, which charge average fees of 6% per year.
Protective Asset Allocation (PAA): A Simple Momentum-Based Alternative for Term Deposits
Abstract: Since the financial crisis of 2008 and the recent (end of 2015) pullback, investors have been searching for less risky investments, and there is therefore a growing demand for low-risk/absolute-return portfolios. In this paper we describe a simple dual-momentum model (called Protective Asset Allocation, or PAA) with a vigorous “crash protection” which might fit this bill. It is a tactical variation on the traditional 60/40 stock/bond portfolio where the optimal stock/bond mix is determined by multi-market breadth using dual momentum. We backtested the model with several global multi-asset ETF proxies. Starting from Dec 1970 allows us to investigate the behavior of PAA in periods with rate hikes as well. The in-sample (Dec 1970-Dec 1992) and out-of-sample returns of the most protective variant of our PAA strategy satisfy our absolute return requirement without compromising high returns. This makes PAA an appealing alternative to a 1-year term deposit.
Stock portfolio design and backtest overfitting
Abstract: We demonstrate a computer program that designs a portfolio consisting of common securities, such as the constituents of the S&P 500 index, that achieves any desired profile via in-sample backtest optimization. Unfortunately, the program also shows that these portfolios typically perform erratically on more recent, out-of-sample data, which is symptomatic of selection bias. One implication of these results is that so-called smart beta funds, which are designed in-sample to deliver a desirable performance profile, are likely to disappoint out-of-sample.
Predicting Stock Market Returns Using the Shiller CAPE — An Improvement Towards Traditional Value Indicators?
Abstract: Existing research indicates that it is possible to forecast potential long-term returns in the S&P 500 for periods of more than 10 years using the cyclically adjusted price-to-earnings ratio (CAPE). This paper concludes that this relationship has also existed internationally in 17 MSCI Country indexes since 1979. In addition, the paper examines the forecasting ability of the price-to-earnings, price-to-cash-flow and price-to-book ratios, as well as that of the dividend yield and of CAPE adjusted for changes in payout ratios. The results indicate that only the price-to-book ratio and CAPE enable reliable forecasts of subsequent returns and market risks. In countries with structural breaks, the price-to-book ratio even exhibits some advantages compared to CAPE.
Based on these findings, the long-term equity market potential for various markets is forecasted using CAPE and price-to-book ratio. The current valuation makes it likely that investors with a global portfolio can achieve real returns of 6% over the next 10 to 15 years. Even greater increases can be expected in European equity markets (8%) and in emerging markets (9%). Due to the high valuation of the US stock market, US investors can only expect below-average returns of 4% with a higher drawdown potential.
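The valuation measure underlying these forecasts is simple to compute: CAPE divides the (real) index level by the average of the last ten years of inflation-adjusted earnings, smoothing out cyclical swings in the denominator. The numbers below are hypothetical.

```python
def cape(real_price, real_earnings_history):
    """Cyclically adjusted P/E: price divided by the trailing average of
    inflation-adjusted earnings (Shiller uses a 10-year window)."""
    avg_earnings = sum(real_earnings_history) / len(real_earnings_history)
    return real_price / avg_earnings

# Hypothetical index level and ten years of real earnings per index "share".
# Note the recession year (60): averaging keeps it from distorting the ratio.
earnings = [95, 100, 60, 80, 105, 110, 112, 118, 121, 99]
print(cape(2000, earnings))  # → 20.0
```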
Statistical Industry Classification
Abstract: We give complete algorithms and source code for constructing (multilevel) statistical industry classifications, including methods for fixing the number of clusters at each level (and the number of levels). Under the hood there are clustering algorithms (e.g., k-means). However, what should we cluster? Correlations? Returns? The answer turns out to be neither and our backtests suggest that these details make a sizable difference. We also give an algorithm and source code for building "hybrid" industry classifications by improving off-the-shelf "fundamental" industry classifications by applying our statistical industry classification methods to them. The presentation is intended to be pedagogical and geared toward practical applications in quantitative trading.
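The clustering machinery the abstract alludes to can be illustrated with a toy k-means run on synthetic return series. Note the caveat: the paper's central point is that the choice of *what* to cluster, and how to normalize it, matters greatly, and this sketch deliberately ignores that by clustering raw returns.

```python
import random

random.seed(1)

def kmeans(points, k, iters=20):
    """Plain Lloyd's algorithm with deterministic farthest-point seeding."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    centroids = [points[0]]
    while len(centroids) < k:
        centroids.append(max(points, key=lambda p: min(dist2(p, c)
                                                       for c in centroids)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda c: dist2(p, centroids[c]))].append(p)
        centroids = [[sum(v) / len(v) for v in zip(*cl)] if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return clusters

# Two synthetic "industries": members of each share a common return driver,
# plus small idiosyncratic noise, over 60 periods.
common_a = [random.gauss(0, 0.02) for _ in range(60)]
common_b = [random.gauss(0, 0.02) for _ in range(60)]
stocks = ([[r + random.gauss(0, 0.005) for r in common_a] for _ in range(5)] +
          [[r + random.gauss(0, 0.005) for r in common_b] for _ in range(5)])
sizes = sorted(len(c) for c in kmeans(stocks, k=2))
print(sizes)  # → [5, 5]: the two planted industries are recovered
```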
From 'Blockchain Hype' to a Real Business Case for Financial Markets
Abstract: Introduction: Blockchain Hype vs Blockchain Seclusion?
There has been a lot of noise in the press about the great potential uses for financial markets of Bitcoin-related technology, which, it was claimed, could be extracted from the Bitcoin world and applied to existing markets to increase efficiency dramatically. Later, there has been just as much noise claiming the opposite: that there is no actual use case, that it all boils down to a generic enthusiasm called Blockchain Hype, and that Bitcoin is the only setting where such technology can be fruitfully used.
This paper shows that there are real business cases for improving financial markets based on the lessons learned from cryptocurrencies, but, contrary to what the hype enthusiasts say, they are not applications of a technology to the existing business model of financial markets. They are reforms of the business model itself. What needs to be exported from the world of cryptocurrencies are aspects of the market organization, inspiration for a different accounting and legal system, and some aspects of the technology. These can make a huge contribution towards more robust, efficient and stable markets, but the process cannot be immediate and effortless, and can only be achieved within a market-wide strategic view.
One crucial misunderstanding here is the idea that Blockchain technology can be exported to financial markets as they are in order to make them more efficient. This is meaningless: Blockchain technology was created to change certain trust-based business processes to make them less reliant on trust; without structural changes in this direction, the best of Blockchain technology is lost and just the inefficiencies are left. This misunderstanding is the perfect partner of the idea that Blockchain technology cannot be used outside the Bitcoin world. This is equally meaningless: Bitcoin was created to attempt a level of independence from trust sufficient to allow players to be anonymous and do without any legal protection; other business solutions, based on a level of trust intermediate between Bitcoin and traditional finance, can use similar technology and yet be very different from Bitcoin. But we must be ready to use the concept of trust in a totally different way: as a lens for analyzing the different parts of a business process and the reasons for its current inefficiencies and risks.
In what follows we develop these concepts, first in a parallel analysis of cryptocurrencies and financial markets. Then we focus on a specific business case regarding the collateralization of financial derivatives, which we describe bottom-up, including quantifiable benefits in reducing costs, capital and risk. It is an example where the use of cryptocurrency technology is no more important than the business ideas developed in the analysis of cryptocurrencies; yet it was inconceivable before examples of distributed ledgers, smart contracts and oracles were visible in marketplaces. In fact, it was first presented in Morini and Sams (2015) as an introduction of Blockchain innovation for the derivatives world.
Securities Clearance and Settlement Systems: A Guide to Best Practices
Abstract: How to assess securities clearance and settlement systems, based on international standards and best practices.
As an essential part of a nation's financial sector infrastructure, securities clearance and settlement systems must be closely integrated with national payment systems so that safety, soundness, certainty, and efficiency can be achieved at a cost acceptable to all participants. Central banks have paid considerable attention to payment systems, but securities clearance and settlement systems have only recently been subjected to rigorous assessment.
The Western Hemisphere Payments and Securities Clearance and Settlement Initiative (WHI), led by the World Bank and in cooperation with the Centro de Estudios Monetarios Latinoamericanos (CEMLA), gave Guadamillas and Keppler a unique opportunity to observe how various countries in Latin America and the Caribbean undertake securities clearance and settlement. To do so, Guadamillas and Keppler developed a practical and implementable assessment methodology covering key issues that affect the quality of such systems.
In this paper they discuss the objectives, scope, and content of a typical securities system, identify the elements that influence the system's quality, and show how their assessment methodology works. They focus on the development of core principles and minimum standards for integrated systems of payments and securities clearance and settlement.
Their paper fills a gap by providing an evaluation tool for assessors of such systems, especially those who must assess evolving systems in developing and transition economies. Essentially, an assessment involves a structured analysis to answer four related questions:
- What are the objectives and scope of a securities clearance and settlement system?
- Who are the participants, what roles do they play, and what expectations do they have?
- What procedures are required to satisfy the participants' needs?
- What inherent risks are involved, and how can they be mitigated at an acceptable cost?
This paper - a product of the Finance Cluster, Latin America and the Caribbean Region, and Financial Sector Infrastructure, Financial Sector Development Department - is part of a larger effort in the Bank to assess payment systems and securities clearance and settlement systems in Latin America and the Caribbean. The authors may be contacted at firstname.lastname@example.org or email@example.com.
Crash Beliefs from Investor Surveys abstract Historical data suggest that the base rate for a severe, single-day stock market crash is relatively low. Surveys of individual and institutional investors, conducted regularly over a 26 year period in the United States, show that they assess the probability to be much higher. We examine the factors that influence investor responses and test the role of media influence. We find evidence consistent with an availability bias. Recent market declines and adverse market events made salient by the financial press are associated with higher subjective crash probabilities. Non-market-related, rare disasters are also associated with higher subjective crash probabilities.
It Takes a Village to Maintain a Dangerous Financial System abstract I discuss the motivations and actions (or inaction) of individuals in the financial system, governments, central banks, academia and the media that collectively contribute to the persistence of a dangerous and distorted financial system and inadequate, poorly designed regulations. Reassurances that regulators are doing their best to protect the public are false. The underlying problem is a powerful mix of distorted incentives, ignorance, confusion, and lack of accountability. Willful blindness seems to play a role in flawed claims by the system’s enablers that obscure reality and muddle the policy debate.
Conflicts of Interest in Self-Regulation: Can Demutualized Exchanges Successfully Manage Them? abstract Carson examines the implications of demutualization of financial exchanges for their roles as self-regulatory organizations. Many regulators and exchanges believe that conflicts of interest increase when exchanges convert to for-profit businesses. Demutualization also changes the nature of an exchange's regulatory role as broker-dealers' ownership interests are reduced. These factors are leading to reduced regulatory roles for exchanges in many jurisdictions. The resulting changes have significant implications for regulation of financial markets, especially as exchanges are the only self-regulating organizations (SROs) in most countries. Major changes in the role of exchanges require a rethinking of the allocation of regulatory functions and the role of self-regulation, as well as stronger mechanisms to mitigate conflicts of interest.
Carson looks at the views of both exchanges and regulators on these issues in Asian, European, and North American jurisdictions where major exchanges have converted to for-profit businesses. He finds that views on the conflicts of interest faced by demutualized exchanges vary widely. In addition, the tools and processes used by exchanges and regulators to manage conflicts also differ significantly across jurisdictions. The author concludes that new and greater conflicts result from demutualization and canvasses the regulatory responses in the jurisdictions examined.
This paper - a product of the Financial Sector Operations and Policy Department - is part of a larger effort in the department to study the development of securities markets in emerging markets.
Market Risk Premium Used in 71 Countries in 2016: A Survey with 6,932 Answers abstract This paper contains the statistics of the Equity Premium or Market Risk Premium (MRP) used in 2016 for 71 countries. We got answers for more countries, but we only report the results for 71 countries with more than 8 answers. 54% of the MRP used in 2016 decreased (vs. 2015) and 38% increased.
Most previous surveys have been interested in the Expected MRP, but this survey asks about the Required MRP. The paper also contains the references used to justify the MRP, and comments from 46 persons.
Rethinking Margin Period of Risk abstract We describe a new framework for collateralized exposure modelling under an ISDA Master Agreement with a Credit Support Annex. The proposed model captures legal and operational aspects of default in considerably greater detail than models currently used by most practitioners, while remaining fully tractable and computationally feasible. Specifically, it considers the remedies and suspension rights available within these legal agreements; the firm's policies in availing itself of these rights; and the typical time it takes to exercise them in practice. The inclusion of these effects is shown to produce significantly higher credit exposure for representative portfolios compared to the currently used models. The increase is especially pronounced when dynamic initial margin is also present.
What's Hot in Finance (2011-2015)? abstract To catalyze my fourth-year Ph.D. students in the Hong Kong University of Science and Technology to think of new ideas after their comprehensive examinations, I asked each one of them to read the abstracts of finance articles published in the last 5 years in JF, JFE, RFS and JFQA. Here are some general observations from the hard data in the attached slides and the soft data generated in our discussion: 1) The journals are not as US-centric as commonly believed in Asia. Empirical papers using non-US data have risen to about 17% of all empirical papers, and this number is about the same in all 4 journals. 2) The number of authors per article is trending up, with the mean being about 2.6. 3) Classification is very difficult. More papers are being published that are not only intra-discipline but also inter-disciplinary. Macro-banking-finance is the hottest inter-disciplinary topic. 4) The ratio of theory to empirical papers is highest in RFS (about 1:2) and lowest in JFQA (about 1:6). No trends are discernible in asset pricing, but fewer theory papers are being published in corporate finance. 5) In corporate finance, the dominance of the old chestnuts – capital structure and corporate governance – is trending down. M&A is popular. Links between markets, banks and firms are a hot new area. 6) In asset pricing, identification of new sources of risk, and pricing of non-equity assets are hot new areas. 7) In investments, hedge funds, venture capital and private equity are hot. 8) High frequency trading has resuscitated the market microstructure area. 9) Niche areas like household finance, culture and finance, politics and finance, labor and finance, media and finance, networks and finance, are not niche anymore. 10) Amongst finance journals, JF is No. 1 followed by the RFS. 11) Top non-finance journals like the AER, JPE, QJE and Management Science publish many finance papers.
With the notable exception of QJE, these journals now have lower impact factors than the top 3 finance journals.
Sticky Expectations and Stock Market Anomalies abstract We propose a simple model in which investors price a stock using a persistent signal and sticky belief dynamics à la Coibion and Gorodnichenko (2012). In this model, returns can be forecasted using (1) past profits, (2) past change in profits, and (3) past returns. The model thus provides a joint theory of two of the most economically significant anomalies, i.e. quality and momentum. According to the model, these anomalies should be correlated, and be stronger when signal persistence is higher, or when earnings expectations are stickier. Using I/B/E/S data, we measure expectation stickiness at the analyst level. We find that analysts are on average sticky and, consistent with a limited attention hypothesis, more so when they cover more industries. We then find strong support for the model's prediction in the data: both the momentum and the quality anomaly are stronger for stocks with more persistent profits, and for stocks which are followed by stickier analysts. Consistent with the model, both strategies also comove significantly.
Financial Reporting Quality of Chinese Reverse Merger Firms: The Reverse Merger Effect or the China Effect? abstract In this paper, we examine why Chinese reverse merger (RM) firms have lower financial reporting quality. We find that while U.S. RM firms have similar financial reporting quality as matched U.S. IPO firms, Chinese RM firms exhibit lower financial reporting quality than Chinese ADR firms. We further find that Chinese RM firms exhibit lower financial reporting quality than U.S. RM firms. These results indicate that the use of RM process is associated with poor financial reporting quality only in firms from China, where the legal enforcement is weaker than U.S. In addition, we find that compared to Chinese ADR firms, Chinese RM firms have lower CEO turnover performance sensitivity, a measure of bonding incentives, and poorer corporate governance, which in turn explains the lower financial reporting quality in Chinese RM firms. Overall the results suggest that the RM process provides Chinese firms with low bonding incentives and poor governance the opportunity to access the U.S. capital markets, resulting in poor financial reporting quality in Chinese RM firms.
Sell in May and Go Away in the Equity Index Futures Markets abstract The period from May 1 to the turn of the month of November (the last five trading days of October) has historically produced negligible returns. The rest of the year (late October to the end of April) has produced essentially all of the year's gains. In this paper we show that there is a statistically significant difference and conclude that the strategy of going to cash in the weak period and going long in the strong period has about double the returns of buy-and-hold for the large-cap S&P 500 index and triple for the small-cap Russell 2000 index during the period 1993-2015 in the index futures markets.
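The seasonal switch the abstract describes can be sketched as a simple backtest. This is a minimal illustration with toy monthly data, not the paper's methodology: the exact turn-of-the-month cutoff, the futures implementation, and the return figures below are simplifying assumptions (the strong period is approximated here as the calendar months November through April).

```python
def seasonal_vs_buyhold(dates, returns):
    """dates: list of (year, month, day); returns: daily simple returns.
    Strategy: hold the index in the strong period (Nov-Apr, simplified),
    sit in cash earning 0% otherwise. Returns (strategy, buy-and-hold)."""
    strat, hold = 1.0, 1.0
    for (y, m, d), r in zip(dates, returns):
        hold *= 1.0 + r                  # buy-and-hold takes every return
        if m in (11, 12, 1, 2, 3, 4):    # simplified "strong period"
            strat *= 1.0 + r             # strategy is long only here
    return strat - 1.0, hold - 1.0

# toy example: +2% in each winter month, -1% in each summer month
dates = [(2015, m, 15) for m in range(1, 13)]
rets = [0.02 if m in (11, 12, 1, 2, 3, 4) else -0.01 for (y, m, d) in dates]
strat_ret, hold_ret = seasonal_vs_buyhold(dates, rets)
```

With these toy numbers the switching strategy captures all six winter gains and skips the summer losses, so it beats buy-and-hold; the paper's result is the empirical claim that actual index futures returns have this seasonal shape.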
An Analysis of Index Option Writing with Monthly and Weekly Rollover abstract This paper analyzes the performance of the two CBOE PutWrite Indexes through the end of 2015. The two PutWrite indexes are found to have had strong performance in several areas: 1) Annual premium income: From 2006 to 2015, the average annual gross premium collected was 24.1 percent for the PUT Index and 39.3 percent for the WPUT Index. While a one-time premium collected by the weekly WPUT Index usually was smaller than a one-time premium collected by the monthly PUT Index, the WPUT Index had higher aggregate annual premiums because premiums were collected 52 times, rather than 12 times, per year. 2) Lower risk: Over the last 10 years, since the launch of Weeklys options, the WPUT Index had a lower standard deviation than the PUT and S&P 500 Indexes. The maximum drawdowns were 24.2 percent for the WPUT Index, 32.7 percent for the PUT Index and 50.9 percent for the S&P 500 Index. 3) Higher long-term returns with lower volatility: Looking longer-term with the PUT Index, since mid-1986, the annual compound return of the PUT Index was 10.13 percent, compared with 9.85 percent for the S&P 500 Index. The standard deviation of the PUT Index was substantially lower as well, 10.16 percent versus the S&P 500 Index’s 15.26 percent.
The Effects of Usury Laws on Higher-Risk Borrowers abstract In this Article, we exploit a natural experiment -- an unexpected judicial decision -- to study the effects of state usury laws on consumer loans to higher-risk borrowers. In May 2015, the U.S. Court of Appeals for the Second Circuit issued a decision that, in effect, switched on the usury laws of three States, rendering those laws enforceable against owners of consumer loans that had previously been issued under the expectation that the usury laws were preempted by federal statute. Using proprietary data from three marketplace lending platforms, we study the decision’s effect on consumer credit markets.
We find that the court’s decision significantly impaired credit availability for riskier borrowers, shrinking loan issuances to borrowers with the lowest FICO scores. We see no evidence, however, of strategic defaults by borrowers in these markets, despite the fact that the decision suggests that their loans are unenforceable. We also examine secondary market trading in notes backed by non-current, potentially usurious loans in the Second Circuit, and find that the decision reduced the prices of those notes. We do not, however, find evidence of a similar price decrease for notes backed by potentially usurious loans that the borrower continues to pay on time - suggesting that investors do not anticipate an increase in strategic defaults as a result of the court’s decision.
Gold and Silver Manipulation: What Can Be Empirically Verified? abstract The issue of gold and silver price manipulation, in particular price suppression, is examined. We use a mixture of normal approach to decompose the returns into abnormal and control samples. Price suppression is a form of market manipulation of the runs type where longer negative runs with lower returns than expected would be observed. To explore whether this form of manipulation can be empirically detected the length of runs and the total return observed during a run were computed for modelled abnormal and control clusters in gold and silver. In both metals the proportion of negative runs in the abnormal cluster is greater than the proportion of negative runs in the control cluster. In both cases the average return for negative runs is significantly lower in the abnormal cluster than in the control cluster. When average returns over positive runs are compared the abnormal group has significantly higher expected returns than the control group.
Given the short maximum run lengths in the abnormal cluster and the fact that positive runs have significantly higher average returns in the abnormal cluster than in the control cluster, it is likely that the high volatility associated with the abnormal cluster is the driver of the results presented in this study, as opposed to manipulation.
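The runs decomposition underlying this test can be sketched as follows: split a return series into maximal same-sign runs and record each run's length and total return. This is a generic illustration, not the authors' code; in particular, grouping zero returns with positive ones is an assumption of this sketch, not necessarily the paper's convention.

```python
def runs(returns):
    """Split a return series into maximal same-sign runs.
    Returns a list of (sign, length, total_return) tuples,
    where sign is +1 or -1 (zeros counted as +1 here, an assumption)."""
    out = []
    i, n = 0, len(returns)
    while i < n:
        sign = 1 if returns[i] >= 0 else -1
        j, total = i, 0.0
        while j < n and (1 if returns[j] >= 0 else -1) == sign:
            total += returns[j]          # accumulate return over the run
            j += 1
        out.append((sign, j - i, total))
        i = j                            # next run starts where this one ends
    return out

# three runs: up (length 2), down (length 3), up (length 1)
rs = runs([0.01, 0.02, -0.03, -0.01, -0.02, 0.005])
```

Comparing the distributions of `(length, total_return)` for negative runs between an abnormal-volatility cluster and a control cluster is then the empirical question the paper addresses.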
How Prevalent and Profitable are Latency Arbitrage Opportunities on U.S. Stock Exchanges? abstract In this study, I examine the prevalence of latency arbitrage opportunities that arise due to the fragmentation of trading across multiple exchanges. I analyze order and quote data from the U.S. Securities and Exchange Commission's Market Information Data Analytics System (MIDAS), which aggregates consolidated feeds and direct proprietary feeds from each U.S. stock exchange. This paper provides evidence that high-frequency traders have numerous opportunities to realize profits from latency arbitrage. These opportunities are significantly more prevalent in larger stocks and on certain exchanges. I estimate that total potential profit from latency arbitrage opportunities in S&P 500 ticker symbols was approximately $3.03 billion in 2014.
Trend Without Hiccups - A Kalman Filter Approach abstract Have you ever felt miserable because of a sudden whipsaw in the price that triggered an unfortunate trade? In an attempt to remove this noise, technical analysts have used various types of moving averages (simple, exponential, adaptive, or based on the Nyquist criterion). These tools may have performed decently, but we show in this paper that this can be improved dramatically thanks to the optimal filtering theory of Kalman filters (KF). We explain the basic concepts of KF and its optimality criterion. We provide pseudo code for this new technical indicator that demystifies its complexity. We show that this new smoothing device can be used to better forecast price moves as lag is reduced. We provide 4 Kalman filter models and their performance on the S&P 500 mini-futures contract. Results are quite illustrative of the efficiency of KF models, with the best net performance achieved by the KF model combining smoothing and extremum position.
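The simplest member of this family can be sketched in a few lines: a one-dimensional local-level Kalman filter used as a price smoother. This is a minimal sketch under assumed noise variances `q` and `r`, far simpler than the paper's four models, but it shows the predict-gain-update cycle that the pseudo code in the paper elaborates.

```python
def kalman_trend(prices, q=1e-5, r=1e-2):
    """Local-level (random walk + noise) Kalman filter as a price smoother.
    q: process-noise variance, r: observation-noise variance; both are
    tuning assumptions, not values from the paper."""
    x, p = prices[0], 1.0              # initial state estimate and variance
    smoothed = [x]
    for z in prices[1:]:
        p = p + q                      # predict: state unchanged, variance grows
        k = p / (p + r)                # Kalman gain in [0, 1]
        x = x + k * (z - x)            # update: move toward the observation
        p = (1.0 - k) * p              # posterior variance shrinks
        smoothed.append(x)
    return smoothed

smoothed = kalman_trend([100.0, 101.0, 99.5, 102.0, 103.0])
```

The gain `k` adapts each step: when observation noise `r` dominates, the filter leans on its prediction (heavy smoothing); when process noise `q` dominates, it tracks the latest price closely (low lag). That trade-off is what fixed-window moving averages cannot adjust on the fly.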
Value Creation Thinking: Powerpoint Presentation abstract Long-term value creation begins with clarity about the purpose of the firm and about management's core responsibilities. Value creation is critically tied to how well management develops and maintains a knowledge-building culture. These ideas are plainly communicated in this PowerPoint presentation which summarizes my book, Value Creation Thinking. The presentation is well suited for classroom discussion and includes an explanation of the life-cycle valuation model, which is used extensively by money management firms worldwide. Also included are long-term, life-cycle charts of major firms that illustrate how managerial skill and competition interact to determine firms' long-term financial performance and, ultimately, shareholder returns.
Funding Value Adjustments abstract We demonstrate that large funding value adjustments (FVAs) being made by derivatives dealers to the disclosed valuations of their swap books are not consistent with any coherent notion of fair market value. Essentially the same funding cost adjustment is a reduction in the dealer's equity value. This reduction in equity value is exactly offset by the sum of an upward adjustment to a dealer's debt valuation (as a wealth transfer from shareholders) and a change in the present value of the dealer's financial distress costs. While others have already suggested that FVA accounting suffers from coherence problems, this paper is the first to identify and characterize these problems in the context of a full structural model of a dealer's balance sheet. In addition to giving an appropriate theoretical foundation for funding value adjustments, our model shows how dealers' bid and ask quotes should be adjusted so as to compensate shareholders for the impact of both funding costs and the dealer's own default risk. We also establish a pecking order for preferred swap financing strategies, characterize the valuation effects of initial margin financing (known as "MVA"), and provide a new interpretation of the standard debit value adjustment (DVA).
On the Profitability of Optimal Mean Reversion Trading Strategies abstract We study the profitability of optimal mean reversion trading strategies in the US equity market. Unlike regular pairs-trading practice, we apply the maximum likelihood method to construct the optimal static pairs-trading portfolio that best fits the Ornstein-Uhlenbeck process, and rigorously estimate the parameters. Therefore, we ensure that our portfolios match the mean-reverting process before trading. We then generate contrarian trading signals using the model parameters. We also optimize the thresholds and the length of the in-sample period by multiple tests. In nine good pair examples, our pairs exhibit high Sharpe ratios (above 1.9) over both the in-sample and out-of-sample periods. In particular, Crown Castle International Corp. (CCI) and HCP, Inc. (HCP) achieve a Sharpe ratio of 2.326 in the in-sample test and a Sharpe ratio of 2.425 in the out-of-sample test. Crown Castle International Corp. (CCI) and Realty Income Corporation (O) achieve Sharpe ratios of 2.405 and 2.903 in the in-sample and out-of-sample periods, respectively.
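The parameter-estimation step can be sketched by exploiting the fact that a discretely sampled Ornstein-Uhlenbeck process is an exact AR(1): X_{t+1} = a + b·X_t + ε. The least-squares fit below recovers the long-run mean, mean-reversion speed, and volatility; it is a simplified sketch of this standard identity, not the authors' full maximum-likelihood pipeline or their portfolio construction.

```python
import math

def fit_ou(x, dt=1.0):
    """Estimate OU parameters (mu, theta, sigma) from a sampled spread
    series via the exact AR(1) regression X_{t+1} = a + b*X_t + eps,
    where b = exp(-theta*dt). Least-squares sketch, not full MLE."""
    n = len(x) - 1
    sx, sy = sum(x[:-1]), sum(x[1:])
    sxx = sum(v * v for v in x[:-1])
    sxy = sum(u * v for u, v in zip(x[:-1], x[1:]))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # AR(1) slope
    a = (sy - b * sx) / n                           # AR(1) intercept
    theta = -math.log(b) / dt                       # mean-reversion speed
    mu = a / (1.0 - b)                              # long-run mean
    resid = [v - (a + b * u) for u, v in zip(x[:-1], x[1:])]
    var_e = sum(e * e for e in resid) / n
    sigma = math.sqrt(var_e * 2.0 * theta / (1.0 - b * b))
    return mu, theta, sigma
```

Once fitted, contrarian signals follow naturally: short the spread when it sits well above `mu` (by some multiple of the stationary standard deviation, `sigma / sqrt(2*theta)`), long when well below, with the thresholds themselves optimized as the abstract describes.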
Backtest Overfitting in Financial Markets abstract We introduce two online backtest overfitting tools: BODT simulates the overfitting of seasonal strategies (typical of technical analysis), and TMST simulates the overfitting of econometric strategies (typical of academic journals). We show that econometric methods lend themselves to extreme levels of overfitting, casting doubt on most investment strategies published in academic journals.
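The overfitting mechanism the abstract warns about is easy to reproduce in miniature: generate many strategies that are pure noise, select the one with the best in-sample Sharpe ratio, and observe that its apparent edge is an artifact of selection. This toy experiment is an illustration of selection bias under multiple testing, not a reimplementation of the BODT or TMST tools; all sizes and seeds are arbitrary assumptions.

```python
import random
import statistics

def sharpe(rets):
    """Daily Sharpe ratio (mean over stdev); 0 for degenerate series."""
    s = statistics.stdev(rets)
    return statistics.mean(rets) / s if s > 0 else 0.0

random.seed(7)
n_days, n_strats = 500, 200
# pure-noise daily "returns" for each candidate strategy: no real edge exists
strats = [[random.gauss(0.0, 0.01) for _ in range(n_days)]
          for _ in range(n_strats)]
insample = [s[:250] for s in strats]     # period used for selection
outsample = [s[250:] for s in strats]    # untouched holdout period
best = max(range(n_strats), key=lambda i: sharpe(insample[i]))
# The selected winner looks impressive in sample purely by chance;
# out of sample its expected Sharpe ratio reverts to zero.
```

The more configurations an econometric backtest explores, the larger the best in-sample statistic becomes under the null of no skill, which is why the authors argue that strategies selected this way in academic journals deserve skepticism.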
Factor Investing with Smart Beta Indices abstract The added value of smart beta indices is known to be explained by exposures to established factor premiums, but does that make these indices suitable for implementing a factor investing strategy? This paper finds that the amount of factor exposure provided by popular smart beta strategies differs considerably, as does their degree of focus on a single target factor. It also provides insight into how ‘quality’ and ‘high dividend’ indices relate to academic factors. Smart beta indices exhibit a performance that is in line with the amount of factor exposure provided, but it seems that they do not unlock the full potential offered by factor premiums. Altogether, these results imply that factor investing with smart beta indices is not as straightforward as one might think.
Factor-Based Investing abstract The asset management industry has seen a strong development of factor-based investing. The central idea is that each asset can be seen as a bundle of underlying factor sensitivities. A factor-based investing approach provides better insight into the risk decomposition of the investment portfolio’s assets and potentially leads to better investment decision-making.
In this paper we explore how investors should take account of underlying factors driving their portfolio returns. We show that underlying factors explain the majority of return variation among assets. We find there are times that a given factor sensitivity offers exceptionally high or low rewards in all assets exposed to it. These circumstances lead to an opportunity for market timing.
We propose a pragmatic and intuitive approach for identifying and measuring underlying factors in a portfolio via a heat map. We argue that investors seeking to adopt a factor-based approach use it in conjunction with traditional asset allocation, rather than as a substitute. In addition, we provide suggestions on how to embed the factor-based approach within an existing investment process.
Finally, a word about our relationship with ABN AMRO. The research project on factor-based investing is part of ABN AMRO Private Banking’s continuous process to challenge, and thereby to improve the investment process with new insights in the financial markets and investment approaches. Therefore we were given the task of writing this report.
Following the Money: Lessons from the Panama Papers, Part 1: Tip of the Iceberg abstract Widely known as the “Panama Papers,” the world’s largest whistleblower case to date consists of 11.5 million documents and involves a year-long effort by the International Consortium of Investigative Journalists to expose a global pattern of crime and corruption where millions of documents capture heads of state, criminals and celebrities using secret hideaways in tax havens. Involving the scrutiny by over 400 journalists worldwide, these documents reveal the offshore holdings of at least several hundred politicians and public officials, including the prime ministers of Iceland and Pakistan, the president of Ukraine, and the King of Saudi Arabia. More than 214,000 offshore entities appear in the leak, connected to people in more than 200 countries and territories.
Since these disclosures became public, national security implications already include abrupt regime change, and probable future political instability. It appears likely that important revelations obtained from these data will continue to be forthcoming for years to come. Presented here is Part 1 of what may ultimately constitute multi-installment coverage of this important inquiry into the illicit wealth derived from bribery, corruption, and tax evasion. This article proceeds as follows. First, disclosures regarding the treasure trove of documents from the Panama-based law firm Mossack Fonseca are reviewed. Second, the impact and cost of bribery and corruption to the global community are discussed. Third, I define and briefly explore issues surrounding "tax evasion." Fourth, the impact of social media and technological change on transparency is discussed. Finally, a few thoughts about implications for future research are offered.
Quantitative Style Investing abstract I introduce a systematic portfolio choice solution that significantly beats a benchmark market portfolio by an average of 34.2% per year after transaction costs. The corresponding annual Sharpe ratio is 1.97 compared to 0.42, over 4.7 times the size of the benchmark. A more conservative sample that excludes micro-cap stocks yields an annual Sharpe ratio 2.18 times the benchmark. I construct my solution by applying multivariable cross-sectional regressions of six key stock characteristics to aggregate forecasting signals from multiple sources. I apply simple filtering techniques to reduce estimation and sampling error, use only information known at time t, and predict expected returns. I validate the procedure by achieving results commensurate with prior studies when forming portfolios from decile sorts. However, by sorting stocks by expected returns into more extreme portfolios, i.e. 25 and 50 portfolios, I am able to further enhance performance gains over existing work.
The Value of Low Volatility abstract The evidence for the existence of a distinct low-volatility effect is mounting. However, implicit exposures to the Fama-French value factor (HML) seem to explain the performance of straightforward U.S. low-volatility strategies since 1963. In this paper I show that the value effect can neither explain the performance of large-cap low-volatility strategies pre-1963, nor post 1984, when the Fama-French value factor itself ceased to be effective in the large-cap segment of the market. Moreover, the performance of small-cap low-volatility strategies cannot be explained by the value effect during any period. Fama-MacBeth regressions support the existence of a low-volatility effect for every subsample. Based on these results and various other arguments I conclude that there exists a distinct low-volatility effect which cannot be explained by the value effect. The combined evidence even appears to be stronger for the low-volatility effect than for the value effect.