Trending academic research:


"
  • Decoding Stock Market with Quant Alphas
    Abstract: We give an explicit algorithm and source code for extracting expected returns for stocks from expected returns for alphas. Our algorithm altogether bypasses combining alphas with weights into "alpha combos". Simply put, we have developed a new method for trading alphas which does not involve combining them. This yields substantial cost savings, as alpha combos cost hedge funds around 3% of the P&L, while alphas themselves cost around 10%. Also, the extra layer of alpha combos, which our new method avoids, adds noise and suboptimality. We also arrive at our algorithm independently by explicitly constructing alpha risk models based on position data. (A sketch of the underlying linear-algebra step appears after this list.)
  • Some stylized facts of the Bitcoin market
    Abstract: In recent years a new type of tradable asset appeared, generically known as cryptocurrencies. Among them, the most widespread is Bitcoin. Given its novelty, this paper investigates some statistical properties of the Bitcoin market. This study compares the dynamics of Bitcoin and standard currencies and focuses on the analysis of returns at different time scales. We test for the presence of long memory in return time series from 2011 to 2017, using transaction data from one Bitcoin platform. We compute the Hurst exponent by means of the Detrended Fluctuation Analysis method, using a sliding window in order to measure long-range dependence. We detect that the Hurst exponent changes significantly during the first years of Bitcoin's existence, tending to stabilize in recent times. Additionally, multiscale analysis shows a similar behavior of the Hurst exponent, implying a self-similar process. (A DFA sketch appears after this list.)
  • Sure profits via flash strategies and the impossibility of predictable jumps
    Abstract: In an arbitrage-free financial market, asset prices should not exhibit jumps of a predictable magnitude at predictable times. We provide a rigorous formulation of this result in a fully general setting, only allowing for buy-and-hold positions and without imposing any semimartingale restriction. We show that asset prices do not exhibit predictable jumps if and only if there is no possibility of obtaining sure profits via high-frequency limits of buy-and-hold trading strategies. Our results imply that, under minimal assumptions, price changes occurring at scheduled dates should only be due to unanticipated information releases.
  • Multi-scale analysis of lead-lag relationships in high-frequency financial markets
    Abstract: We propose a novel estimation procedure for scale-by-scale lead-lag relationships of financial assets observed at high frequency in a non-synchronous manner. The proposed estimation procedure does not require any interpolation of the original data and is applicable to quite fine-resolution data. The validity of the proposed estimators is shown under the continuous-time framework developed in our previous work, Hayashi and Koike (2016). An empirical application shows promising results of the proposed approach. (A simplified lead-lag sketch appears after this list.)
  • Why we like the ECI+ algorithm
    Abstract: Recently a measure for Economic Complexity named ECI+ has been proposed by Albeaik et al. We like the ECI+ algorithm because it is mathematically identical to the Fitness algorithm, the measure for Economic Complexity we introduced in 2012. We demonstrate that the mathematical structure of ECI+ is strictly equivalent to that of Fitness (up to normalization and rescaling). We then show how the claims of Albeaik et al. about the ability of Fitness to describe the Economic Complexity of a country are incorrect. Finally, we hypothesize how the wrong results reported by these authors could have been obtained by not iterating the algorithm. (A sketch of the Fitness iteration appears after this list.)
  • Order Flows and Limit Order Book Resiliency on the Meso-Scale
    Abstract: We investigate the behavior of limit order books on the meso-scale, motivated by order execution scheduling algorithms. To do so, we carry out an empirical analysis of the order flows from market and limit order submissions, aggregated from tick-by-tick data via volume-based bucketing, as well as various LOB depth and shape metrics. We document a nonlinear relationship between trade imbalance and price change, which, however, can be converted into a linear link by considering a weighted average of market and limit order flows. We also document a hockey-stick dependence between trade imbalance and one-sided limit order flows, highlighting numerous asymmetric effects between the active and passive sides of the LOB. To address the phenomenological features of price formation, book resilience, and scarce liquidity, we apply a variety of statistical models to test the predictive power of different predictors. We show that on the meso-scale the limit order flows (as well as the relative addition/cancellation rates) carry the most predictive power. Another finding is that the deeper LOB shape, rather than just the book imbalance, is more relevant on this timescale. The empirical results are based on an analysis of six large-tick assets from Nasdaq.
  • Optimal placement of a small order in a diffusive limit order book
    Abstract: We study the optimal placement problem of a stock trader who wishes to clear his/her inventory by a predetermined time horizon t, using a limit order or a market order. For a diffusive market, we characterize the optimal limit order placement policy and analyze its behavior under different market conditions. In particular, we show that, in the presence of a negative drift, there exists a critical time t0 > 0 such that, for any time horizon t > t0, there exists an optimal placement which, contrary to earlier work, is different from one placed "infinitesimally" close to the best ask, such as the best bid and second best bid. We also propose a simple method to approximate the critical time t0 and the optimal order placement.
  • Cardinality constrained portfolio selection via factor models
    Abstract: In this paper we propose and discuss different 0-1 linear models for solving the cardinality constrained portfolio problem by using factor models. Factor models are used to build portfolios that track indexes, among other objectives, and require fewer parameters to be estimated than the classical Markowitz model. The addition of cardinality constraints limits the number of securities in the portfolio. Restricting the number of securities allows us to obtain a concentrated portfolio, reduce risk, and limit transaction costs. To solve this problem, a pure 0-1 model is presented in this work, constructed by means of a piecewise linear approximation. We also present a new quadratic combinatorial problem, called the minimum edge-weighted clique problem, to obtain an equally weighted cardinality constrained portfolio. A piecewise linear approximation for this problem is presented in the context of a multi-factor model. For a single-factor model, we present a fast heuristic, based on some theoretical results, to obtain an equally weighted cardinality constrained portfolio. The piecewise linear approximation allows us to significantly reduce the computation time required for the equivalent quadratic problem. Computational results from the 0-1 models are compared to those obtained using a state-of-the-art quadratic MIP solver.
  • Optimum thresholding using mean and conditional mean square error
    Abstract: We consider a univariate semimartingale model X for (the logarithm of) an asset price, containing jumps of possibly infinite activity (IA). The nonparametric threshold estimator of the integrated variance IV proposed in Mancini (2009) is constructed from observations on a discrete time grid: precisely, it sums the squared increments of the process when they are below a threshold, a deterministic function of the observation step and possibly of the coefficients of X. All threshold functions satisfying given conditions yield asymptotically consistent estimates of IV; however, the finite sample properties of the truncated realized variation can depend on the specific choice of the threshold. We aim here at optimally selecting the threshold by minimizing either the estimation mean square error (MSE) or the conditional mean square error (cMSE). The latter criterion allows us to reach a threshold which is optimal not in mean but for the specific volatility and jump paths at hand. A parsimonious characterization of the optimum is established, which turns out to be asymptotically proportional to the Lévy modulus of continuity of the underlying Brownian motion. Moreover, minimizing the cMSE enables us to propose a novel implementation scheme for approximating the optimal threshold. Monte Carlo simulations illustrate the superior performance of the proposed method. (A truncated-realized-variance sketch appears after this list.)
  • The "Size Premium" in Equity Markets: Where is the Risk? abstract We find that when measured in terms of dollar-turnover, and once $\beta$-neutralised and Low-Vol neutralised, the Size Effect is alive and well. With a long term t-stat of $5.1$, the "Cold-Minus-Hot" (CMH) anomaly is certainly not less significant than other well-known factors such as Value or Quality. As compared to market-cap based SMB, CMH portfolios are much less anti-correlated to the Low-Vol anomaly. In contrast with standard risk premia, size-based portfolios are found to be virtually unskewed. In fact, the extreme risk of these portfolios is dominated by the large cap leg; small caps actually have a positive (rather than negative) skewness. The only argument that favours a risk premium interpretation at the individual stock level is that the extreme drawdowns are more frequent for small cap/turnover stocks, even after accounting for volatility. This idiosyncratic risk is however clearly diversifiable.
  • Explicit expressions for European option pricing under a generalized skew normal distribution
    Abstract: Under a generalized skew normal distribution we consider the problem of European option pricing. Existence of the martingale measure is proved. An explicit expression for a given European option price is presented in terms of the cumulative distribution function of the univariate skew normal and the bivariate standard normal distributions. Some special cases are investigated in greater detail. To assess the sensitivity of the option price to the skew parameters, numerical methods are applied. Some concluding remarks and directions for further work are given. The results obtained are extensions of the results provided by [4]. (A Monte Carlo sketch appears after this list.)
  • Exact probability distribution function for the volatility of cumulative production
    Abstract: In this paper we study the volatility, and its probability distribution function, of cumulative production based on the experience curve hypothesis. This work presents a generalization of the study of volatility in [1], which addressed the effects of normally distributed noise in the production process. Given its wide applicability in industrial and technological activities, we present here the mathematical foundation for an arbitrary distribution function of the process, which we expect will pave the way for future research on production and market strategy.
  • Derivative-Based Optimization with a Non-Smooth Simulated Criterion
    Abstract: Indirect inference requires simulating realizations of endogenous variables from the model under study. When the endogenous variables are discontinuous functions of the model parameters, the resulting indirect inference criterion function is discontinuous and does not permit the use of derivative-based optimization routines. Using a specific class of measure changes, we propose a novel simulation algorithm that alleviates the underlying discontinuities inherent in the indirect inference criterion function, permitting the application of derivative-based optimization routines to estimate the unknown model parameters. Unlike competing approaches, this approach does not rely on kernel smoothing or bandwidth parameters. Several Monte Carlo examples that have featured in the literature on indirect inference with discontinuous outcomes illustrate the approach. These examples demonstrate that this new method gives superior performance over existing alternatives in terms of bias, variance and coverage.
  • On optimal periodic dividend strategies for Lévy risk processes
    Abstract: We revisit the optimal periodic dividend problem, where dividend payments can only be made at the jump times of an independent Poisson process. Recent results have shown, in the dual (spectrally positive) model, the optimality of a periodic barrier strategy, where dividends are paid at dividend-decision times if and only if the surplus is above some level. In this paper, we show its optimality for a spectrally negative Lévy model with a completely monotone Lévy density. The optimal strategies and the value functions are concisely written in terms of the scale function. Numerical results are also given.
  • On the overestimation of the largest eigenvalue of a covariance matrix
    Abstract: In this paper, we use a new approach to prove that the largest eigenvalue of the sample covariance matrix of a normally distributed vector is bigger than the true largest eigenvalue with probability 1 when the dimension is infinite. We prove a similar result for the smallest eigenvalue. (A Monte Carlo illustration appears after this list.)
  • Spurious memory in non-equilibrium stochastic models of imitative behavior
    Abstract: The origin of long-range memory in non-equilibrium systems is still an open problem, as the phenomenon can be reproduced using models based on Markov processes. In such cases a notion of spurious memory is introduced. A good example of a Markov process with spurious memory is a stochastic process driven by a non-linear stochastic differential equation (SDE). This example is at odds with models built using fractional Brownian motion (fBm). We analyze the differences between these two cases, seeking to establish possible empirical tests of the origin of the observed long-range memory. We investigate probability density functions (PDFs) of burst and inter-burst duration in numerically obtained time series and compare them with the results of fBm. Our analysis confirms that the characteristic feature of processes described by a one-dimensional SDE is the power-law exponent 3/2 of the burst or inter-burst duration PDF. This property might be used to detect spurious memory in various non-equilibrium systems, where the observed macroscopic behavior can be derived from the imitative interactions of agents. (A burst-duration sketch appears after this list.)
  • Machine learning in sentiment reconstruction of the simulated stock market
    Abstract: In this paper we continue the study of the simulated stock market framework defined by driving sentiment processes. We focus on a market environment driven by a buy/sell trading sentiment process of the Markov chain type. We apply the methodology of Hidden Markov Models and Recurrent Neural Networks to reconstruct the transition probability matrix of the Markov sentiment process and recover the underlying sentiment states from the observed stock price behavior. (An HMM sketch appears after this list.)
  • A General Class of Multifractional Processes and its Application to Cross-listing Stocks
    Abstract: We introduce a general class of multifractional stochastic processes driven by a multifractional Brownian motion and study estimation of their pointwise Hölder exponents (PHE) using the localized generalized quadratic variation approach. By comparing it with two other benchmark estimation approaches through a simulation study, we show that our estimator has better performance in the case where the observed process is some unknown bivariate function of time and multifractional Brownian motion. The time-varying PHE feature allows us to apply this class of multifractional processes to model stock prices under various market conditions. An empirical study on modeling cross-listed stocks provides new evidence that an equity's path roughness varies with time and with price informativeness properties across global markets.
  • A Mean Field Competition
    Abstract: We introduce a mean field game with rank-based reward: competing agents optimize their effort to achieve a goal, are ranked according to their completion time, and are paid a reward based on their relative rank. First, we propose a tractable Poissonian model in which we can describe the optimal effort for a given reward scheme. Second, we study the principal-agent problem of designing an optimal reward scheme. A surprising, explicit design is found to minimize the time until a given fraction of the population has reached the goal.
  • 729 new measures of economic complexity (Addendum to Improving the Economic Complexity Index)
    Abstract: Recently we uploaded to the arXiv a paper entitled "Improving the Economic Complexity Index". There, we compared three metrics of the knowledge intensity of an economy: the original metric we published in 2009 (the Economic Complexity Index, or ECI), a variation of the metric proposed in 2012, and a variation we called ECI+. It was brought to our attention that the definition of ECI+ was equivalent to the variation of the metric proposed in 2012. We have verified this claim, and found that while the equations are not exactly the same, they are similar enough for this to be our own oversight. More importantly, we now ask: how many variations of the original ECI work? In this paper we provide a simple unifying framework to explore multiple variations of ECI, including both the original 2009 ECI and the 2012 variation. We find that a large fraction of variations have similar predictive power, indicating that the chance of finding a variation of ECI that works, after the seminal 2009 measure, is surprisingly high. In fact, more than 28 percent of these variations have a predictive power that is within 90 percent of the maximum for any variation. These findings show that, once the idea of measuring economic complexity was out, creating a variation with similar predictive power (like the ones proposed in 2012) was trivial (a 1 in 3 shot). More importantly, the results show that using exports data to measure the knowledge intensity of an economy is a robust phenomenon that works for multiple functional forms. Moreover, the fact that multiple variations of the 2009 ECI perform close to the maximum tells us that no variation of ECI will have a performance that is substantially better. This suggests that research efforts should focus on uncovering the mechanisms that contribute to the diffusion and accumulation of productive knowledge, rather than on exploring small variations of existing measures. (An ECI sketch appears after this list.)
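For "Decoding Stock Market with Quant Alphas": the paper promises an explicit algorithm and source code; the sketch below only illustrates the generic linear-algebra step of recovering stock-level expected returns from alpha-level ones when each alpha's stock positions are known. The position matrix W and the least-squares recovery are our illustrative reading, not the paper's actual algorithm (which constructs alpha risk models from position data).

```python
import numpy as np

rng = np.random.default_rng(5)
n_alphas, n_stocks = 50, 20
W = rng.standard_normal((n_alphas, n_stocks))     # hypothetical alpha positions per stock
mu_stocks = 0.01 * rng.standard_normal(n_stocks)  # "true" stock expected returns
mu_alphas = W @ mu_stocks                         # implied alpha expected returns

# recover stock-level expectations from alpha-level ones by least squares
mu_hat, *_ = np.linalg.lstsq(W, mu_alphas, rcond=None)
print("max recovery error:", np.abs(mu_hat - mu_stocks).max())
```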
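For "Some stylized facts of the Bitcoin market": a minimal sketch of the Hurst exponent estimation via Detrended Fluctuation Analysis named in the abstract. The data here are synthetic i.i.d. Gaussian returns (so H should come out near 0.5); the paper applies the same idea to Bitcoin returns over a sliding window.

```python
import numpy as np

def dfa_hurst(returns, scales):
    """Estimate the Hurst exponent of a return series via DFA."""
    profile = np.cumsum(returns - returns.mean())  # integrated (profile) series
    fluctuations = []
    for n in scales:
        n_windows = len(profile) // n
        f2 = []
        for w in range(n_windows):
            segment = profile[w * n:(w + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, segment, 1), t)  # local linear detrend
            f2.append(np.mean((segment - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    # F(n) ~ n**H, so H is the slope in log-log coordinates
    slope, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
    return slope

rng = np.random.default_rng(0)
returns = rng.standard_normal(10_000)          # i.i.d. noise: expect H near 0.5
scales = np.array([16, 32, 64, 128, 256, 512])
print(f"estimated H = {dfa_hurst(returns, scales):.3f}")
```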
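For "Multi-scale analysis of lead-lag relationships": the paper's estimator is scale-by-scale and handles non-synchronous data; as a much simpler stand-in, this sketch scans for the lag maximizing the cross-correlation of two synchronously sampled return series, where series b is series a delayed by 5 ticks plus noise.

```python
import numpy as np

rng = np.random.default_rng(6)
n, true_lag = 10_000, 5
a = rng.standard_normal(n)
b = np.roll(a, true_lag) + 0.5 * rng.standard_normal(n)  # b lags a by 5 ticks
b[:true_lag] = rng.standard_normal(true_lag)             # overwrite wrapped values

def xcorr(lag):
    """Correlation between a(t) and b(t + lag)."""
    if lag >= 0:
        return np.corrcoef(a[: n - lag], b[lag:])[0, 1]
    return np.corrcoef(a[-lag:], b[: n + lag])[0, 1]

best = max(range(-20, 21), key=xcorr)
print(f"estimated lead-lag: {best} ticks (true {true_lag})")
```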
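For "Why we like the ECI+ algorithm": a minimal sketch of the 2012 Fitness-Complexity iteration that the authors argue is equivalent to ECI+ up to normalization and rescaling. M is a random toy binary country-product matrix, not real export data.

```python
import numpy as np

def fitness_complexity(M, n_iter=100):
    """Iterate the Fitness-Complexity map: fit countries export complex
    products; complex products are exported only by fit countries."""
    F = np.ones(M.shape[0])   # country fitness
    Q = np.ones(M.shape[1])   # product complexity
    for _ in range(n_iter):
        F_new = M @ Q                      # sum of exported products' complexities
        Q_new = 1.0 / (M.T @ (1.0 / F))    # penalize products exported by unfit countries
        F = F_new / F_new.mean()           # normalize at each iteration
        Q = Q_new / Q_new.mean()
    return F, Q

rng = np.random.default_rng(1)
M = (rng.random((20, 50)) < 0.3).astype(float)
M[M.sum(axis=1) == 0, 0] = 1.0             # ensure every country exports something
M[0, M.sum(axis=0) == 0] = 1.0             # ensure every product has an exporter
F, Q = fitness_complexity(M)
print("country fitness ranking:", np.argsort(-F)[:5])
```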
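For "Optimum thresholding using mean and conditional mean square error": a minimal sketch of the Mancini-style truncated realized variance described in the abstract, with the threshold given the Lévy-modulus shape sqrt(2*dt*log(1/dt)) that the paper identifies as asymptotically optimal. The constant 3 "volatilities" is an illustrative choice, not the paper's optimum.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n = 1.0, 23_400                 # one trading day on a 1-second grid
dt = T / n
sigma = 0.2                        # true (constant) volatility

# log-price increments: Brownian motion plus a few large jumps
increments = sigma * np.sqrt(dt) * rng.standard_normal(n)
jump_times = rng.choice(n, size=5, replace=False)
increments[jump_times] += rng.choice([-1.0, 1.0], size=5) * 0.05

# keep squared increments only when below the threshold
threshold = 3 * sigma * np.sqrt(2 * dt * np.log(1 / dt))   # Lévy-modulus shape
truncated_rv = np.sum(increments**2 * (np.abs(increments) <= threshold))

print(f"true IV           = {sigma**2 * T:.5f}")
print(f"realized variance = {np.sum(increments**2):.5f}  (jump-contaminated)")
print(f"truncated RV      = {truncated_rv:.5f}  (jumps filtered)")
```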
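For "Explicit expressions for European option pricing under a generalized skew normal distribution": a purely illustrative Monte Carlo counterpart. The paper derives closed-form prices under a proper martingale measure; here we only assume a skew-normal terminal log-price and mean-correct the simulated prices so the underlying is driftless after discounting, which is an assumption of this sketch, not the paper's construction.

```python
import numpy as np
from scipy.stats import skewnorm

rng = np.random.default_rng(8)
S0, K, r, T = 100.0, 105.0, 0.02, 0.5
skew, scale = 4.0, 0.15                       # assumed skew-normal shape and scale

x = skewnorm.rvs(skew, scale=scale, size=1_000_000, random_state=rng)
S_T = S0 * np.exp(x)
S_T *= S0 * np.exp(r * T) / S_T.mean()        # enforce E[S_T] = forward price

call = np.exp(-r * T) * np.maximum(S_T - K, 0.0).mean()
print(f"skew-normal MC call price: {call:.3f}")
```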
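For "On the overestimation of the largest eigenvalue of a covariance matrix": a minimal Monte Carlo illustration. For Gaussian data with true covariance I (all eigenvalues 1), the largest sample eigenvalue systematically overshoots 1 and the smallest undershoots it. Dimensions here are finite, so this only illustrates the tendency the paper proves in the infinite-dimensional limit.

```python
import numpy as np

rng = np.random.default_rng(3)
n_obs, dim, n_trials = 500, 100, 200
top, bottom = [], []
for _ in range(n_trials):
    X = rng.standard_normal((n_obs, dim))        # true covariance = identity
    eigvals = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    top.append(eigvals[-1])
    bottom.append(eigvals[0])

print(f"largest sample eigenvalue : mean {np.mean(top):.3f}  (true 1.0)")
print(f"smallest sample eigenvalue: mean {np.mean(bottom):.3f} (true 1.0)")
print(f"trials with top eigenvalue > 1: {np.mean(np.array(top) > 1):.2f}")
```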
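For "Spurious memory in non-equilibrium stochastic models": a minimal check of the paper's signature exponent 3/2 for burst durations of a one-dimensional diffusion. A plain Gaussian random walk stands in for the nonlinear SDE studied in the paper; the fitted exponent should come out roughly 1.5.

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.cumsum(rng.standard_normal(2_000_000))
above = x > 0.0                                   # threshold at the starting level

# lengths of consecutive runs above the threshold = burst durations
changes = np.flatnonzero(np.diff(above.astype(int)))
runs = np.diff(changes)
bursts = runs[::2] if above[changes[0] + 1] else runs[1::2]

# fit the tail exponent of the duration PDF on a log-log histogram
counts, edges = np.histogram(bursts, bins=np.logspace(0.5, 4, 25), density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
mask = counts > 0
slope, _ = np.polyfit(np.log(centers[mask]), np.log(counts[mask]), 1)
print(f"estimated burst-duration exponent: {-slope:.2f} (theory: 1.5)")
```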
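For "Machine learning in sentiment reconstruction of the simulated stock market": a minimal sketch of the HMM leg only. We simulate a two-state buy/sell sentiment Markov chain driving Gaussian returns, then recover the transition matrix from returns alone. Using the hmmlearn package is our dependency choice here, not necessarily the authors' implementation.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(4)
P_true = np.array([[0.95, 0.05],
                   [0.10, 0.90]])            # persistent sentiment regimes
means = np.array([0.001, -0.001])            # buy regime drifts up, sell down

# simulate the hidden sentiment chain and the observed returns
n = 5_000
states = np.zeros(n, dtype=int)
for t in range(1, n):
    states[t] = rng.choice(2, p=P_true[states[t - 1]])
returns = means[states] + 0.002 * rng.standard_normal(n)

model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=200)
model.fit(returns.reshape(-1, 1))
# note: recovered state labels may be permuted relative to the simulation
print("true transition matrix:\n", P_true)
print("estimated transition matrix:\n", model.transmat_.round(3))
```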
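For "729 new measures of economic complexity": a minimal sketch of the original 2009 ECI referenced in the abstract, computed as the standardized second eigenvector of a diversity/ubiquity-normalized country-country matrix. The paper's point is that many variations of this construction have similar predictive power; this shows just the baseline, on a random toy matrix rather than real export data.

```python
import numpy as np

rng = np.random.default_rng(9)
M = (rng.random((20, 50)) < 0.3).astype(float)
M[M.sum(axis=1) == 0, 0] = 1.0                 # no empty countries
M[0, M.sum(axis=0) == 0] = 1.0                 # no orphan products

diversity = M.sum(axis=1)                      # products per country
ubiquity = M.sum(axis=0)                       # countries per product
M_cc = (M / diversity[:, None]) @ (M / ubiquity).T  # country-country reflections

eigvals, eigvecs = np.linalg.eig(M_cc)
order = np.argsort(-eigvals.real)
v = eigvecs[:, order[1]].real                  # second-largest eigenvector
eci = (v - v.mean()) / v.std()                 # standardize; sign is conventional
print("ECI ranking:", np.argsort(-eci)[:5])
```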