Trending academic research:

See also: Most popular SSRN papers

"
  • qBitcoin. Abstract: A decentralized online quantum cash system, called qBitcoin, is given. We design the system to have the following benefits of quantization. First, quantum teleportation technology is used for coin transactions, which prevents the owner of a coin from keeping the original coin data after sending the coin to another party. This was a main problem in systems using classical information, and a blockchain was introduced to solve it; in qBitcoin, the double-spending problem never happens, and its security is guaranteed theoretically by virtue of quantum information theory. Second, making a block is time consuming, so qBitcoin is based on a quantum chain instead of blocks; a payment can therefore be completed much faster than in Bitcoin. Moreover, we employ a quantum digital signature, so the system naturally inherits the properties of the peer-to-peer (P2P) cash system originally proposed in Bitcoin.
  • Some stylized facts of the Bitcoin market. Abstract: In recent years a new type of tradable asset appeared, generically known as cryptocurrencies. Among them, the most widespread is Bitcoin. Given its novelty, this paper investigates some statistical properties of the Bitcoin market. The study compares Bitcoin and standard currency dynamics and focuses on the analysis of returns at different time scales. We test for the presence of long memory in return time series from 2011 to 2017, using transaction data from one Bitcoin platform. We compute the Hurst exponent by means of the Detrended Fluctuation Analysis (DFA) method, using a sliding window in order to measure long-range dependence. We detect that the Hurst exponent changes significantly during the first years of Bitcoin's existence, tending to stabilize in recent times. Additionally, multiscale analysis shows similar behavior of the Hurst exponent, implying a self-similar process. (A minimal DFA sketch appears after this list.)
  • Optimal placement of a small order in a diffusive limit order book. Abstract: We study the optimal placement problem of a stock trader who wishes to clear his or her inventory by a predetermined time horizon $t$, using a limit order or a market order. For a diffusive market, we characterize the optimal limit order placement policy and analyze its behavior under different market conditions. In particular, we show that, in the presence of a negative drift, there exists a critical time $t_0 > 0$ such that, for any time horizon $t > t_0$, there exists an optimal placement which, contrary to earlier work, is different from one that is placed "infinitesimally" close to the best ask, such as the best bid or the second-best bid. We also propose a simple method to approximate the critical time $t_0$ and the optimal order placement.
  • Multi-scale analysis of lead-lag relationships in high-frequency financial markets. Abstract: We propose a novel estimation procedure for scale-by-scale lead-lag relationships between financial assets observed at high frequency in a non-synchronous manner. The proposed procedure does not require any interpolation of the original data and is applicable to data at quite fine resolutions. The validity of the proposed estimators is shown under the continuous-time framework developed in our previous work, Hayashi and Koike (2016). An empirical application shows promising results for the proposed approach.
  • Optimum thresholding using mean and conditional mean square error. Abstract: We consider a univariate semimartingale model $X$ for (the logarithm of) an asset price, containing jumps of possibly infinite activity (IA). The nonparametric threshold estimator of the integrated variance (IV) proposed in Mancini (2009) is constructed from observations on a discrete time grid: it sums the squared increments of the process that fall below a threshold, a deterministic function of the observation step and possibly of the coefficients of $X$. All threshold functions satisfying given conditions yield asymptotically consistent estimates of IV; however, the finite-sample properties of the truncated realized variation can depend on the specific choice of threshold. We aim here at optimally selecting the threshold by minimizing either the estimation mean square error (MSE) or the conditional mean square error (cMSE). The latter criterion yields a threshold that is optimal not in mean but for the specific volatility (and jump) paths at hand. A parsimonious characterization of the optimum is established, which turns out to be asymptotically proportional to the Lévy modulus of continuity of the underlying Brownian motion. Moreover, minimizing the cMSE enables us to propose a novel implementation scheme for approximating the optimal threshold. Monte Carlo simulations illustrate the superior performance of the proposed method. (A sketch of the basic truncated estimator appears after this list.)
  • Decoding Stock Market with Quant Alphas. Abstract: We give an explicit algorithm and source code for extracting expected returns for stocks from expected returns for alphas. Our algorithm altogether bypasses combining alphas with weights into "alpha combos". Simply put, we have developed a new method for trading alphas that does not involve combining them. This yields substantial cost savings, as alpha combos cost hedge funds around 3% of the P&L while alphas themselves cost around 10%. The extra layer of alpha combos, which our new method avoids, also adds noise and suboptimality. We also arrive at our algorithm independently by explicitly constructing alpha risk models based on position data.
  • On the overestimation of the largest eigenvalue of a covariance matrix. Abstract: In this paper, we use a new approach to prove that the largest eigenvalue of the sample covariance matrix of a normally distributed vector is bigger than the true largest eigenvalue with probability 1 when the dimension is infinite. We prove a similar result for the smallest eigenvalue. (A small simulation of this effect appears after this list.)
  • 729 new measures of economic complexity (Addendum to Improving the Economic Complexity Index). Abstract: Recently we uploaded to the arXiv a paper entitled Improving the Economic Complexity Index. There, we compared three metrics of the knowledge intensity of an economy: the original metric we published in 2009 (the Economic Complexity Index, or ECI), a variation of the metric proposed in 2012, and a variation we called ECI+. It was brought to our attention that the definition of ECI+ was equivalent to the variation proposed in 2012. We have verified this claim and found that while the equations are not exactly the same, they are similar enough for this to be our own oversight. More importantly, we now ask: how many variations of the original ECI work? In this paper we provide a simple unifying framework to explore multiple variations of ECI, including both the original 2009 ECI and the 2012 variation. We find that a large fraction of variations have similar predictive power, indicating that the chance of finding a variation of ECI that works, after the seminal 2009 measure, is surprisingly high. In fact, more than 28 percent of these variations have a predictive power within 90 percent of the maximum for any variation. These findings show that, once the idea of measuring economic complexity was out, creating a variation with similar predictive power (like the ones proposed in 2012) was trivial (a 1-in-3 shot). More importantly, the results show that using export data to measure the knowledge intensity of an economy is a robust phenomenon that works for multiple functional forms. Moreover, the fact that multiple variations of the 2009 ECI perform close to the maximum tells us that no variation of ECI will have a substantially better performance. This suggests that research efforts should focus on uncovering the mechanisms that contribute to the diffusion and accumulation of productive knowledge instead of on exploring small variations to existing measures. (A sketch of the 2009 ECI computation appears after this list.)
  • Sure profits via flash strategies and the impossibility of predictable jumps. Abstract: In an arbitrage-free financial market, asset prices should not exhibit jumps of a predictable magnitude at predictable times. We provide a rigorous formulation of this result in a fully general setting, only allowing for buy-and-hold positions and without imposing any semimartingale restriction. We show that asset prices do not exhibit predictable jumps if and only if there is no possibility of obtaining sure profits via high-frequency limits of buy-and-hold trading strategies. Our results imply that, under minimal assumptions, price changes occurring at scheduled dates should only be due to unanticipated information releases.
  • Nonlinear price impact from linear models. Abstract: The impact of trades on asset prices is a crucial aspect of market dynamics for academics, regulators, and practitioners alike. Recently, universal and highly nonlinear master curves were observed for price impacts aggregated on all intra-day scales [1]. Here we investigate how well these curves, their scaling, and the underlying return dynamics are captured by linear "propagator" models. We find that the classification of trades as price-changing versus non-price-changing can explain the price impact nonlinearities and short-term return dynamics to a very high degree. The explanatory power provided by the change indicator, in addition to the order-sign history, increases with increasing tick size. To obtain these results, several long-standing technical issues for model calibration and testing are addressed. We present new spectral estimators for two- and three-point cross-correlations, removing the need for previously used approximations. We also show when calibration is unbiased and how to accurately reveal previously overlooked biases. Our results therefore contribute significantly to understanding both recent empirical results and the properties of a popular class of impact models.
  • Order Flows and Limit Order Book Resiliency on the Meso-Scale. Abstract: We investigate the behavior of limit order books on the meso-scale, motivated by order execution scheduling algorithms. To do so we carry out an empirical analysis of the order flows from market and limit order submissions, aggregated from tick-by-tick data via volume-based bucketing, as well as various LOB depth and shape metrics. We document a nonlinear relationship between trade imbalance and price change, which can however be converted into a linear link by considering a weighted average of market and limit order flows (see the regression sketch after this list). We also document a hockey-stick dependence between trade imbalance and one-sided limit order flows, highlighting numerous asymmetric effects between the active and passive sides of the LOB. To address the phenomenological features of price formation, book resilience, and scarce liquidity, we apply a variety of statistical models to test the predictive power of different predictors. We show that on the meso-scale the limit order flows (as well as the relative addition/cancellation rates) carry the most predictive power. Another finding is that the deeper LOB shape, rather than just the book imbalance, is more relevant on this timescale. The empirical results are based on an analysis of six large-tick assets from Nasdaq.
  • A General Class of Multifractional Processes and its Application to Cross-listing Stocks. Abstract: We introduce a general class of multifractional stochastic processes driven by a multifractional Brownian motion and study the estimation of their pointwise Hölder exponents (PHE) using the localized generalized quadratic variation approach. By comparing it with two other benchmark estimation approaches through a simulation study, we show that our estimator performs better in the case where the observed process is some unknown bivariate function of time and a multifractional Brownian motion. The time-varying PHE feature allows us to apply this class of multifractional processes to model stock prices under various market conditions. An empirical study on modeling cross-listed stocks provides new evidence that the roughness of an equity's path varies with time and with price-informativeness properties from global markets.
  • Cardinality constrained portfolio selection via factor models. Abstract: In this paper we propose and discuss different 0-1 linear models for solving the cardinality constrained portfolio problem using factor models. Factor models are used to build portfolios that track indexes, among other objectives, and also require fewer parameters to be estimated than the classical Markowitz model. The addition of cardinality constraints limits the number of securities in the portfolio; restricting this number yields a concentrated portfolio, reduces risk, and limits transaction costs. To solve the problem, a pure 0-1 model constructed by means of a piecewise linear approximation is presented in this work. We also present a new quadratic combinatorial problem, called the minimum edge-weighted clique problem, to obtain an equally weighted cardinality constrained portfolio. A piecewise linear approximation for this problem is presented in the context of a multi-factor model. For a single-factor model, we present a fast heuristic, based on some theoretical results, to obtain an equally weighted cardinality constrained portfolio. The piecewise linear approximation allows us to significantly reduce the computation time required for the equivalent quadratic problem. Computational results from the 0-1 models are compared to those from a state-of-the-art quadratic MIP solver.
  • Why we like the ECI+ algorithm. Abstract: Recently a measure of Economic Complexity named ECI+ was proposed by Albeaik et al. We like the ECI+ algorithm because it is mathematically identical to the Fitness algorithm, the measure of Economic Complexity we introduced in 2012. We demonstrate that the mathematical structure of ECI+ is strictly equivalent to that of Fitness (up to normalization and rescaling). We then show how the claims of Albeaik et al. about the ability of Fitness to describe the Economic Complexity of a country are incorrect. Finally, we hypothesize how the wrong results reported by these authors could have been obtained by not iterating the algorithm. (A minimal Fitness iteration appears after this list.)
  • Technology networks: the autocatalytic origins of innovation. Abstract: We search for an autocatalytic structure in networks of technological fields and evaluate its significance for technological change. To this aim we define a technology network based on the International Patent Classification, and we study whether autocatalytic structures in the network foster innovation, as measured by the rate of production of patents. The network is identified through the patenting activity of geographical regions in different technology fields. Our analysis shows how the technological landscape of the patent database evolves as a self-organising autocatalytic structure that grows in size and comes to cover most of the technology network. Technology classes in the core of the autocatalytic structure perform better in terms of innovativeness, as measured by the rate of growth of the number of patents. Finally, the links between classes that define the autocatalytic structure of the technology network break the hierarchical structure of the database, indicating that recombinant innovation and its autocatalytic patterns are an important stylised fact of technological change.
  • Exact probability distribution function for the volatility of cumulative production. Abstract: In this paper we study the volatility, and its probability distribution function, of cumulative production based on the experience-curve hypothesis. This work generalizes the study of volatility in [1], which addressed the effects of normally distributed noise in the production process. Given its wide applicability in industrial and technological activities, we present here the mathematical foundation for an arbitrary distribution function of the process, which we expect will pave the way for future research on production and market strategy.
  • Derivative-Based Optimization with a Non-Smooth Simulated Criterion. Abstract: Indirect inference requires simulating realizations of endogenous variables from the model under study. When the endogenous variables are discontinuous functions of the model parameters, the resulting indirect inference criterion function is discontinuous and does not permit the use of derivative-based optimization routines. Using a specific class of measure changes, we propose a novel simulation algorithm that alleviates the discontinuities inherent in the indirect inference criterion function, permitting the application of derivative-based optimization routines to estimate the unknown model parameters. Unlike competing approaches, this method does not rely on kernel smoothing or bandwidth parameters. Several Monte Carlo examples that have featured in the literature on indirect inference with discontinuous outcomes illustrate the approach and demonstrate that the new method outperforms existing alternatives in terms of bias, variance, and coverage.
  • On optimal periodic dividend strategies for Lévy risk processes. Abstract: We revisit the optimal periodic dividend problem, in which dividend payments can only be made at the jump times of an independent Poisson process. Recent results have shown, in the dual (spectrally positive) model, the optimality of a periodic barrier strategy, under which dividends are paid at dividend-decision times if and only if the surplus is above some level. In this paper, we show its optimality for a spectrally negative Lévy model with a completely monotone Lévy density. The optimal strategies and the value functions are concisely written in terms of the scale function. Numerical results are also given.
  • Machine learning in sentiment reconstruction of the simulated stock market. Abstract: In this paper we continue the study of the simulated stock market framework defined by driving sentiment processes. We focus on a market environment driven by a buy/sell trading sentiment process of the Markov chain type. We apply the methodology of Hidden Markov Models and Recurrent Neural Networks to reconstruct the transition probability matrix of the Markov sentiment process and to recover the underlying sentiment states from the observed stock price behavior. (A toy HMM reconstruction appears after this list.)
  • The "Size Premium" in Equity Markets: Where is the Risk? abstract We find that when measured in terms of dollar-turnover, and once $\beta$-neutralised and Low-Vol neutralised, the Size Effect is alive and well. With a long term t-stat of $5.1$, the "Cold-Minus-Hot" (CMH) anomaly is certainly not less significant than other well-known factors such as Value or Quality. As compared to market-cap based SMB, CMH portfolios are much less anti-correlated to the Low-Vol anomaly. In contrast with standard risk premia, size-based portfolios are found to be virtually unskewed. In fact, the extreme risk of these portfolios is dominated by the large cap leg; small caps actually have a positive (rather than negative) skewness. The only argument that favours a risk premium interpretation at the individual stock level is that the extreme drawdowns are more frequent for small cap/turnover stocks, even after accounting for volatility. This idiosyncratic risk is however clearly diversifiable.