Subscribe: Journal of Forecasting
http://www3.interscience.wiley.com/rss/journal/2966
Language: English
Tags: bank failures, bank, data, forecast, forecasting, forecasts, leslie matrix, model, models, population, regression, time series, time
Preview: Journal of Forecasting

Wiley Online Library : Journal of Forecasting

Published: 2017-12-01T00:00:00-05:00

 



Robust forecast aggregation: Fourier L2E regression

2017-10-12T01:40:41.373242-05:00

The Good Judgment Team, led by psychologists P. Tetlock and B. Mellers of the University of Pennsylvania, was the most successful of five research projects sponsored through 2015 by the Intelligence Advanced Research Projects Activity to develop improved group forecast aggregation algorithms. Each team had at least 10 algorithms under continuous development and evaluation over the 4-year project. The mean Brier score was used to rank the algorithms on approximately 130 questions concerning categorical geopolitical events each year. An algorithm would return aggregate probabilities for each question based on the probabilities provided per question by thousands of individuals, who had been recruited by the Good Judgment Team. This paper summarizes the theoretical basis and implementation of one of the two most accurate algorithms at the conclusion of the Good Judgment Project. The algorithm incorporated a number of pre- and postprocessing steps, and relied upon a minimum distance robust regression method called L2E. The algorithm was just edged out by a variation of logistic regression, which has been described elsewhere. Work since the official conclusion of the project has narrowed the gap even further.
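
Illustrative sketch (not from the paper): the core of L2E is a minimum distance criterion that, for a linear model with Gaussian errors, can be minimized numerically. The data shapes below are hypothetical, and the algorithm's Fourier features and pre-/postprocessing steps are omitted.

import numpy as np
from scipy.optimize import minimize

def l2e_loss(params, X, y):
    # L2E criterion for linear regression with Gaussian errors:
    # 1/(2*sqrt(pi)*sigma) - (2/n) * sum_i N(r_i | 0, sigma^2)
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)                  # keep sigma positive
    r = y - X @ beta                           # residuals
    phi = np.exp(-0.5 * (r / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return 1.0 / (2.0 * np.sqrt(np.pi) * sigma) - 2.0 * phi.mean()

def l2e_regression(X, y):
    # Warm-start from ordinary least squares, then minimize the L2E criterion
    beta0, *rest = np.linalg.lstsq(X, y, rcond=None)
    x0 = np.append(beta0, np.log(np.std(y - X @ beta0) + 1e-8))
    res = minimize(l2e_loss, x0, args=(X, y), method="Nelder-Mead")
    return res.x[:-1], np.exp(res.x[-1])       # robust coefficients, scale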



Time series forecasting using functional partial least square regression with stochastic volatility, GARCH, and exponential smoothing

2017-10-12T01:36:22.053545-05:00

We propose a method for improving the predictive ability of standard forecasting models used in financial economics. Our approach is based on the functional partial least squares (FPLS) model, which is capable of avoiding multicollinearity in regression by efficiently extracting information from high-dimensional market data. This ability also allows us to incorporate auxiliary variables that improve predictive accuracy. We provide an empirical application of our proposed methodology in terms of its ability to predict the conditional average log return and the volatility of crude oil prices via exponential smoothing, Bayesian stochastic volatility, and GARCH (generalized autoregressive conditional heteroskedasticity) models. In particular, what we term functional data analysis (FDA) traces in this article are obtained via FPLS regression from both the crude oil returns and auxiliary variables, namely the exchange rates of major currencies. For forecast performance evaluation, we compare the out-of-sample forecasting accuracy of the standard models with FDA traces to the accuracy of the same forecasting models with the observed crude oil returns, as well as to principal component regression (PCR) and least absolute shrinkage and selection operator (LASSO) models. We find evidence that the standard models with FDA traces significantly outperform the competing models. Finally, the models are also evaluated using the test for superior predictive ability and the reality check for data snooping. Our empirical results show that our new methodology significantly improves the predictive ability of standard models in forecasting the latent average log return and the volatility of financial time series.
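
Illustrative sketch (not from the paper): ordinary multivariate partial least squares, as implemented in scikit-learn, stands in here for the functional variant; the synthetic X matrix plays the role of the high-dimensional market data, and the extracted scores play the role of the FDA traces.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 40))       # stand-in for high-dimensional market data
y = X[:, :3] @ np.array([0.5, -0.3, 0.2]) + 0.1 * rng.standard_normal(500)

pls = PLSRegression(n_components=3)      # latent components sidestep multicollinearity
pls.fit(X, y)
traces = pls.transform(X)                # low-dimensional scores ("traces")
y_hat = pls.predict(X).ravel()           # traces feed into downstream forecasting models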



Direct multiperiod forecasting for algorithmic trading

2017-09-15T03:06:22.946126-05:00

This paper examines the performance of iterated and direct forecasts for the number of shares traded in high-frequency intraday data. Constructing direct forecasts in the context of formulating volume weighted average price trading strategies requires the generation of a sequence of multistep-ahead forecasts. I discuss nonlinear transformations to ensure nonnegative forecasts and lag length selection for generating a sequence of direct forecasts. In contrast to the literature based on low-frequency macroeconomic data, I find that direct multiperiod forecasts can outperform iterated forecasts when the conditioning information set is dynamically updated in real time.
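
Illustrative sketch (not from the paper): the iterated approach fits one AR(p) model and feeds its own forecasts back in, while the direct approach fits a separate regression per horizon. The paper's nonlinear nonnegativity transformations and real-time information updating are omitted.

import numpy as np

def iterated_forecast(y, p, H):
    # Fit one AR(p), then iterate H steps ahead, feeding forecasts back in
    Y = y[p:]
    X = np.column_stack([y[p - k:-k] for k in range(1, p + 1)])
    X = np.column_stack([np.ones(len(Y)), X])
    b = np.linalg.lstsq(X, Y, rcond=None)[0]
    hist = list(y[-p:])
    out = []
    for _ in range(H):
        f = b[0] + np.dot(b[1:], hist[::-1][:p])   # most recent value first
        out.append(f)
        hist.append(f)
    return np.array(out)

def direct_forecast(y, p, H):
    # Fit a separate regression of y_{t+h} on (y_t, ..., y_{t-p+1}) for each h
    out = []
    for h in range(1, H + 1):
        Y = y[p + h - 1:]
        X = np.column_stack([y[p - k: len(y) - h - k + 1] for k in range(1, p + 1)])
        X = np.column_stack([np.ones(len(Y)), X])
        b = np.linalg.lstsq(X, Y, rcond=None)[0]
        out.append(b[0] + np.dot(b[1:], y[::-1][:p]))
    return np.array(out)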



Predicting US bank failures: A comparison of logit and data mining models

2017-08-08T03:00:40.862801-05:00

Predicting bank failures is important as it enables bank regulators to take timely actions to prevent bank failures or reduce the cost of rescuing banks. This paper compares the logit model and data mining models in the prediction of bank failures in the USA between 2002 and 2010 using levels and rates of change of 16 financial ratios based on a cross-section sample. The models are estimated for the in-sample period 2002–2009, while data for the year 2010 are used for out-of-sample tests. The results suggest that the logit model predicts bank failures in-sample less precisely than data mining models, but produces fewer missed failures and false alarms out-of-sample.
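
Illustrative sketch (not from the paper): a logit model against one representative data mining model (gradient boosting, chosen here only as a stand-in), scored on out-of-sample missed failures and false alarms. The arrays are synthetic placeholders for the levels and rates of change of the 16 financial ratios.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
X_in, y_in = rng.standard_normal((800, 32)), rng.integers(0, 2, 800)    # in-sample: 2002-2009
X_out, y_out = rng.standard_normal((200, 32)), rng.integers(0, 2, 200)  # out-of-sample: 2010

for model in (LogisticRegression(max_iter=1000), GradientBoostingClassifier()):
    model.fit(X_in, y_in)
    tn, fp, fn, tp = confusion_matrix(y_out, model.predict(X_out)).ravel()
    # fn = missed failures, fp = false alarms -- the out-of-sample criteria
    print(type(model).__name__, "missed failures:", fn, "false alarms:", fp)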



Measuring the market risk of freight rates: A forecast combination approach

2017-08-01T02:15:35.345226-05:00

This paper addresses the issue of freight rate risk measurement via value at risk (VaR) and forecast combination methodologies while focusing on detailed performance evaluation. We contribute to the literature in three ways: First, we reevaluate the performance of popular VaR estimation methods on freight rates amid the adverse economic consequences of the recent financial and sovereign debt crisis. Second, we provide a detailed and extensive backtesting and evaluation methodology. Last, we propose a forecast combination approach for estimating VaR. Our findings suggest that our combination methods produce more accurate estimates for all the sectors under scrutiny, while in some cases they may be viewed as conservative since they tend to overestimate nominal VaR.
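
Illustrative sketch (not from the paper): a minimal combination of two textbook VaR estimators, plus the violation rate from which backtests start. The estimators and equal weights here are illustrative choices, not the paper's combination schemes.

import numpy as np
from scipy.stats import norm

def var_historical(returns, alpha=0.01):
    # Historical-simulation VaR: empirical lower-tail quantile of returns
    return -np.quantile(returns, alpha)

def var_normal(returns, alpha=0.01):
    # Variance-covariance (normal) VaR
    return -(np.mean(returns) + norm.ppf(alpha) * np.std(returns))

def var_combined(returns, alpha=0.01, w=(0.5, 0.5)):
    # Simple weighted combination of the two individual VaR forecasts
    return w[0] * var_historical(returns, alpha) + w[1] * var_normal(returns, alpha)

def violation_rate(realized_returns, var_forecasts):
    # Backtest input: share of days on which the loss exceeds the forecast VaR
    return np.mean(-np.asarray(realized_returns) > np.asarray(var_forecasts))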



Projection of population structure in China using least squares support vector machine in conjunction with a Leslie matrix model

2017-07-26T01:20:33.6529-05:00

China is a populous country that is facing serious aging problems due to the single-child birth policy. Debate is ongoing as to whether the liberalization of the single-child policy to a two-child policy can mitigate China's aging problems without unacceptably increasing the population. The purpose of this paper is to apply machine learning theory to the demographic field and project China's population structure under different fertility policies. The population data employed derive from the fifth and sixth national census records obtained in 2000 and 2010, in addition to the annals published by the China National Bureau of Statistics. Firstly, the sex ratio at birth is estimated according to the total fertility rate based on least squares regression of time series data. Secondly, the age-specific fertility rates and age-specific male/female mortality rates are projected by a least squares support vector machine (LS-SVM) model, which then serve as the input to a Leslie matrix model. Finally, the male/female age-specific population data projected by the Leslie matrix in a given year serve as the input parameters of the Leslie matrix for the following year, and the process is iterated in this manner until reaching the target year. The experimental results reveal that the proposed LS-SVM-Leslie model improves the projection accuracy relative to the conventional Leslie matrix model in terms of the percentage error and mean algebraic percentage error. The results indicate that the total fertility rate should be controlled to around 2.0 to balance concerns about a large population against concerns about an aging population. Therefore, the two-child birth policy should be fully instituted in China. However, the fertility desire of women tends to be low due to the high cost of living and the pressure associated with employment, particularly in metropolitan areas. Thus, additional policies should be implemented to encourage fertility.
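
Illustrative sketch (not from the paper): a single-sex Leslie matrix projection iterated year by year, with the matrix rebuilt each year from that year's age-specific rates; in the paper those rates come from the LS-SVM model and the projection is run separately for males and females.

import numpy as np

def leslie_matrix(fertility, survival):
    # Build a Leslie matrix from age-specific fertility (length n) and
    # survival rates to the next age class (length n-1)
    n = len(fertility)
    L = np.zeros((n, n))
    L[0, :] = fertility                              # first row: births per age class
    L[np.arange(1, n), np.arange(n - 1)] = survival  # subdiagonal: survival
    return L

def project(pop0, fert_by_year, surv_by_year):
    # Iterate pop_{t+1} = L_t @ pop_t until the target year
    pop = pop0.copy()
    for fert, surv in zip(fert_by_year, surv_by_year):
        pop = leslie_matrix(fert, surv) @ pop
    return pop

# Hypothetical example: five age classes projected 10 years ahead
fert = np.array([0.0, 0.1, 0.4, 0.3, 0.0])
surv = np.array([0.99, 0.99, 0.98, 0.95])
pop_target = project(np.full(5, 100.0), [fert] * 10, [surv] * 10)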



A new parsimonious recurrent forecasting model in singular spectrum analysis

2017-07-18T05:45:56.897581-05:00

Singular spectrum analysis (SSA) is a powerful nonparametric method in the area of time series analysis that has shown its capability in different application areas. SSA depends on two main choices: the window length L and the number of eigentriples used for grouping, r. One of the most important issues when analyzing time series is the forecasting of new observations. When using SSA for time series forecasting there are several alternative algorithms, the most widely used being the recurrent forecasting model, which assumes that a given observation can be written as a linear combination of the L−1 previous observations. However, when the window length L is large, the forecasting model is unlikely to be parsimonious. In this paper we propose a new parsimonious recurrent forecasting model that uses an optimal number m (m < L−1) of previous observations.
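
Illustrative sketch (not from the paper): basic SSA recurrent forecasting using all L−1 lags, i.e., the non-parsimonious baseline that this paper improves upon. The linear recurrence coefficients follow the standard SSA formulation; it assumes r < L and a window length well below the series length.

import numpy as np

def ssa_recurrent_forecast(y, L, r, steps):
    # Each new value is a linear combination of the L-1 previous values
    N = len(y)
    K = N - L + 1
    X = np.column_stack([y[i:i + L] for i in range(K)])   # trajectory matrix, X[i,j] = y[i+j]
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Ur = U[:, :r]                                         # leading r eigentriples
    # Rank-r reconstruction followed by diagonal (anti-diagonal) averaging
    Xr = Ur @ np.diag(s[:r]) @ Vt[:r]
    rec = np.array([np.mean(Xr[::-1].diagonal(k)) for k in range(-(L - 1), K)])
    # Linear recurrence coefficients from the last row of the left singular vectors
    pi = Ur[-1]
    nu2 = pi @ pi
    R = (Ur[:-1] @ pi) / (1 - nu2)                        # coefficients for the L-1 lags
    series = list(rec)
    for _ in range(steps):
        series.append(np.dot(R, series[-(L - 1):]))
    return np.array(series[-steps:])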



Forecasting house prices in OECD economies

2017-07-14T05:50:32.288277-05:00

In this paper, we forecast real house price growth of 16 OECD countries using information from domestic macroeconomic indicators and global measures of the housing market. Consistent with the findings for the US housing market, we find that the forecasts from an autoregressive model dominate the forecasts from the random walk model for most of the countries in our sample. More importantly, we find that the forecasts from a bivariate model that includes economically important domestic macroeconomic variables and two global indicators of the housing market significantly improve upon the univariate autoregressive model forecasts. Among all the variables, the model that includes the country's domestic interest rates delivers the lowest mean square forecast error for most of the countries. The country's income, industrial production, and stock markets are also found to have valuable information about future movements in real house price growth. There is also some evidence supporting the influence of global housing price growth in out-of-sample forecasting of real house price growth in these OECD countries.



Regional, individual and political determinants of FOMC members' key macroeconomic forecasts

2017-07-09T23:56:07.925026-05:00

We study Federal Open Market Committee members' individual forecasts of inflation and unemployment in the period 1992–2004. Our results imply that Governors and Bank presidents forecast differently, with Governors submitting lower inflation and higher unemployment rate forecasts than bank presidents. For Bank presidents we find a regional bias, with higher district unemployment rates being associated with lower inflation and higher unemployment rate forecasts. Bank presidents' regional bias is more pronounced during the year prior to their elections or for nonvoting bank presidents. Career backgrounds or political affiliations also affect individual forecast behavior.



Multi-step forecasting in the presence of breaks

2017-07-06T23:01:02.947008-05:00

This paper analyzes the relative performance of multi-step AR forecasting methods in the presence of breaks and data revisions. Our Monte Carlo simulations indicate that the type and timing of the break affect the relative accuracy of the methods. The iterated autoregressive method typically produces more accurate point and density forecasts than the alternative multi-step AR methods in unstable environments, especially if the parameters are subject to small breaks. This result holds regardless of whether data revisions add news or reduce noise. Empirical analysis of real-time US output and inflation series shows that the alternative multi-step methods only episodically improve upon the iterated method.



Short-term salmon price forecasting

2017-07-03T23:20:49.751425-05:00

This study establishes a benchmark for short-term salmon price forecasting. The weekly spot price of Norwegian farmed Atlantic salmon is predicted 1–5 weeks ahead using data from 2007 to 2014. Sixteen alternative forecasting methods are considered, ranging from classical time series models to customized machine learning techniques to salmon futures prices. The best predictions are delivered by the k-nearest neighbors method for 1 week ahead; by a vector error correction model estimated using elastic net regularization for 2 and 3 weeks ahead; and by futures prices for 4 and 5 weeks ahead. While the nominal gains in forecast accuracy over a naïve benchmark are small, the economic value of the forecasts is considerable. Using a simple trading strategy to time sales based on price forecasts could increase the net profit of a salmon farmer by around 7%.



Comparison of forecasting performances: Does normalization and variance stabilization method beat GARCH(1,1)-type models? Empirical evidence from the stock markets

2017-06-16T05:25:50.895255-05:00

In this paper, we present a comparison between the forecasting performances of the normalization and variance stabilization method (NoVaS) and the GARCH(1,1), EGARCH(1,1), and GJR-GARCH(1,1) models. The aim of the study is to compare the out-of-sample forecasting performances of these models and to show that the NoVaS method outperforms GARCH(1,1)-type models in this respect. Unlike earlier work based on the normal and Student's t distributions, we study the out-of-sample forecasting performances of the GARCH(1,1)-type models and the NoVaS method based on the generalized error distribution. A further distinguishing feature of the study is the comparison of forecasting performance on return series calculated both logarithmically and arithmetically. For comparing out-of-sample forecasting performances, we focus on several datasets: the S&P 500 series and the logarithmic and arithmetic BİST 100 return series. The key result of our analysis is that the NoVaS method delivers better out-of-sample forecasting performance than the GARCH(1,1)-type models. This result can offer useful guidance in model building for out-of-sample forecasting purposes aimed at improving forecasting accuracy.
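
Illustrative sketch (not from the paper): fitting a GARCH(1,1) model with a generalized error distribution using the third-party arch package; vol="EGARCH" or o=1 would give the EGARCH and GJR-GARCH variants. The returns are synthetic placeholders for the S&P 500 or BİST 100 series.

import numpy as np
from arch import arch_model

rng = np.random.default_rng(2)
returns = rng.standard_normal(1000)      # stand-in for percent returns

# GARCH(1,1) with generalized error distribution
am = arch_model(returns, vol="GARCH", p=1, q=1, dist="ged")
res = am.fit(disp="off")
fcast = res.forecast(horizon=5)
print(fcast.variance.iloc[-1])           # 1- to 5-step-ahead variance forecasts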



Does a lot help a lot? Forecasting stock returns with pooling strategies in a data-rich environment

2017-05-24T03:55:42.660102-05:00

A variety of recent studies provide a skeptical view on the predictability of stock returns. Empirical evidence shows that most prediction models suffer from a loss of information, model uncertainty, and structural instability by relying on low-dimensional information sets. In this study, we evaluate the predictive ability of various recently refined forecasting strategies that handle these issues by incorporating information from many potential predictor variables simultaneously. We investigate whether forecasting strategies that (i) combine information and (ii) combine individual forecasts are useful to predict US stock returns, that is, the market excess return and the size, value, and momentum premiums. Our results show that methods combining information have remarkable in-sample predictive ability. However, their out-of-sample performance suffers from highly volatile forecast errors. Forecast combinations face a better bias–efficiency trade-off, yielding a consistently superior forecast performance for the market excess return and the size premium even after the 1970s.



Yield curve forecast combinations based on bond portfolio performance

2017-05-12T05:50:39.134681-05:00

We propose an economically motivated forecast combination strategy in which model weights are related to portfolio returns obtained by a given forecast model. An empirical application based on an optimal mean–variance bond portfolio problem is used to highlight the advantages of the proposed approach with respect to combination methods based on statistical measures of forecast accuracy. We compute average net excess returns, standard deviation, and the Sharpe ratio of bond portfolios obtained with nine alternative yield curve specifications, as well as with 12 different forecast combination strategies. Return-based forecast combination schemes clearly outperformed approaches based on statistical measures of forecast accuracy in terms of economic criteria. Moreover, return-based approaches that dynamically select only the model with highest weight each period and discard all other models delivered even better results, evidencing not only the advantages of trimming forecast combinations but also the ability of the proposed approach to detect best-performing models. To analyze the robustness of our results, different levels of risk aversion and a different dataset are considered.
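
Illustrative sketch (not from the paper): one plausible way to map past portfolio returns into combination weights, including the trimmed variant that keeps only the best-performing model. The exponential weighting form is an assumption for illustration; the paper's exact weighting scheme is not specified in the abstract.

import numpy as np

def return_based_weights(past_returns, eta=1.0, trim=False):
    # Map each model's realized portfolio return into a combination weight;
    # trim=True puts all weight on the single best-performing model
    past_returns = np.asarray(past_returns, dtype=float)
    if trim:
        w = np.zeros_like(past_returns)
        w[past_returns.argmax()] = 1.0
        return w
    z = np.exp(eta * (past_returns - past_returns.max()))  # numerically stable
    return z / z.sum()

# Hypothetical example: three yield curve models with past returns of 2%, 5%, -1%
w = return_based_weights([0.02, 0.05, -0.01])
# combined forecast = array of the three model forecasts dotted with w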



The informational content of unconventional monetary policy on precious metal markets

2017-03-31T06:05:29.898999-05:00

This paper investigates the informational content of unconventional monetary policies and its effect on commodity markets, adopting a nonlinear approach for modeling volatility. The main question addressed is how announcements concerning monetary easing by the Bank of England, the Bank of Japan, and the European Central Bank (ECB) affect two major commodities: gold and silver. Our empirical evidence, based on daily and high-frequency data, suggests that relevant information causes ambiguous valuation adjustments as well as stabilization or destabilization effects. Specifically, there is strong evidence that the Japanese central bank strengthens the precious metal markets by increasing their returns and by causing stabilization effects, in contrast to the ECB, whose announcements have the opposite effect, mainly due to the heterogeneous expectations of investors within these markets. These asymmetries in the central banks' effects on the gold and silver risk–return profile imply that the informational content of the ECB's unconventional monetary easing runs counter to its stated mission, adding uncertainty to precious metal markets.



Forecasting US interest rates and business cycle with a nonlinear regime switching VAR model

2017-03-14T02:45:35.842451-05:00

This paper introduces a regime switching vector autoregressive model with time-varying regime probabilities, where the regime switching dynamics is described by an observable binary response variable predicted simultaneously with the variables subject to regime changes. Dependence on the observed binary variable distinguishes the model from various previously proposed multivariate regime switching models, facilitating a handy simulation-based multistep forecasting method. An empirical application shows a strong bidirectional predictive linkage between US interest rates and NBER business cycle recession and expansion periods. Due to the predictability of the business cycle regimes, the proposed model yields superior out-of-sample forecasts of the US short-term interest rate and the term spread compared with the linear and nonlinear vector autoregressive (VAR) models, including the Markov switching VAR model.



Nonlinearities in the CAPM: Evidence from Developed and Emerging Markets

2016-01-25T21:26:29.828417-05:00

This paper examines the forecasting ability of nonlinear specifications of the market model. We propose a conditional two-moment market model with time-varying systematic covariance (beta) risk in the form of a mean-reverting process of the state-space model, estimated via the Kalman filter algorithm. In addition, we account for the systematic component of co-skewness and co-kurtosis by considering higher moments. The analysis is implemented using data from the stock indices of several developed and emerging stock markets. The empirical findings favour the time-varying market model approaches, which outperform linear model specifications in terms of both model fit and predictability. More precisely, higher moments are necessary for datasets that involve structural changes and/or market inefficiencies, which are common in most of the emerging stock markets. Copyright © 2016 John Wiley & Sons, Ltd.
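
Illustrative sketch (not from the paper): a scalar Kalman filter for a mean-reverting time-varying beta. In practice the hyperparameters (phi, mu, q, h) would be estimated by maximum likelihood, and the paper additionally conditions on higher moments, which this sketch omits.

import numpy as np

def kalman_tv_beta(r, rm, phi=0.98, mu=1.0, q=1e-4, h=1e-2, beta0=1.0, p0=1.0):
    # State:       beta_t = mu + phi * (beta_{t-1} - mu) + w_t,  w ~ N(0, q)
    # Observation: r_t    = beta_t * rm_t + v_t,                 v ~ N(0, h)
    beta, P = beta0, p0
    out = []
    for rt, rmt in zip(r, rm):
        # Predict
        beta_pred = mu + phi * (beta - mu)
        P_pred = phi ** 2 * P + q
        # Update
        S = rmt ** 2 * P_pred + h          # innovation variance
        K = P_pred * rmt / S               # Kalman gain
        beta = beta_pred + K * (rt - rmt * beta_pred)
        P = (1 - K * rmt) * P_pred
        out.append(beta)
    return np.array(out)                   # filtered beta path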



A Comparison of the Forecasting Ability of Immediate Price Impact Models

2016-03-02T20:32:07.120303-05:00

As a consequence of recent technological advances and the proliferation of algorithmic and high-frequency trading, the cost of trading in financial markets has irrevocably changed. One important change, known as price impact, relates to how trading affects prices. Price impact represents the largest cost associated with trading. Forecasting price impact is very important as it can provide estimates of trading profits after costs and also suggest optimal execution strategies. Although several models have recently been developed which may forecast the immediate price impact of individual trades, limited work has been done to compare their relative performance. We provide a comprehensive performance evaluation of these models and test for statistically significant outperformance amongst candidate models using out-of-sample forecasts. We find that normalizing price impact by its average value significantly enhances the performance of traditional non-normalized models as the normalization factor captures some of the dynamics of price impact. Copyright © 2016 John Wiley & Sons, Ltd.



The US Dollar/Euro Exchange Rate: Structural Modeling and Forecasting During the Recent Financial Crises

2016-07-21T21:10:50.4521-05:00

The paper investigates the determinants of the US dollar/euro exchange rate within the framework of the asset pricing theory of exchange rate determination, which posits that current exchange rate fluctuations are determined by the entire path of current and future revisions in expectations about fundamentals. In this perspective, we innovate by conditioning on the Fama–French and Carhart risk factors, which directly measure changing market expectations about the economic outlook, as well as on new financial condition indexes and macroeconomic variables. The macro-finance augmented econometric model has remarkable in-sample and out-of-sample predictive ability, largely outperforming a standard autoregressive specification. We also document a stable relationship between the US dollar/euro Carhart momentum conditional correlation (CCW) and the euro area business cycle. CCW signals a progressive weakening in economic conditions since June 2014, consistent with the scattered recovery from the sovereign debt crisis and the new Greek solvency crisis that erupted in late spring/early summer 2015. Copyright © 2016 John Wiley & Sons, Ltd.



Exploiting Spillovers to Forecast Crashes

2016-08-18T05:00:48.418333-05:00

We develop Hawkes models in which events are triggered through self-excitation as well as cross-excitation. We examine whether incorporating cross-excitation improves the forecasts of extremes in asset returns compared to only self-excitation. The models are applied to US stocks, bonds and dollar exchange rates. We predict the probability of crashes in the series and the value at risk (VaR) over a period that includes the financial crisis of 2008 using a moving window. A Lagrange multiplier test suggests the presence of cross-excitation for these series. Out-of-sample, we find that the models that include spillover effects forecast crashes and the VaR significantly more accurately than the models without these effects. Copyright © 2016 John Wiley & Sons, Ltd.
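
Illustrative sketch (not from the paper): the conditional intensity of a bivariate Hawkes process with exponential kernels, where the off-diagonal excitation parameters are the cross-excitation (spillover) terms the paper tests for. Event times and parameter values are hypothetical.

import numpy as np

def hawkes_intensity(t, events, mu, alpha, beta):
    # lambda_i(t) = mu_i + sum_j sum_{t_k^j < t} alpha[i][j] * exp(-beta * (t - t_k^j))
    # events: list of two arrays of past event times; alpha[i][j] scales how
    # events in series j excite series i (off-diagonal = spillover)
    lam = np.array(mu, dtype=float)
    for i in range(2):
        for j in range(2):
            past = events[j][events[j] < t]
            lam[i] += alpha[i][j] * np.sum(np.exp(-beta * (t - past)))
    return lam

# Hypothetical crash times (in days) for stocks and bonds
stocks = np.array([1.0, 3.5, 4.0])
bonds = np.array([2.0, 4.1])
print(hawkes_intensity(5.0, [stocks, bonds],
                       mu=[0.1, 0.1],
                       alpha=[[0.5, 0.2],   # 0.2: bonds -> stocks spillover
                              [0.3, 0.4]],  # 0.3: stocks -> bonds spillover
                       beta=1.0))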



Forecasting the Daily Time-Varying Beta of European Banks During the Crisis Period: Comparison Between GARCH Models and the Kalman Filter

2016-09-14T02:26:03.389057-05:00

The intention of this paper is to empirically forecast the daily betas of sixteen European banks by means of four generalized autoregressive conditional heteroscedasticity (GARCH) models and the Kalman filter method during the pre-global financial crisis period and the crisis period. The four GARCH models employed are BEKK GARCH, DCC GARCH, DCC-MIDAS GARCH, and Gaussian-copula GARCH. The data consist of daily stock prices from 2001 to 2013 from two large banks each from Austria, Belgium, Greece, Holland, Ireland, Italy, Portugal, and Spain. We apply the rolling forecasting method and model confidence sets (MCS) to compare the daily forecasting ability of the five models during one month of the pre-crisis (January 2007) and crisis (January 2013) periods. Based on the MCS results, the BEKK model proves the best in the January 2007 period, while the Kalman filter clearly outperforms the other models during the January 2013 period. The results have implications regarding the choice of model during different periods by practitioners and academics. Copyright © 2016 John Wiley & Sons, Ltd.



Modelling and Trading the English and German Stock Markets with Novelty Optimization Techniques

2016-11-06T22:15:29.455231-05:00

The motivation for this paper was the introduction of novel short-term models to trade the FTSE 100 and DAX 30 exchange-traded fund (ETF) indices. The paper makes several major contributions: the introduction of an input selection criterion for use with an expansive universe of inputs, a hybrid combination of a particle swarm optimizer (PSO) with radial basis function (RBF) neural networks, the application of a PSO algorithm to a traditional autoregressive moving average (ARMA) model, the application of a PSO algorithm to a higher-order neural network and, finally, the introduction of a multi-objective algorithm to optimize statistical and trading performance when trading an index. All the machine learning-based methodologies and the conventional models are adapted and optimized to model the index. A PSO algorithm is used to optimize the weights in a traditional RBF neural network, in a higher-order neural network (HONN), and in the AR and MA terms of an ARMA model. To check the statistical and empirical accuracy of the novel models, we benchmark them against a traditional HONN, an ARMA model, a moving average convergence/divergence (MACD) model, and a naïve strategy. More specifically, the trading and statistical performance of all models is investigated in a forecast simulation of the FTSE 100 and DAX 30 ETF time series over the period January 2004 to December 2015, using the last 3 years for out-of-sample testing. Finally, the empirical and statistical results indicate that the PSO-RBF model outperforms all other examined models in terms of trading accuracy and profitability, even with mixed inputs and with only autoregressive inputs. Copyright © 2016 John Wiley & Sons, Ltd.
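
Illustrative sketch (not from the paper): a minimal textbook particle swarm optimizer that can be pointed at any loss function, for example the in-sample error of RBF network weights or of AR and MA coefficients; the paper's hybrid multi-objective setup is not reproduced here.

import numpy as np

def pso(loss, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    # Each particle is a candidate weight vector; velocities are pulled toward
    # each particle's personal best and the swarm's global best
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([loss(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.array([loss(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Hypothetical usage: minimize in-sample MSE of a 5-parameter forecasting model
best, best_loss = pso(lambda p: np.sum(p ** 2), dim=5)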



Long Memory of Financial Time Series and Hidden Markov Models with Time-Varying Parameters

2016-09-13T01:05:46.520459-05:00

Hidden Markov models are often used to model daily returns and to infer the hidden state of financial markets. Previous studies have found that the estimated models change over time, but the implications of the time-varying behavior have not been thoroughly examined. This paper presents an adaptive estimation approach that allows for the parameters of the estimated models to be time varying. It is shown that a two-state Gaussian hidden Markov model with time-varying parameters is able to reproduce the long memory of squared daily returns that was previously believed to be the most difficult fact to reproduce with a hidden Markov model. Capturing the time-varying behavior of the parameters also leads to improved one-step density forecasts. Finally, it is shown that the forecasting performance of the estimated models can be further improved using local smoothing to forecast the parameter variations. Copyright © 2016 John Wiley & Sons, Ltd.
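
Illustrative sketch (not from the paper): a static two-state Gaussian hidden Markov model fitted with the third-party hmmlearn package, i.e., the constant-parameter baseline; the paper's adaptive time-varying estimation and local smoothing of the parameters are not shown. The returns are synthetic, with a calm and a turbulent regime.

import numpy as np
from hmmlearn.hmm import GaussianHMM   # third-party: pip install hmmlearn

rng = np.random.default_rng(3)
returns = np.concatenate([rng.normal(0.0005, 0.005, 750),    # calm regime
                          rng.normal(-0.001, 0.02, 250)])    # turbulent regime
returns = returns.reshape(-1, 1)

hmm = GaussianHMM(n_components=2, covariance_type="full", n_iter=200)
hmm.fit(returns)
states = hmm.predict(returns)          # inferred hidden market state per day
print(hmm.means_.ravel(), np.sqrt(hmm.covars_.ravel()))   # per-state mean, std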