Subscribe: Journal of Forecasting
http://www3.interscience.wiley.com/rss/journal/2966
Language: English
Tags:
analysis, based, data, forecast, forecasting, information, method, model, models, new, paper, sample, series, time, volatility
Preview: Journal of Forecasting




Wiley Online Library : Journal of Forecasting



Published: 2018-03-01T00:00:00-05:00

 



Forecasting the duration of short-term deflation episodes

2018-02-19T23:15:51.620653-05:00

The paper proposes a simulation-based approach to multistep probabilistic forecasting, applied to predicting the probability and duration of negative inflation. The essence of the approach is counting simulated runs, drawn from a multivariate distribution representing the probabilistic forecasts, that enter the negative inflation regime. The marginal distributions of the forecasts are estimated from the series of past forecast errors, and the joint distribution is obtained with a multivariate copula. The technique is applied to estimating the probability of negative inflation in China and its expected duration, with the marginal distributions computed by fitting weighted skew-normal and two-piece normal distributions to autoregressive moving average ex post forecast errors and the dependence captured by a multivariate Student t copula.
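A minimal sketch of the general idea, assuming hypothetical point forecasts, skew-normal marginal parameters, and an AR(1)-style error correlation matrix (none of these numbers come from the paper): draw multistep inflation paths from a Student t copula with skew-normal margins, then count runs of negative simulated inflation to estimate the probability and expected duration of a deflation episode.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
H, n_sims, df = 12, 50_000, 5                        # forecast horizon, simulated paths, t-copula dof
point_forecast = np.linspace(0.6, 0.2, H)            # hypothetical inflation point forecasts (%)
skew_params = [(-3.0, 0.0, 0.8)] * H                 # (shape, loc, scale) per horizon, e.g. fitted to past errors
R = 0.6 ** np.abs(np.subtract.outer(np.arange(H), np.arange(H)))   # illustrative error correlation matrix

# draw uniforms with Student t copula dependence
z = rng.multivariate_normal(np.zeros(H), R, size=n_sims)
w = stats.chi2.rvs(df, size=n_sims, random_state=rng) / df
u = stats.t.cdf(z / np.sqrt(w)[:, None], df)

# map to skew-normal forecast-error margins and build inflation paths
errors = np.column_stack([stats.skewnorm.ppf(u[:, h], *skew_params[h]) for h in range(H)])
paths = point_forecast + errors

def longest_run(neg):
    # length of the longest run of True values in a boolean vector
    edges = np.flatnonzero(np.diff(np.r_[0, neg.astype(int), 0]))
    return int((edges[1::2] - edges[::2]).max()) if edges.size else 0

negative = paths < 0
enters = negative.any(axis=1)
durations = [longest_run(neg) for neg in negative[enters]]
print("P(negative inflation within the horizon):", enters.mean())
print("expected duration of a deflation spell (months):", np.mean(durations))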



What does the tail of the distribution of current stock prices tell us about future economic activity?

2018-02-13T23:30:35.375048-05:00

This paper proposes three leading indicators of economic conditions estimated using current stock returns. The assumption underlying our approach is that current asset prices reflect all the available information about future states of the economy. Each of the proposed indicators is related to the tail of the cross-sectional distribution of stock returns. The results show that the leading indicators are strongly correlated with future economic conditions and usually make better out-of-sample predictions than two traditional competitors (the random walk and the average of previous observations). Furthermore, quantile regressions reveal that the leading indicators have strong connections with low future economic activity.



Scenario planning: An investigation of the construct and its measurement

2018-02-09T05:00:47.379309-05:00

Scenario-planning academics and practitioners have emphasized for more than three decades the importance of this method in dealing with environmental uncertainty. However, no valid scale has been available to help organizational leaders apply it in practice. Our review of prior studies identifies problems related to the conceptualization, reliability, and validity of this construct. We address these concerns by developing and validating a measure of scenario planning based on Churchill's paradigm (Journal of Marketing Research, 1979, 16, 64–73). Our data analysis draws on a sample of 133 managers operating in the healthcare field in France. To validate our scale, we used three approaches: first, an exploratory factor analysis; second, an examination of the psychometric properties of all dimensions; and third, a confirmatory factor analysis. The results of this study indicate that scenario planning is a multidimensional construct composed of three dimensions: information acquisition, knowledge dissemination, and scenario development and strategic choices.



New evidence on the robust identification of news shocks: Role of revisions in utilization-adjusted TFP series and term structure data

2018-02-09T04:45:42.991907-05:00

Data revisions and the selection of appropriate forward-looking variables have a major impact on the identification of news shocks and on the quality of research findings derived from structural vector autoregression (SVAR) estimation. This paper revisits news shocks to identify the role of different vintages of the total factor productivity (TFP) series and of the term structure of interest rates as major prognosticators of future economic growth. There is a growing strand of literature on the use of the utilization-adjusted TFP series provided by Fernald (Federal Reserve Bank of San Francisco, Working Paper Series, 2014) for the identification of news shocks. We reestimate Barsky and Sims' (Journal of Monetary Economics, 2011, 58, 273–289) empirical analysis employing the 2007 and 2015 vintages of the TFP data. We find substantial quantitative as well as qualitative differences among the impulse response functions when using the 2007 and 2015 vintages. Output and hours initially decline, followed by a quick reversal of both variables. In sharp contrast to the results obtained with the 2007 vintage of TFP data, the results obtained with the 2015 vintage show that output and hours increase in response to a positive TFP shock. When term structure data are included in our VAR specification, the surprise technology shock and the news shock account for 97% and 92% of the forecast error variance in total TFP and total output, respectively. We find that revisions in the TFP series over time ultimately affect the conclusions regarding the effects of news shocks on business cycles. Our results support the notion that term structure data help identify news shocks better than other forward-looking variables do.



The influence of transparency on budget forecast deviations in municipal governments

2018-02-06T21:45:28.248318-05:00

This paper analyzes the impact of transparency on fiscal performance. Our sample covers the 100 largest Spanish municipalities for the years 2008, 2009, 2010, 2012, and 2014. The results show that the level of municipal transparency influences budget forecast deviations in tax revenues and current expenditures. On the one hand, less transparent municipalities overestimate their revenues, allowing them to provide more public services without an immediate increase in taxes. On the other hand, these local governments, aware of the overestimation of their revenues, may spend less than they budgeted. More transparent municipalities, meanwhile, seem to be more prudent in their revenue estimations, since they underestimate their revenues, meaning they can spend more than projected. Our results also show that the behavior of politicians is influenced by the phase of the electoral cycle in which they find themselves, with politicians overestimating expenditures in the year before an election.



Quantile estimators with orthogonal pinball loss function

2018-02-06T21:15:33.080398-05:00

To guarantee stable quantile estimation even for noisy data, a novel loss function and novel quantile estimators are developed by introducing the concept of an orthogonal loss that accounts for noise in both the response and the explanatory variables. In particular, the pinball loss used in classical quantile estimators is extended into a novel orthogonal pinball loss (OPL) by replacing the vertical loss with an orthogonal loss. Accordingly, linear quantile regression (QR) and support vector machine quantile regression (SVMQR) can be extended into novel OPL-based QR and OPL-based SVMQR models. An empirical study on 10 publicly available datasets statistically verifies the superiority of the two OPL-based models over their respective original forms in terms of prediction accuracy and quantile property, especially for extreme quantiles. Furthermore, the novel OPL-based SVMQR model, combining OPL with artificial intelligence (AI), outperforms all benchmark models and can be used as a promising quantile estimator, especially for noisy data.
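A minimal sketch of one possible reading of the orthogonal pinball loss for linear quantile regression, fitted by direct numerical minimization on simulated noisy data; the dataset, quantile level, and optimizer settings are illustrative assumptions, not the paper's.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, p, tau = 400, 3, 0.9                                # sample size, predictors, target quantile
X = rng.normal(size=(n, p)) + rng.normal(scale=0.1, size=(n, p))   # noisy explanatory variables
y = X @ np.array([1.0, -0.5, 2.0]) + rng.normal(scale=1.0, size=n)

def opl(theta, X, y, tau):
    # pinball asymmetry applied to the signed orthogonal distance to the hyperplane
    b0, b = theta[0], theta[1:]
    d = (y - X @ b - b0) / np.sqrt(1.0 + b @ b)
    return np.mean(np.maximum(tau * d, (tau - 1.0) * d))

fit = minimize(opl, np.zeros(p + 1), args=(X, y, tau), method="Nelder-Mead",
               options={"maxiter": 20_000, "xatol": 1e-6, "fatol": 1e-8})
print("OPL quantile regression coefficients [intercept, slopes] at tau = 0.9:", np.round(fit.x, 3))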



Volatility spillover from the US to international stock markets: A heterogeneous volatility spillover GARCH model

2018-02-02T00:41:36.428561-05:00

A recent study by Rapach, Strauss, and Zhou (Journal of Finance, 2013, 68(4), 1633–1662) shows that US stock returns can provide predictive content for international stock returns. We extend their work from a volatility perspective. We propose a model, namely a heterogeneous volatility spillover–generalized autoregressive conditional heteroskedasticity model, to investigate volatility spillover. The model specification is parsimonious and can be used to analyze the time variation property of the spillover effect. Our in-sample evidence shows the existence of strong volatility spillover from the US to five major stock markets and indicates that the spillover was stronger during business cycle recessions in the USA. Out-of-sample results show that accounting for spillover information from the USA can significantly improve the forecasting accuracy of international stock price volatility.



Forecasting realized volatility of oil futures market: A new insight

2018-01-24T01:21:08.470168-05:00

In this study we propose several new variables, such as continuous realized semivariance and signed jump variations incorporating jump tests, and construct a new heterogeneous autoregressive model for realized volatility to investigate the impact these new variables have on forecasting oil price volatility. In-sample results indicate that past negative returns have greater effects on future volatility than positive returns do, and that our new signed jump variations have a significantly negative influence on future volatility. Out-of-sample results with several robustness checks demonstrate that our proposed models not only forecast volatility better but also deliver larger economic value than the existing models discussed in this paper.
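A rough illustration, on simulated intraday returns, of how the signed semivariances and the signed jump variation can be computed and added to a HAR-type regression; the exact specification in the paper may differ.

import numpy as np

rng = np.random.default_rng(2)
n_days, n_intraday = 500, 78                            # e.g. 5-minute returns per trading day
r = rng.normal(scale=0.001, size=(n_days, n_intraday))  # hypothetical intraday oil futures returns

rv = (r ** 2).sum(axis=1)                               # realized variance
rs_pos = np.where(r > 0, r ** 2, 0.0).sum(axis=1)       # positive realized semivariance
rs_neg = np.where(r < 0, r ** 2, 0.0).sum(axis=1)       # negative realized semivariance
sj = rs_pos - rs_neg                                    # signed jump variation

rows, target = [], []
for t in range(21, n_days - 1):                         # need 22 past days for the monthly term
    rows.append([1.0, rv[t], rv[t - 4:t + 1].mean(), rv[t - 21:t + 1].mean(), sj[t]])
    target.append(rv[t + 1])
X, y = np.array(rows), np.array(target)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)            # HAR-RV regression augmented with the signed jump
print("coefficients [const, RV_day, RV_week, RV_month, SJ]:", np.round(beta, 4))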



Modeling European industrial production with multivariate singular spectrum analysis: A cross-industry analysis

2018-01-22T21:36:20.972389-05:00

In this paper, an optimized multivariate singular spectrum analysis (MSSA) approach is proposed to find leading indicators of cross-industry relations among 24 monthly, seasonally unadjusted industrial production (IP) series for the German, French, and UK economies. Both recurrent and vector forecasting algorithms of horizontal MSSA (HMSSA) are considered. The results from the proposed multivariate approach are compared with those obtained via the optimized univariate singular spectrum analysis (SSA) forecasting algorithm to determine the statistical significance of each outcome. The data are rigorously tested for normality, the seasonal unit root hypothesis, and structural breaks. The results are presented so that users can not only identify the most appropriate model based on the aim of the analysis, but also easily identify the leading indicators for each IP variable in each country. Our findings show that, for all three countries, forecasts from the proposed MSSA algorithm outperform the optimized SSA algorithm in over 70% of cases. Accordingly, this new approach succeeds in identifying leading indicators and is a viable option for selecting the SSA choices L and r that minimize a loss function.



Restructuring performance prediction with a rebalanced and clustered support vector machine

2018-01-10T07:35:29.132407-05:00

Whether asset restructuring can improve firm performance has been discussed for decades. Variation in the stock price or in financial ratios is typically used as the dependent variable measuring short- or long-term effectiveness, comparing the periods before and after asset restructuring, and the evidence is mixed; this calls for a forward-looking approach. This work pioneers forecasting the effectiveness of asset restructuring with a rebalanced and clustered support vector machine (RCS). The profitability variation 1 year before and after asset restructuring is the dependent variable, and the financial indicators in the year of the restructuring are the independent variables. Specially treated listed companies, which frequently adopt asset restructuring, serve as the research sample. In modeling, the skewed distribution of firms that do and do not achieve performance improvement through asset restructuring is handled by rebalancing, and historical restructuring experience similar to the current case is selected by clustering. With the help of rebalancing and clustering, a support vector machine is constructed for prediction, alongside forecasting models based on multivariate discriminant analysis, logistic regression, probit regression, and case-based reasoning; these models in their standalone form serve as benchmarks. The empirical results demonstrate the applicability of the RCS for forecasting the effectiveness of asset restructuring.
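A hedged sketch of how such a rebalanced and clustered SVM pipeline might look, using simulated data and scikit-learn; the oversampling scheme, number of clusters, and kernel are illustrative assumptions rather than the authors' settings.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(11)
n, p = 600, 12                                         # restructuring cases, financial indicators
X = rng.normal(size=(n, p))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n) > 1.2).astype(int)   # improved or not
x_new = rng.normal(size=(1, p))                        # the current restructuring case to predict

# rebalancing: oversample the minority class until the classes are of equal size
minority = np.flatnonzero(y == (0 if (y == 1).sum() > (y == 0).sum() else 1))
extra = rng.choice(minority, size=abs((y == 1).sum() - (y == 0).sum()), replace=True)
Xb, yb = np.vstack([X, X[extra]]), np.concatenate([y, y[extra]])

# clustering: keep only the historical experience most similar to the current case
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(Xb)
same_cluster = km.labels_ == km.predict(x_new)[0]

svm = SVC(kernel="rbf", C=1.0).fit(Xb[same_cluster], yb[same_cluster])
print("predicted performance improvement:", int(svm.predict(x_new)[0]))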



Value-at-risk under market shifts through highly flexible models

2018-01-10T03:15:35.715897-05:00

Managing market risk under unknown future shocks is a critical issue for policymakers, investors, and professional risk managers. Despite important developments in market risk modeling and forecasting over recent years, market participants are still skeptical about the ability of existing econometric designs to accurately predict potential losses, particularly in the presence of hidden structural changes. In this paper, we introduce Markov-switching APARCH models under the skewed generalized t and the generalized hyperbolic distributions to fully capture the fuzzy dynamics and stylized features of financial market returns and to generate value-at-risk (VaR) forecasts. Our empirical analysis of six major stock market indexes shows the superiority of the proposed models in detecting and forecasting unobservable shocks on market volatility, and in calculating daily capital charges based on VaR forecasts.



Strategic asset allocation by mixing shrinkage, vine copula and market equilibrium

2018-01-03T23:30:41.112131-05:00

We propose a new portfolio optimization method combining the merits of shrinkage estimation, the vine copula structure, and the Black–Litterman model. It is useful for investors who want to satisfy three objectives simultaneously: estimation sensitivity, appreciation of asymmetric risks, and portfolio stability. A typical investor with such objectives is a sovereign wealth fund (SWF). We use China's SWF as an example to empirically test the method on a 15-asset strategic asset allocation problem. Robustness tests using subsamples not only show the method's overall effectiveness but also confirm that each component functions as expected.



The versatility of spectrum analysis for forecasting financial time series

2017-12-26T06:30:32.786764-05:00

The versatility of one-dimensional discrete wavelet analysis, combined with wavelet and Burg extensions, for forecasting financial time series with distinctive properties is illustrated with market data. Any time series of financial assets may be decomposed into simpler signals, called approximations and details, in the framework of one-dimensional discrete wavelet analysis. The simplified signals are recomposed after extension. The final output is the forecast time series, which is compared with the observed data. The results show the pertinence of adding spectrum analysis to the battery of tools used by econometricians and quantitative analysts for forecasting economic and financial time series.
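A simplified sketch of the decompose-extend-recompose idea using PyWavelets, with a plain least-squares AR extension standing in for the paper's wavelet and Burg extensions; the series, wavelet, and orders below are illustrative.

import numpy as np
import pywt

rng = np.random.default_rng(10)
x = np.cumsum(rng.normal(size=256)) + np.sin(np.arange(256) / 8)   # hypothetical asset price series
level, horizon, ar_order = 3, 10, 8

def component(coeffs, keep, wavelet="db4"):
    # reconstruct the signal contribution of a single coefficient level
    masked = [c if i == keep else np.zeros_like(c) for i, c in enumerate(coeffs)]
    return pywt.waverec(masked, wavelet)[: len(x)]

def ar_extend(series, order, steps):
    # extend a component with a least-squares AR(order) fit (a simple stand-in for Burg)
    X = np.column_stack([series[i:len(series) - order + i] for i in range(order)])
    beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(X)), X]), series[order:], rcond=None)
    out = list(series)
    for _ in range(steps):
        out.append(beta[0] + beta[1:] @ np.array(out[-order:]))
    return np.array(out[-steps:])

coeffs = pywt.wavedec(x, "db4", level=level)             # one approximation plus 'level' detail signals
forecast = sum(ar_extend(component(coeffs, i), ar_order, horizon) for i in range(len(coeffs)))
print("10-step-ahead forecasts:", np.round(forecast, 2))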



Volatility forecasting of crude oil market: A new hybrid method

2017-12-21T07:45:48.026975-05:00

Given the complex characteristics of crude oil price volatility, a new hybrid forecasting method based on the hidden Markov, exponential generalized autoregressive conditional heteroskedasticity, and least squares support vector machine models is proposed, and the forecasting performance of the new method is compared with that of well-recognized generalized autoregressive conditional heteroskedasticity class and other related forecasting methods. The results indicate that the new hybrid forecasting method can significantly improve forecasting accuracy of crude oil price volatility. Furthermore, the new method has been demonstrated to be more accurate for the forecast of crude oil price volatility particularly in a longer time horizon.



Extracting information shocks from the Bank of England inflation density forecasts

2017-12-10T23:26:22.636169-05:00

This paper shows how to extract the density of information shocks from revisions of the Bank of England's inflation density forecasts. An information shock is defined in this paper as a random variable that contains the set of information made available between two consecutive forecasting exercises and that has been incorporated into a revised forecast for a fixed point event. Studying the moments of these information shocks can be useful in understanding how the Bank has changed its assessment of risks surrounding inflation in the light of new information, and how it has modified its forecasts accordingly. The variance of the information shock is interpreted in this paper as a new measure of ex ante inflation uncertainty that measures the uncertainty that the Bank anticipates information perceived in a particular quarter will pose on inflation. A measure of information absorption that indicates the approximate proportion of the information content in a revised forecast that is attributable to information made available since the last forecast release is also proposed.



Methods for backcasting, nowcasting and forecasting using factor-MIDAS: With an application to Korean GDP

2017-11-29T06:51:09.461281-05:00

We utilize mixed-frequency factor-MIDAS models for the purpose of carrying out backcasting, nowcasting, and forecasting experiments using real-time data. We also introduce a new real-time Korean GDP dataset, which is the focus of our experiments. The methodology that we utilize involves first estimating common latent factors (i.e., diffusion indices) from 190 monthly macroeconomic and financial series using various estimation strategies. These factors are then included, along with standard variables measured at multiple different frequencies, in various factor-MIDAS prediction models. Our key empirical findings are as follows. (i) When using real-time data, factor-MIDAS prediction models outperform various linear benchmark models. Interestingly, the “MSFE-best” MIDAS models contain no autoregressive (AR) lag terms when backcasting and nowcasting; AR terms only begin to play a role in “true” forecasting contexts. (ii) Models that utilize only one or two factors are “MSFE-best” at all forecasting horizons, but not at any backcasting or nowcasting horizons; in these latter contexts, much more heavily parametrized models with many factors are preferred. (iii) Real-time data are crucial for forecasting Korean gross domestic product, and the use of “first available” versus “most recent” data strongly affects model selection and performance. (iv) Recursively estimated models are almost always “MSFE-best,” and models estimated using autoregressive interpolation dominate those estimated using other interpolation methods. (v) Factors estimated using recursive principal component estimation methods have more predictive content than those estimated using a variety of other (more sophisticated) approaches. This result is particularly prevalent for our “MSFE-best” factor-MIDAS models, across virtually all forecast horizons, estimation schemes, and data vintages analyzed.
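A stripped-down sketch of the factor-MIDAS idea on simulated data: one diffusion index is extracted from a monthly panel by principal components and then enters an unrestricted MIDAS regression for quarterly GDP growth together with an autoregressive lag. This is only a schematic stand-in for the real-time models estimated in the paper.

import numpy as np

rng = np.random.default_rng(3)
n_quarters, n_series = 80, 190
f_true = rng.normal(size=3 * n_quarters)                      # latent monthly factor
panel = np.outer(f_true, rng.normal(size=n_series)) + rng.normal(size=(3 * n_quarters, n_series))

# factor extraction: first principal component of the standardized monthly panel
Z = (panel - panel.mean(0)) / panel.std(0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
factor = Z @ Vt[0]                                            # estimated monthly diffusion index

gdp = 0.5 * f_true.reshape(n_quarters, 3).mean(1) + rng.normal(scale=0.3, size=n_quarters)

# unrestricted MIDAS regression: GDP_q on its own lag and the quarter's three monthly factor values
F = factor.reshape(n_quarters, 3)                             # columns: months 1-3 of each quarter
X = np.column_stack([np.ones(n_quarters - 1), gdp[:-1], F[1:]])
y = gdp[1:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("U-MIDAS coefficients [const, GDP(-1), m1, m2, m3]:", np.round(beta, 3))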



Google Trends and the forecasting performance of exchange rate models

2017-11-29T06:50:25.707728-05:00

In this paper, we use Google Trends data for exchange rate forecasting in the context of a broad literature that ties exchange rate movements to macroeconomic fundamentals. The sample covers 11 OECD countries’ exchange rates for the period from January 2004 to June 2014. In out-of-sample forecasting of monthly returns on exchange rates, our findings indicate that Google Trends search query data do a better job than the structural models in predicting the true direction of changes in nominal exchange rates. We also observe that Google Trends-based forecasts are better at picking up the direction of changes in monthly nominal exchange rates after the Great Recession era (2008–2009). Based on the Clark and West inference procedure for testing equal predictive accuracy, we find that the relative performance of Google Trends-based exchange rate predictions against the null of a random walk model is no worse than that of the purchasing power parity model. In contrast, although the monetary model fundamentals beat the random walk null for only one of the 11 currency pairs, with Google Trends predictors we find evidence of better performance for five currency pairs. We believe these findings call for further research into the extra value one can obtain from Google search query data.



Robust forecast aggregation: Fourier L2E regression

2017-10-12T01:40:41.373242-05:00

The Good Judgment Team led by psychologists P. Tetlock and B. Mellers of the University of Pennsylvania was the most successful of five research projects sponsored through 2015 by the Intelligence Advanced Research Projects Activity to develop improved group forecast aggregation algorithms. Each team had at least 10 algorithms under continuous development and evaluation over the 4-year project. The mean Brier score was used to rank the algorithms on approximately 130 questions concerning categorical geopolitical events each year. An algorithm would return aggregate probabilities for each question based on the probabilities provided per question by thousands of individuals, who had been recruited by the Good Judgment Team. This paper summarizes the theorized basis and implementation of one of the two most accurate algorithms at the conclusion of the Good Judgment Project. The algorithm incorporated a number of pre- and postprocessing steps, and relied upon a minimum distance robust regression method called L2E. The algorithm was just edged out by a variation of logistic regression, which has been described elsewhere. Work since the official conclusion of the project has led to an even smaller gap.



Time series forecasting using functional partial least square regression with stochastic volatility, GARCH, and exponential smoothing

2017-10-12T01:36:22.053545-05:00

We propose a method for improving the predictive ability of standard forecasting models used in financial economics. Our approach is based on the functional partial least squares (FPLS) model, which avoids multicollinearity in regression by efficiently extracting information from high-dimensional market data. This ability allows us to incorporate auxiliary variables that improve predictive accuracy. We provide an empirical application of the proposed methodology by predicting the conditional average log return and the volatility of crude oil prices via exponential smoothing, Bayesian stochastic volatility, and GARCH (generalized autoregressive conditional heteroskedasticity) models, respectively. In particular, what we call functional data analysis (FDA) traces are obtained via FPLS regression from both the crude oil returns and auxiliary variables given by the exchange rates of major currencies. For forecast evaluation, we compare the out-of-sample forecasting accuracy of the standard models with FDA traces to the accuracy of the same forecasting models with the observed crude oil returns, as well as principal component regression (PCR) and least absolute shrinkage and selection operator (LASSO) models. We find evidence that the standard models with FDA traces significantly outperform the competing models. Finally, the models are also compared using the test for superior predictive ability and the reality check for data snooping. Our empirical results show that the new methodology significantly improves the predictive ability of standard models in forecasting the latent average log return and the volatility of financial time series.
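A very rough sketch of the two-step idea, with ordinary partial least squares standing in for the functional PLS used to build the FDA traces; the data, split, and component count are simulated placeholders.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n, p = 300, 20                                    # observations, auxiliary exchange-rate series
X = rng.normal(size=(n, p))
oil_ret = X[:, :3] @ np.array([0.4, -0.2, 0.3]) + rng.normal(scale=0.5, size=n)

train = slice(0, 250)
pls = PLSRegression(n_components=2).fit(X[train], oil_ret[train])
trace_train = pls.transform(X[train])             # low-dimensional "trace" of the predictors
trace_test = pls.transform(X[250:])

model = LinearRegression().fit(trace_train, oil_ret[train])   # forecasting model fed with the traces
pred = model.predict(trace_test)
print("out-of-sample MSE with PLS traces:", np.mean((pred - oil_ret[250:]) ** 2))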



Issue Information

2018-02-14T01:31:52.320188-05:00

No abstract is available for this article.



Comparison of forecasting performances: Does normalization and variance stabilization method beat GARCH(1,1)-type models? Empirical evidence from the stock markets

2017-06-16T05:25:50.895255-05:00

In this paper, we present a comparison of the forecasting performances of the normalization and variance stabilization (NoVaS) method and the GARCH(1,1), EGARCH(1,1), and GJR-GARCH(1,1) models. The aim of this study is to compare the out-of-sample forecasting performances of the models used throughout the study and to show that the NoVaS method is better than GARCH(1,1)-type models in this respect. We study the out-of-sample forecasting performances of GARCH(1,1)-type models and the NoVaS method based on the generalized error distribution, rather than the normal and Student's t distributions. A further distinguishing feature of the study is the use of return series calculated both logarithmically and arithmetically when assessing forecasting performance. For comparing out-of-sample forecasting performances, we focus on several datasets, such as the S&P 500 and the logarithmic and arithmetic BİST 100 return series. The key result of our analysis is that the NoVaS method delivers better out-of-sample forecasting performance than GARCH(1,1)-type models. This result can offer useful guidance in model building for out-of-sample forecasting purposes aimed at improving forecasting accuracy.



Short-term salmon price forecasting

2017-07-03T23:20:49.751425-05:00

This study establishes a benchmark for short-term salmon price forecasting. The weekly spot price of Norwegian farmed Atlantic salmon is predicted 1–5 weeks ahead using data from 2007 to 2014. Sixteen alternative forecasting methods are considered, ranging from classical time series models to customized machine learning techniques to salmon futures prices. The best predictions are delivered by the k-nearest neighbors method for 1 week ahead; a vector error correction model estimated using elastic net regularization for 2 and 3 weeks ahead; and futures prices for 4 and 5 weeks ahead. While the nominal gains in forecast accuracy over a naïve benchmark are small, the economic value of the forecasts is considerable. Using a simple trading strategy that times sales based on the price forecasts could increase the net profit of a salmon farmer by around 7%.
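A minimal sketch of a 1-week-ahead k-nearest-neighbours forecast on a simulated weekly price series; the lag length and number of neighbours are illustrative, not the study's tuned values.

import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(5)
price = 40 + np.cumsum(rng.normal(scale=1.0, size=400))    # simulated weekly spot price (NOK/kg)

n_lags, k = 4, 5                                           # illustrative lag window and neighbours
X = np.column_stack([price[i:len(price) - n_lags + i] for i in range(n_lags)])
y = price[n_lags:]                                         # next week's price for each lag window

knn = KNeighborsRegressor(n_neighbors=k).fit(X, y)
one_week_ahead = knn.predict(price[-n_lags:].reshape(1, -1))
print("1-week-ahead price forecast:", round(float(one_week_ahead[0]), 2))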



Forecasting house prices in OECD economies

2017-07-14T05:50:32.288277-05:00

In this paper, we forecast real house price growth of 16 OECD countries using information from domestic macroeconomic indicators and global measures of the housing market. Consistent with the findings for the US housing market, we find that the forecasts from an autoregressive model dominate the forecasts from the random walk model for most of the countries in our sample. More importantly, we find that the forecasts from a bivariate model that includes economically important domestic macroeconomic variables and two global indicators of the housing market significantly improve upon the univariate autoregressive model forecasts. Among all the variables, the mean square forecast error from the model with the country's domestic interest rates has the best performance for most of the countries. The country's income, industrial production, and stock markets are also found to have valuable information about the future movements in real house price growth. There is also some evidence supporting the influence of the global housing price growth in out-of-sample forecasting of real house price growth in these OECD countries.
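A schematic example of the kind of recursive out-of-sample comparison described above, pitting a univariate AR(1) against a bivariate model that adds the domestic interest rate, on simulated data; the data-generating process and sample split are illustrative.

import numpy as np

rng = np.random.default_rng(6)
T = 160                                                 # quarters of simulated data
rate = rng.normal(size=T)
hp = np.zeros(T)
for t in range(1, T):
    hp[t] = 0.5 * hp[t - 1] - 0.3 * rate[t - 1] + rng.normal(scale=0.5)

def ols_forecast(X, y, x_new):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return x_new @ beta

err_ar, err_bi = [], []
for t in range(80, T - 1):                              # expanding estimation window
    y = hp[1:t + 1]
    X_ar = np.column_stack([np.ones(t), hp[:t]])
    X_bi = np.column_stack([np.ones(t), hp[:t], rate[:t]])
    err_ar.append(hp[t + 1] - ols_forecast(X_ar, y, np.r_[1.0, hp[t]]))
    err_bi.append(hp[t + 1] - ols_forecast(X_bi, y, np.r_[1.0, hp[t], rate[t]]))

print("MSFE, AR(1):        ", np.mean(np.square(err_ar)))
print("MSFE, AR(1) + rate: ", np.mean(np.square(err_bi)))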



A new parsimonious recurrent forecasting model in singular spectrum analysis

2017-07-18T05:45:56.897581-05:00

Singular spectrum analysis (SSA) is a powerful nonparametric method in the area of time series analysis that has shown its capability in different application areas. SSA depends on two main choices: the window length L and the number of eigentriples used for grouping, r. One of the most important issues when analyzing time series is the forecasting of new observations. When using SSA for time series forecasting there are several alternative algorithms, the most widely used being the recurrent forecasting model, which assumes that a given observation can be written as a linear combination of the L−1 previous observations. However, when the window length L is large, the forecasting model is unlikely to be parsimonious. In this paper we propose a new parsimonious recurrent forecasting model that uses an optimal m(
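For reference, a compact sketch of the standard SSA recurrent forecasting algorithm that the paper's parsimonious variant modifies; the window length, number of eigentriples, and the series itself are illustrative choices.

import numpy as np

rng = np.random.default_rng(7)
t = np.arange(200)
x = np.sin(2 * np.pi * t / 12) + 0.05 * t + rng.normal(scale=0.2, size=t.size)

L, r, steps = 24, 4, 12                                 # window length, eigentriples, forecast horizon
K = x.size - L + 1
X = np.column_stack([x[i:i + L] for i in range(K)])     # trajectory (Hankel) matrix, L x K
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# reconstruct the signal from the r leading eigentriples by anti-diagonal averaging
Xr = (U[:, :r] * s[:r]) @ Vt[:r]
recon = np.array([np.mean(Xr[::-1].diagonal(k)) for k in range(-(L - 1), K)])

# recurrent forecasting coefficients from the leading left singular vectors
P = U[:, :r]
pi = P[-1]                                              # last components of the eigenvectors
a = (P[:-1] @ pi) / (1.0 - pi @ pi)                     # weights on the previous L-1 values

series = list(recon)
for _ in range(steps):
    series.append(a @ np.array(series[-(L - 1):]))      # each new value is a linear combination
print("12-step-ahead SSA forecasts:", np.round(series[-steps:], 3))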



Measuring the market risk of freight rates: A forecast combination approach

2017-08-01T02:15:35.345226-05:00

This paper addresses the issue of freight rate risk measurement via value at risk (VaR) and forecast combination methodologies while focusing on detailed performance evaluation. We contribute to the literature in three ways: First, we reevaluate the performance of popular VaR estimation methods on freight rates amid the adverse economic consequences of the recent financial and sovereign debt crisis. Second, we provide a detailed and extensive backtesting and evaluation methodology. Last, we propose a forecast combination approach for estimating VaR. Our findings suggest that our combination methods produce more accurate estimates for all the sectors under scrutiny, while in some cases they may be viewed as conservative since they tend to overestimate nominal VaR.
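An illustrative sketch of a simple VaR forecast combination and its backtest on simulated returns: two elementary one-day 95% VaR estimators (historical simulation and EWMA-normal, which merely stand in for the paper's candidate models) are averaged and evaluated with exception rates and Kupiec's unconditional coverage test.

import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
ret = rng.standard_t(df=4, size=1500) * 0.02            # hypothetical daily freight-rate returns
alpha, window, lam = 0.05, 250, 0.94

var_hs, var_ewma = [], []
sigma2 = ret[:window].var()
for t in range(window, len(ret)):
    var_hs.append(-np.quantile(ret[t - window:t], alpha))          # historical simulation VaR
    sigma2 = lam * sigma2 + (1 - lam) * ret[t - 1] ** 2
    var_ewma.append(-stats.norm.ppf(alpha) * np.sqrt(sigma2))      # EWMA-normal VaR
var_hs, var_ewma = np.array(var_hs), np.array(var_ewma)
var_comb = 0.5 * (var_hs + var_ewma)                               # equal-weight combination

def kupiec(returns, var, alpha):
    # unconditional coverage test: exceptions should occur with probability alpha
    hits = returns < -var
    x, T = hits.sum(), hits.size
    p_hat = x / T
    lr = -2 * ((T - x) * np.log(1 - alpha) + x * np.log(alpha)
               - (T - x) * np.log(1 - p_hat) - x * np.log(p_hat))
    return p_hat, stats.chi2.sf(lr, df=1)

for name, v in [("HS", var_hs), ("EWMA", var_ewma), ("Combined", var_comb)]:
    rate, pval = kupiec(ret[window:], v, alpha)
    print(f"{name:9s} exception rate = {rate:.3f}, Kupiec p-value = {pval:.3f}")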



Projection of population structure in China using least squares support vector machine in conjunction with a Leslie matrix model

2017-07-26T01:20:33.6529-05:00

China is a populous country that is facing serious aging problems due to the single-child birth policy. Debate is ongoing as to whether liberalizing the single-child policy to a two-child policy can mitigate China's aging problems without unacceptably increasing the population. The purpose of this paper is to apply machine learning theory to the demographic field and project China's population structure under different fertility policies. The population data derive from the fifth and sixth national census records obtained in 2000 and 2010, in addition to the annals published by the China National Bureau of Statistics. First, the sex ratio at birth is estimated according to the total fertility rate based on least squares regression of time series data. Second, the age-specific fertility rates and age-specific male/female mortality rates are projected by a least squares support vector machine (LS-SVM) model, which then serve as the input to a Leslie matrix model. Finally, the male/female age-specific population data projected by the Leslie matrix in a given year serve as the input parameters of the Leslie matrix for the following year, and the process is iterated in this manner until the target year is reached. The experimental results reveal that the proposed LS-SVM-Leslie model improves projection accuracy relative to the conventional Leslie matrix model in terms of the percentage error and mean algebraic percentage error. The results indicate that the total fertility rate should be controlled to around 2.0 to balance concerns associated with a large population against concerns associated with an aging population. Therefore, the two-child birth policy should be fully instituted in China. However, the fertility desire of women tends to be low due to the high cost of living and the pressure associated with employment, particularly in metropolitan areas. Thus, additional policies should be implemented to encourage fertility.
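A bare-bones sketch of the Leslie-matrix projection step; in the paper the age-specific fertility and mortality rates feeding this matrix are themselves projected each year by the LS-SVM model, whereas the age groups, rates, and initial population below are fixed illustrative placeholders.

import numpy as np

n_ages = 10                                            # e.g. 10 aggregated female age groups
fertility = np.zeros(n_ages); fertility[2:5] = [0.3, 0.5, 0.2]     # births per woman per period
survival = np.linspace(0.99, 0.70, n_ages - 1)                     # survival into the next age group

L = np.zeros((n_ages, n_ages))
L[0, :] = fertility                                    # first row: age-specific fertility rates
L[np.arange(1, n_ages), np.arange(n_ages - 1)] = survival          # sub-diagonal: survival rates

pop = np.full(n_ages, 1_000.0)                         # initial female population by age group
for year in range(2010, 2051, 5):                      # iterate the projection to the target year
    pop = L @ pop
print("projected age structure:", np.round(pop, 0))
print("projected total population:", round(pop.sum()))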



Predicting US bank failures: A comparison of logit and data mining models

2017-08-08T03:00:40.862801-05:00

Predicting bank failures is important as it enables bank regulators to take timely actions to prevent bank failures or reduce the cost of rescuing banks. This paper compares the logit model and data mining models in the prediction of bank failures in the USA between 2002 and 2010 using levels and rates of change of 16 financial ratios based on a cross-section sample. The models are estimated for the in-sample period 2002–2009, while data for the year 2010 are used for out-of-sample tests. The results suggest that the logit model predicts bank failures in-sample less precisely than data mining models, but produces fewer missed failures and false alarms out-of-sample.
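A schematic comparison in the spirit of the paper, with a random forest standing in for the data mining models and simulated data replacing the bank financial ratios; the split mimics an in-sample period and a hold-out year.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(9)
n, p = 3000, 16                                        # banks, financial ratios
X = rng.normal(size=(n, p))
prob = 1 / (1 + np.exp(-(X[:, 0] - 1.5 * X[:, 1] - 3)))   # hypothetical failure mechanism
y = rng.binomial(1, prob)

X_in, y_in, X_out, y_out = X[:2400], y[:2400], X[2400:], y[2400:]   # "2002-2009" vs "2010"
for name, model in [("Logit", LogisticRegression(max_iter=1000)),
                    ("Random forest", RandomForestClassifier(n_estimators=300, random_state=0))]:
    model.fit(X_in, y_in)
    tn, fp, fn, tp = confusion_matrix(y_out, model.predict(X_out)).ravel()
    print(f"{name}: missed failures = {fn}, false alarms = {fp}")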



ERRATUM

2018-02-14T01:31:48.785326-05:00