Journal of Forecasting

Wiley Online Library: Journal of Forecasting

Published: 2017-09-01T00:00:00-05:00


Direct multiperiod forecasting for algorithmic trading


This paper examines the performance of iterated and direct forecasts for the number of shares traded in high-frequency intraday data. Constructing direct forecasts in the context of formulating volume weighted average price trading strategies requires the generation of a sequence of multistep-ahead forecasts. I discuss nonlinear transformations to ensure nonnegative forecasts and lag length selection for generating a sequence of direct forecasts. In contrast to the literature based on low-frequency macroeconomic data, I find that direct multiperiod forecasts can outperform iterated forecasts when the conditioning information set is dynamically updated in real time.
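The iterated-versus-direct distinction the abstract draws can be sketched in a few lines. This is an illustrative AR(1) toy on made-up data, not the paper's high-frequency volume series: the iterated method fits a one-step model and chains it forward, while the direct method regresses the h-step-ahead value on the current one.

```python
def ols_slope(x, y):
    """Intercept and slope of y = a + b*x by ordinary least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def iterated_forecast(series, h):
    """Fit AR(1) one step ahead, then iterate the recursion h times."""
    a, b = ols_slope(series[:-1], series[1:])
    f = series[-1]
    for _ in range(h):
        f = a + b * f
    return f

def direct_forecast(series, h):
    """Regress y[t+h] on y[t] directly and forecast in a single step."""
    a, b = ols_slope(series[:-h], series[h:])
    return a + b * series[-1]

series = [10.0, 10.5, 10.2, 10.8, 10.6, 11.0, 10.9, 11.3, 11.1, 11.5]
print(iterated_forecast(series, 3))
print(direct_forecast(series, 3))
```

For h = 1 the two methods coincide; they diverge at longer horizons, which is where the paper's real-time updating of the conditioning information set matters.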

Predicting US bank failures: A comparison of logit and data mining models


Predicting bank failures is important as it enables bank regulators to take timely actions to prevent bank failures or reduce the cost of rescuing banks. This paper compares the logit model and data mining models in the prediction of bank failures in the USA between 2002 and 2010 using levels and rates of change of 16 financial ratios based on a cross-section sample. The models are estimated for the in-sample period 2002–2009, while data for the year 2010 are used for out-of-sample tests. The results suggest that the logit model predicts bank failures in-sample less precisely than data mining models, but produces fewer missed failures and false alarms out-of-sample.
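The logit benchmark in this comparison can be sketched as plain gradient-descent logistic regression. The two ratios and the toy observations below are invented for illustration; the paper uses 16 financial ratios in levels and rates of change.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logit(X, y, lr=0.1, epochs=2000):
    """Plain stochastic-gradient logistic regression, no regularization."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log-loss for one observation
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# hypothetical ratios: (capital ratio, nonperforming-loan rate); 1 = failed
X = [(0.02, 0.30), (0.03, 0.25), (0.12, 0.02), (0.10, 0.04)]
y = [1, 1, 0, 0]
w, b = fit_logit(X, y)
prob = sigmoid(sum(wj * xj for wj, xj in zip(w, (0.025, 0.28))) + b)
print(round(prob, 3))  # estimated failure probability for a weak bank
```

In-sample fit and out-of-sample missed failures/false alarms are then evaluated by thresholding these probabilities, which is where the paper finds the logit/data-mining trade-off.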

Measuring the market risk of freight rates: A forecast combination approach


This paper addresses the issue of freight rate risk measurement via value at risk (VaR) and forecast combination methodologies while focusing on detailed performance evaluation. We contribute to the literature in three ways: First, we reevaluate the performance of popular VaR estimation methods on freight rates amid the adverse economic consequences of the recent financial and sovereign debt crisis. Second, we provide a detailed and extensive backtesting and evaluation methodology. Last, we propose a forecast combination approach for estimating VaR. Our findings suggest that our combination methods produce more accurate estimates for all the sectors under scrutiny, while in some cases they may be viewed as conservative since they tend to overestimate nominal VaR.
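A minimal version of combining VaR estimates looks like this: average a historical-simulation VaR and a Gaussian VaR with equal weights. The weighting scheme and the data are illustrative assumptions; the paper's combination methods and backtesting are considerably more elaborate.

```python
import statistics

def historical_var(returns, alpha=0.05):
    """Empirical alpha-quantile loss, reported as a positive number."""
    s = sorted(returns)
    k = max(0, int(alpha * len(s)) - 1)
    return -s[k]

def gaussian_var(returns, alpha=0.05):
    """Normal VaR: -(mu + z_alpha * sigma), with z_0.05 ~= -1.645."""
    mu = statistics.mean(returns)
    sigma = statistics.stdev(returns)
    return -(mu - 1.645 * sigma)

def combined_var(returns, alpha=0.05):
    """Equal-weight forecast combination of the two estimates."""
    return 0.5 * historical_var(returns, alpha) + 0.5 * gaussian_var(returns, alpha)

returns = [-0.05, -0.02, -0.01, 0.0, 0.01, 0.01, 0.02, 0.02, 0.03, 0.04]
print(round(combined_var(returns), 4))
```

The combined estimate always lies between its components, which is one reason combinations tend toward the conservative (higher) VaR the abstract mentions when one component overestimates.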

Projection of population structure in China using least squares support vector machine in conjunction with a Leslie matrix model


China is a populous country that is facing serious aging problems due to the single-child birth policy. Debate is ongoing whether the liberalization of the single-child policy to a two-child policy can mitigate China's aging problems without unacceptably increasing the population. The purpose of this paper is to apply machine learning theory to the demographic field and project China's population structure under different fertility policies. The population data employed derive from the fifth and sixth national census records obtained in 2000 and 2010 in addition to the annals published by the China National Bureau of Statistics. Firstly, the sex ratio at birth is estimated according to the total fertility rate based on least squares regression of time series data. Secondly, the age-specific fertility rates and age-specific male/female mortality rates are projected by a least squares support vector machine (LS-SVM) model, which then serve as the input to a Leslie matrix model. Finally, the male/female age-specific population data projected by the Leslie matrix in a given year serve as the input parameters of the Leslie matrix for the following year, and the process is iterated in this manner until reaching the target year. The experimental results reveal that the proposed LS-SVM-Leslie model improves the projection accuracy relative to the conventional Leslie matrix model in terms of the percentage error and mean algebraic percentage error. The results indicate that the total fertility rate should be controlled to around 2.0 to balance concerns about a large population against concerns about an aging population. Therefore, the two-child birth policy should be fully instituted in China. However, the fertility desire of women tends to be low due to the high cost of living and the pressures associated with employment, particularly in metropolitan areas. Thus, additional policies should be implemented to encourage fertility.
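The Leslie-matrix iteration the abstract describes, where one year's projected population feeds the next year's projection, reduces to a simple recursion. The three age classes and the fertility/survival rates below are made up for illustration; in the paper these inputs come from the LS-SVM projections.

```python
def leslie_step(pop, fertility, survival):
    """One projection step: births from the fertility row of the Leslie
    matrix, aging via the subdiagonal survival rates."""
    births = sum(f * n for f, n in zip(fertility, pop))
    aged = [s * n for s, n in zip(survival, pop[:-1])]
    return [births] + aged

fertility = [0.0, 1.2, 0.8]   # offspring per individual per age class (hypothetical)
survival = [0.9, 0.7]         # survival from age class i to i+1 (hypothetical)
pop = [100.0, 80.0, 60.0]
for _ in range(5):            # iterate year by year toward a target year
    pop = leslie_step(pop, fertility, survival)
print([round(n, 1) for n in pop])
```

The LS-SVM-Leslie model of the paper replaces the fixed fertility and survival vectors with rates re-projected each year.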

A new parsimonious recurrent forecasting model in singular spectrum analysis


Singular spectrum analysis (SSA) is a powerful nonparametric method in the area of time series analysis that has shown its capability in different application areas. SSA depends on two main choices: the window length L and the number of eigentriples used for grouping, r. One of the most important issues when analyzing time series is the forecasting of new observations. When using SSA for time series forecasting there are several alternative algorithms, the most widely used being the recurrent forecasting model, which assumes that a given observation can be written as a linear combination of the L−1 previous observations. However, when the window length L is large, the forecasting model is unlikely to be parsimonious. In this paper we propose a new parsimonious recurrent forecasting model that uses an optimal m(
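Only the recurrent step of SSA forecasting is sketched here: given linear-recurrence coefficients, each new value is a combination of the preceding observations. Computing those coefficients requires the SVD-based decomposition and grouping that SSA itself performs, which is omitted; the coefficients and series below are illustrative.

```python
def ssa_recurrent_forecast(series, coeffs, steps):
    """Extend the series by `steps` values, each a linear combination of
    the last len(coeffs) observations, most recent first."""
    ext = list(series)
    m = len(coeffs)
    for _ in range(steps):
        window = ext[-m:][::-1]  # newest observation first
        ext.append(sum(a * x for a, x in zip(coeffs, window)))
    return ext[len(series):]

# a series satisfying y[t] = 2*y[t-1] - y[t-2], i.e. a straight line
series = [1.0, 2.0, 3.0, 4.0, 5.0]
print(ssa_recurrent_forecast(series, [2.0, -1.0], 3))
```

The paper's parsimony question is precisely about how many such coefficients are needed: the standard model uses L−1 of them, which grows with the window length.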

Forecasting house prices in OECD economies


In this paper, we forecast real house price growth of 16 OECD countries using information from domestic macroeconomic indicators and global measures of the housing market. Consistent with the findings for the US housing market, we find that the forecasts from an autoregressive model dominate the forecasts from the random walk model for most of the countries in our sample. More importantly, we find that the forecasts from a bivariate model that includes economically important domestic macroeconomic variables and two global indicators of the housing market significantly improve upon the univariate autoregressive model forecasts. Among all the variables, the mean square forecast error from the model with the country's domestic interest rates has the best performance for most of the countries. The country's income, industrial production, and stock markets are also found to have valuable information about the future movements in real house price growth. There is also some evidence supporting the influence of the global housing price growth in out-of-sample forecasting of real house price growth in these OECD countries.

Regional, individual and political determinants of FOMC members' key macroeconomic forecasts


We study Federal Open Market Committee members' individual forecasts of inflation and unemployment in the period 1992–2004. Our results imply that Governors and Bank presidents forecast differently, with Governors submitting lower inflation and higher unemployment rate forecasts than Bank presidents. For Bank presidents we find a regional bias, with higher district unemployment rates being associated with lower inflation and higher unemployment rate forecasts. Bank presidents' regional bias is more pronounced during the year prior to their elections or for nonvoting Bank presidents. Career backgrounds and political affiliations also affect individual forecast behavior.

On assessing the relative performance of default predictions


We compare the accuracy of default predictions as, for instance, produced by professional rating agencies. We extend previous results on partial orderings to nonidentical sets of obligors and show that the calibration requirement virtually rules out the possibility of some partial orderings and that the partial ordering based on the ROC curve is most easily achieved in practice. As an example, we show for more than 5,000 firms rated by Moody's and S&P that these ratings cannot be ranked according to their grade distributions given default or nondefault, but that Moody's dominate S&P with respect to the ROC criterion and the Gini curve.

Multi-step forecasting in the presence of breaks


This paper analyzes the relative performance of multi-step AR forecasting methods in the presence of breaks and data revisions. Our Monte Carlo simulations indicate that the type and timing of the break affect the relative accuracy of the methods. The iterated autoregressive method typically produces more accurate point and density forecasts than the alternative multi-step AR methods in unstable environments, especially if the parameters are subject to small breaks. This result holds regardless of whether data revisions add news or reduce noise. Empirical analysis of real-time US output and inflation series shows that the alternative multi-step methods only episodically improve upon the iterated method.

Short-term salmon price forecasting


This study establishes a benchmark for short-term salmon price forecasting. The weekly spot price of Norwegian farmed Atlantic salmon is predicted 1–5 weeks ahead using data from 2007 to 2014. Sixteen alternative forecasting methods are considered, ranging from classical time series models to customized machine learning techniques to salmon futures prices. The best predictions are delivered by k-nearest neighbors method for 1 week ahead; vector error correction model estimated using elastic net regularization for 2 and 3 weeks ahead; and futures prices for 4 and 5 weeks ahead. While the nominal gains in forecast accuracy over a naïve benchmark are small, the economic value of the forecasts is considerable. Using a simple trading strategy for timing the sales based on price forecasts could increase the net profit of a salmon farmer by around 7%.

Comparison of forecasting performances: Does normalization and variance stabilization method beat GARCH(1,1)-type models? Empirical evidence from the stock markets


In this paper, we present a comparison between the forecasting performances of the normalization and variance stabilization method (NoVaS) and the GARCH(1,1), EGARCH(1,1), and GJR-GARCH(1,1) models. The aim of this study is to compare the out-of-sample forecasting performances of the models used throughout the study and to show that the NoVaS method outperforms GARCH(1,1)-type models in this respect. We study the out-of-sample forecasting performances of GARCH(1,1)-type models and the NoVaS method based on the generalized error distribution, rather than the normal and Student's t-distributions. A further distinguishing feature of the study is the use of return series calculated both logarithmically and arithmetically when comparing forecasting performance. For the comparison, we focus on several datasets, such as the S&P 500 and the logarithmic and arithmetic BİST 100 return series. The key result of our analysis is that the NoVaS method delivers better out-of-sample forecasting performance than GARCH(1,1)-type models. This result can offer useful guidance in model building for out-of-sample forecasting purposes, aimed at improving forecasting accuracy.
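The GARCH(1,1) recursion that underlies all the benchmark models in such comparisons can be written in a few lines. The parameters and returns below are illustrative assumptions, not estimates from the paper's data.

```python
def garch_variances(returns, omega, alpha, beta):
    """GARCH(1,1) filter: sigma2[t] = omega + alpha*r[t-1]^2 + beta*sigma2[t-1],
    initialized at the unconditional variance (requires alpha + beta < 1)."""
    var = omega / (1.0 - alpha - beta)
    out = [var]
    for r in returns[:-1]:
        var = omega + alpha * r * r + beta * var
        out.append(var)
    return out

rets = [0.01, -0.03, 0.02, -0.01, 0.04]  # hypothetical daily returns
print([round(v, 6) for v in garch_variances(rets, 1e-5, 0.1, 0.85)])
```

EGARCH and GJR-GARCH modify this recursion to let negative returns raise volatility more than positive ones; NoVaS instead transforms the return series toward normality before forecasting.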

Prediction of α-stable GARCH and ARMA-GARCH-M models


Best predictions of generalized autoregressive conditional heteroskedasticity (GARCH) models with α-stable innovations, α-stable power-GARCH models, and autoregressive moving average (ARMA) models with GARCH-in-mean effects (ARMA-GARCH-M) are proposed. We present a sufficient condition for the stationarity of α-stable GARCH models. The prediction methods are easy to implement in practice. The proposed prediction methods are applied to predicting future values of the daily S&P 500 stock market index and wind speed data.

Does a lot help a lot? Forecasting stock returns with pooling strategies in a data-rich environment


A variety of recent studies provide a skeptical view on the predictability of stock returns. Empirical evidence shows that most prediction models suffer from a loss of information, model uncertainty, and structural instability by relying on low-dimensional information sets. In this study, we evaluate the predictive ability of various lately refined forecasting strategies, which handle these issues by incorporating information from many potential predictor variables simultaneously. We investigate whether forecasting strategies that (i) combine information and (ii) combine individual forecasts are useful to predict US stock returns, that is, the market excess return, size, value, and the momentum premium. Our results show that methods combining information have remarkable in-sample predictive ability. However, the out-of-sample performance suffers from highly volatile forecast errors. Forecast combinations face a better bias–efficiency trade-off, yielding a consistently superior forecast performance for the market excess return and the size premium even after the 1970s.

Yield curve forecast combinations based on bond portfolio performance


We propose an economically motivated forecast combination strategy in which model weights are related to portfolio returns obtained by a given forecast model. An empirical application based on an optimal mean–variance bond portfolio problem is used to highlight the advantages of the proposed approach with respect to combination methods based on statistical measures of forecast accuracy. We compute average net excess returns, standard deviation, and the Sharpe ratio of bond portfolios obtained with nine alternative yield curve specifications, as well as with 12 different forecast combination strategies. Return-based forecast combination schemes clearly outperformed approaches based on statistical measures of forecast accuracy in terms of economic criteria. Moreover, return-based approaches that dynamically select only the model with highest weight each period and discard all other models delivered even better results, evidencing not only the advantages of trimming forecast combinations but also the ability of the proposed approach to detect best-performing models. To analyze the robustness of our results, different levels of risk aversion and a different dataset are considered.

Prediction-based adaptive compositional model for seasonal time series analysis


In this paper we propose a new class of seasonal time series models, based on a stable seasonal composition assumption. With the objective of forecasting the sum of the next ℓ observations, the concept of rolling season is adopted and a structure of rolling conditional distributions is formulated. The probabilistic properties, estimation and prediction procedures, and the forecasting performance of the model are studied and demonstrated with simulations and real examples.

The impact of parameter and model uncertainty on market risk predictions from GARCH-type models


We study the effect of parameter and model uncertainty on the left-tail of predictive densities and in particular on VaR forecasts. To this end, we evaluate the predictive performance of several GARCH-type models estimated via Bayesian and maximum likelihood techniques. In addition to individual models, several combination methods are considered, such as Bayesian model averaging and (censored) optimal pooling for linear, log or beta linear pools. Daily returns for a set of stock market indexes are predicted over about 13 years from the early 2000s. We find that Bayesian predictive densities improve the VaR backtest at the 1% risk level for single models and for linear and log pools. We also find that the robust VaR backtest exhibited by linear and log pools is better than the backtest of single models at the 5% risk level. Finally, the equally weighted linear pool of Bayesian predictives tends to be the best VaR forecaster in a set of 42 forecasting techniques.

What can we learn from the fifties?


Economists have increasingly elicited probabilistic expectations from survey respondents. Subjective probabilistic expectations show great promise to improve the estimation of structural models of decision making under uncertainty. However, a robust finding in these surveys is an inappropriate heap of responses at “50%,” suggesting that some of these responses are uninformative. The way these 50s are treated in the subsequent analysis is of major importance. Taking the 50s at face value will bias any aggregate statistics. Conversely, deleting them is not appropriate if some of these answers do convey some information. Furthermore, the attention of researchers is so focused on this heap of 50s that they do not consider the possibility that other answers may be uninformative as well. This paper proposes to take a fresh look at these questions using a new method based on weak assumptions to identify the informativeness of an answer. Applying the method to probabilistic expectations of equity returns in three waves of the Survey of Economic Expectations in 1999–2001, I find that: (i) at least 65% of the 50s convey no information at all; (ii) it is the answer most often provided among the answers identified as uninformative; (iii) but even if the 50s are a major contributor to noise, they represent at best 70% of the identified uninformative answers. These findings have various implications for survey design.

Improvement of the Liu-type Shiller estimator for distributed lag models


The problem of multicollinearity produces undesirable effects on ordinary least squares (OLS), Almon and Shiller estimators for distributed lag models. Therefore, we introduce a Liu-type Shiller estimator to deal with multicollinearity for distributed lag models. Moreover, we theoretically compare the predictive performance of the Liu-type Shiller estimator with OLS and the Shiller estimators by the prediction mean square error criterion under the target function. Furthermore, an extensive Monte Carlo simulation study is carried out to evaluate the predictive performance of the Liu-type Shiller estimator.

Mortality effects of temperature changes in the United Kingdom


Temperature changes are known to affect the social and environmental determinants of health in various ways. Consequently, excess deaths as a result of extreme weather conditions may increase over the coming decades because of climate change. In this paper, the relationship between trends in mortality and trends in temperature change (as a proxy) is investigated using annual data and for specified (warm and cold) periods during the year in the UK. A thoughtful statistical analysis is implemented and a new stochastic, central mortality rate model is proposed. The new model encompasses the good features of the Lee and Carter (Journal of the American Statistical Association, 1992, 87: 659–671) model and its recent extensions, and for the very first time includes an exogenous factor which is a temperature-related factor. The new model is shown to provide a significantly better-fitting performance and more interpretable forecasts. An illustrative example of pricing a life insurance product is provided and discussed.

Adjusting for information content when comparing forecast performance


Cross-institutional forecast evaluations may be severely distorted by the fact that forecasts are made at different points in time and therefore with different amounts of information. This paper proposes a method to account for these differences when analyzing an unbalanced panel of forecasts. The method computes the timing effect and the forecaster's ability simultaneously. Monte Carlo simulation demonstrates that evaluations that do not adjust for the differences in information content may be misleading. In addition, the method is applied to a real-world dataset of 10 Swedish forecasters for the period 1999–2015. The results show that the ranking of the forecasters is affected by the proposed adjustment.

PARX model for football match predictions


We propose an innovative approach to model and predict the outcome of football matches based on the Poisson autoregression with exogenous covariates (PARX) model recently proposed by Agosto, Cavaliere, Kristensen, and Rahbek (Journal of Empirical Finance, 2016, 38(B), 640–663). We show that this methodology is particularly suited to model the goal distribution of a football team and provides a good forecast performance that can be exploited to develop a profitable betting strategy. This paper improves the strand of literature on Poisson-based models, by proposing a specification able to capture the main characteristics of goal distribution. The betting strategy is based on the idea that the odds proposed by the market do not reflect the true probability of the match because they may also incorporate the betting volumes or strategic price settings in order to exploit bettors' biases. The out-of-sample performance of the PARX model is better than the reference approach by Dixon and Coles (Applied Statistics, 1997, 46(2), 265–280). We also evaluate our approach in a simple betting strategy, which is applied to English football Premier League data for the 2013–2014, 2014–2015, and 2015–2016 seasons. The results show that the return from the betting strategy is larger than 30% in most of the cases considered and may even exceed 100% if we consider an alternative strategy based on a predetermined threshold, which makes it possible to exploit the inefficiency of the betting market.

Forecasting intraday S&P 500 index returns: A functional time series approach


Financial data often take the form of a collection of curves that can be observed sequentially over time; for example, intraday stock price curves and intraday volatility curves. These curves can be viewed as a time series of functions that can be observed on equally spaced and dense grids. Owing to the so-called curse of dimensionality, the nature of high-dimensional data poses challenges from a statistical perspective; however, it also provides opportunities to analyze a rich source of information, so that the dynamic changes of short time intervals can be better understood. In this paper, we consider forecasting a time series of functions and propose a number of statistical methods that can be used to forecast 1-day-ahead intraday stock returns. As we sequentially observe new data, we also consider the use of dynamic updating in updating point and interval forecasts for achieving improved accuracy. The forecasting methods were validated through an empirical study of 5-minute intraday S&P 500 index returns.

The informational content of unconventional monetary policy on precious metal markets


This paper investigates the informational content of unconventional monetary policies and its effect on commodity markets, adopting a nonlinear approach for modeling volatility. The main question addressed is how the Bank of England, Bank of Japan, and European Central Bank's (ECB's) announcements concerning monetary easing affect two major commodities: gold and silver. Our empirical evidence based on daily and high-frequency data suggests that relevant information causes ambiguous valuation adjustments as well as stabilization or destabilization effects. Specifically, there is strong evidence that the Japanese Central Bank strengthens the precious metal markets by increasing their returns and by causing stabilization effects, in contrast to the ECB, which has opposite results, mainly due to the heterogeneous expectations of investors within these markets. These asymmetries across central banks' effects on gold and silver risk–return profile imply that the ECB unconventional monetary easing informational content opposes its stated mission, adding uncertainty in precious metals markets.

Forecasting US interest rates and business cycle with a nonlinear regime switching VAR model


This paper introduces a regime switching vector autoregressive model with time-varying regime probabilities, where the regime switching dynamics is described by an observable binary response variable predicted simultaneously with the variables subject to regime changes. Dependence on the observed binary variable distinguishes the model from various previously proposed multivariate regime switching models, facilitating a handy simulation-based multistep forecasting method. An empirical application shows a strong bidirectional predictive linkage between US interest rates and NBER business cycle recession and expansion periods. Due to the predictability of the business cycle regimes, the proposed model yields superior out-of-sample forecasts of the US short-term interest rate and the term spread compared with the linear and nonlinear vector autoregressive (VAR) models, including the Markov switching VAR model.

Modelling and Trading the English and German Stock Markets with Novelty Optimization Techniques


The motivation for this paper was the introduction of novel short-term models to trade the FTSE 100 and DAX 30 exchange-traded fund (ETF) indices. The major contributions of this paper include the introduction of an input selection criterion when utilizing an expansive universe of inputs, a hybrid combination of particle swarm optimization (PSO) with radial basis function (RBF) neural networks, the application of a PSO algorithm to a traditional autoregressive moving average (ARMA) model, the application of a PSO algorithm to a higher-order neural network and, finally, the introduction of a multi-objective algorithm to optimize statistical and trading performance when trading an index. All the machine learning-based methodologies and the conventional models are adapted and optimized to model the index. A PSO algorithm is used to optimize the weights in a traditional RBF neural network, in a higher-order neural network (HONN) and in the AR and MA terms of an ARMA model. To check the statistical and empirical accuracy of the novel models, we benchmark them against a traditional HONN, an ARMA model, a moving average convergence/divergence (MACD) model and a naïve strategy. More specifically, the trading and statistical performance of all models is investigated in a forecast simulation of the FTSE 100 and DAX 30 ETF time series over the period January 2004 to December 2015, using the last 3 years for out-of-sample testing. Finally, the empirical and statistical results indicate that the PSO-RBF model outperforms all other examined models in terms of trading accuracy and profitability, even with mixed inputs and with only autoregressive inputs. Copyright © 2016 John Wiley & Sons, Ltd.

Forecasting the Daily Time-Varying Beta of European Banks During the Crisis Period: Comparison Between GARCH Models and the Kalman Filter


The intention of this paper is to empirically forecast the daily betas of a few European banks by means of four generalized autoregressive conditional heteroscedasticity (GARCH) models and the Kalman filter method during the pre-global financial crisis period and the crisis period. The four GARCH models employed are BEKK GARCH, DCC GARCH, DCC-MIDAS GARCH and Gaussian-copula GARCH. The data consist of daily stock prices from 2001 to 2013 from two large banks each from Austria, Belgium, Greece, Holland, Ireland, Italy, Portugal and Spain. We apply the rolling forecasting method and model confidence sets (MCS) to compare the daily forecasting ability of the five models during one month of the pre-crisis (January 2007) and crisis (January 2013) periods. Based on the MCS results, BEKK proves the best model in the January 2007 period, and the Kalman filter clearly outperforms the other models during the January 2013 period. The results have implications for the choice of model during different periods by practitioners and academics.

Long Memory of Financial Time Series and Hidden Markov Models with Time-Varying Parameters


Hidden Markov models are often used to model daily returns and to infer the hidden state of financial markets. Previous studies have found that the estimated models change over time, but the implications of the time-varying behavior have not been thoroughly examined. This paper presents an adaptive estimation approach that allows for the parameters of the estimated models to be time varying. It is shown that a two-state Gaussian hidden Markov model with time-varying parameters is able to reproduce the long memory of squared daily returns that was previously believed to be the most difficult fact to reproduce with a hidden Markov model. Capturing the time-varying behavior of the parameters also leads to improved one-step density forecasts. Finally, it is shown that the forecasting performance of the estimated models can be further improved using local smoothing to forecast the parameter variations.

Exploiting Spillovers to Forecast Crashes


We develop Hawkes models in which events are triggered through self-excitation as well as cross-excitation. We examine whether incorporating cross-excitation improves the forecasts of extremes in asset returns compared to only self-excitation. The models are applied to US stocks, bonds and dollar exchange rates. We predict the probability of crashes in the series and the value at risk (VaR) over a period that includes the financial crisis of 2008 using a moving window. A Lagrange multiplier test suggests the presence of cross-excitation for these series. Out-of-sample, we find that the models that include spillover effects forecast crashes and the VaR significantly more accurately than the models without these effects.
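The self- and cross-excitation structure can be made concrete with a bivariate Hawkes intensity under exponential kernels. The baseline rates, excitation matrix and event times below are illustrative assumptions, not the paper's estimated parameters.

```python
import math

def intensity(t, events, mu, alpha, beta):
    """lambda_i(t) = mu[i] + sum over past events s in series j of
    alpha[i][j] * exp(-beta * (t - s)); off-diagonal alpha terms are
    the cross-excitation (spillover) effects."""
    lam = list(mu)
    for j, times in enumerate(events):
        for s in times:
            if s < t:
                for i in range(len(mu)):
                    lam[i] += alpha[i][j] * math.exp(-beta * (t - s))
    return lam

mu = [0.1, 0.1]               # baseline crash intensities
alpha = [[0.5, 0.2],          # row i: how series i reacts to events in j
         [0.3, 0.4]]
beta = 1.0                    # exponential decay rate of excitation
events = [[1.0, 2.5], [2.0]]  # past event times in series 0 and series 1
print([round(x, 3) for x in intensity(3.0, events, mu, alpha, beta)])
```

Setting the off-diagonal entries of alpha to zero removes the spillovers, which is the restricted (self-excitation only) model the paper tests against.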

The US Dollar/Euro Exchange Rate: Structural Modeling and Forecasting During the Recent Financial Crises


The paper investigates the determinants of the US dollar/euro exchange rate within the framework of the asset pricing theory of exchange rate determination, which posits that current exchange rate fluctuations are determined by the entire path of current and future revisions in expectations about fundamentals. In this perspective, we innovate by conditioning on the Fama–French and Carhart risk factors, which directly measure changing market expectations about the economic outlook, as well as on new financial condition indexes and macroeconomic variables. The macro-finance augmented econometric model has remarkable in-sample and out-of-sample predictive ability, largely outperforming a standard autoregressive specification. We also document a stable relationship between the US dollar/euro Carhart momentum conditional correlation (CCW) and the euro area business cycle. CCW signals a progressive weakening in economic conditions since June 2014, consistent with the scattered recovery from the sovereign debt crisis and the new Greek solvency crisis that erupted in late spring/early summer 2015.

A Comparison of the Forecasting Ability of Immediate Price Impact Models


As a consequence of recent technological advances and the proliferation of algorithmic and high-frequency trading, the cost of trading in financial markets has irrevocably changed. One important change, known as price impact, relates to how trading affects prices. Price impact represents the largest cost associated with trading. Forecasting price impact is very important as it can provide estimates of trading profits after costs and also suggest optimal execution strategies. Although several models have recently been developed which may forecast the immediate price impact of individual trades, limited work has been done to compare their relative performance. We provide a comprehensive performance evaluation of these models and test for statistically significant outperformance amongst candidate models using out-of-sample forecasts. We find that normalizing price impact by its average value significantly enhances the performance of traditional non-normalized models as the normalization factor captures some of the dynamics of price impact.

Nonlinearities in the CAPM: Evidence from Developed and Emerging Markets


This paper examines the forecasting ability of the nonlinear specifications of the market model. We propose a conditional two-moment market model with a time-varying systematic covariance (beta) risk in the form of a mean reverting process of the state-space model via the Kalman filter algorithm. In addition, we account for the systematic component of co-skewness and co-kurtosis by considering higher moments. The analysis is implemented using data from the stock indices of several developed and emerging stock markets. The empirical findings favour the time-varying market model approaches, which outperform linear model specifications both in terms of model fit and predictability. In particular, higher moments are necessary for datasets that involve structural changes and/or market inefficiencies, which are common in most emerging stock markets. Copyright © 2016 John Wiley & Sons, Ltd.
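The state-space idea can be sketched in a few lines of numpy. This is a stripped-down illustration only: the hyperparameters below are assumed rather than estimated, and the paper's higher-moment (co-skewness, co-kurtosis) terms are omitted.

```python
import numpy as np

def filter_tv_beta(r_m, r_i, phi=1.0, beta_bar=1.0, q=1e-4, r=2.5e-3,
                   b0=1.0, p0=1.0):
    """Kalman filter for a time-varying CAPM beta.

    State:       beta_t = (1 - phi) * beta_bar + phi * beta_{t-1} + w_t,
                 w_t ~ N(0, q)  (phi < 1 gives the mean-reverting case)
    Observation: r_i_t = beta_t * r_m_t + e_t,  e_t ~ N(0, r)

    All hyperparameters are illustrative assumptions; in practice they
    would be estimated, e.g. by maximum likelihood.
    """
    b, p = b0, p0
    betas = np.empty(len(r_m))
    for t, (h, y) in enumerate(zip(r_m, r_i)):
        # Predict step
        b_pred = (1 - phi) * beta_bar + phi * b
        p_pred = phi ** 2 * p + q
        # Update step with the observed asset return
        s = h * p_pred * h + r          # innovation variance
        k = p_pred * h / s              # Kalman gain
        b = b_pred + k * (y - h * b_pred)
        p = (1.0 - k * h) * p_pred
        betas[t] = b
    return betas

# Demo: constant true beta of 1.2; the filtered path should settle near it.
rng = np.random.default_rng(1)
r_m = rng.normal(0.0, 1.0, 1500)
r_i = 1.2 * r_m + rng.normal(0.0, 0.05, 1500)
betas = filter_tv_beta(r_m, r_i)
print(round(betas[-500:].mean(), 2))
```

With `phi < 1` the same filter gives the mean-reverting beta process described in the abstract; the demo uses the random-walk special case for simplicity.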

The importance of time-varying volatility and country interactions in forecasting economic activity


This paper examines the relative importance of allowing for time-varying volatility and country interactions in a forecast model of economic activity. Both features are introduced by augmenting autoregressive models of growth with cross-country weighted averages of growth and with the generalized autoregressive conditional heteroskedasticity (GARCH) framework. The forecasts are evaluated using statistical criteria through point and density forecasts, and an economic criterion based on forecasting recessions. The results show that, compared to an autoregressive model, both components improve forecast ability in terms of point and density forecasts, especially one-period-ahead forecasts, but that the forecast ability is not stable over time. The random walk model, however, still dominates in terms of forecasting recessions.
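A minimal numpy sketch of the augmented model: an autoregression extended with a cross-country growth average, with a GARCH(1,1) filter on the residuals to deliver a density forecast. The simulated data, coefficients, and fixed GARCH parameters are all illustrative assumptions, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate domestic growth y that loads on its own lag and on a
# cross-country weighted average x of foreign growth (the process and
# coefficients are assumptions for this demo).
T = 3000
x = rng.normal(0.0, 1.0, T)                    # foreign-growth average
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.2 + 0.5 * y[t - 1] + 0.3 * x[t - 1] + rng.normal(0.0, 0.5)

# Estimate the augmented autoregression by least squares.
X = np.column_stack([np.ones(T - 1), y[:-1], x[:-1]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
resid = y[1:] - X @ coef

# GARCH(1,1) filter on the residuals with fixed, assumed parameters;
# a real application would estimate omega, alpha, and beta.
omega, alpha, beta = 0.01, 0.05, 0.90
h = np.empty(len(resid))
h[0] = resid.var()
for t in range(1, len(resid)):
    h[t] = omega + alpha * resid[t - 1] ** 2 + beta * h[t - 1]

# One-step-ahead density forecast: N(mean_fc, var_fc).
mean_fc = coef @ np.array([1.0, y[-1], x[-1]])
var_fc = omega + alpha * resid[-1] ** 2 + beta * h[-1]
print(coef.round(2), round(var_fc, 3))
```

The time-varying `var_fc` is what turns the point forecast into the density forecast evaluated in the paper.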

Forecast robustness in macroeconometric models


This paper investigates potential invariance of mean forecast errors to structural breaks in the data generating process. From the general forecasting literature, such robustness is expected to be a rare occurrence. With the aid of a stylized macro model we are able to identify some economically relevant cases of robustness and to interpret them economically. We give an interpretation in terms of co-breaking. The analytical results accord well with the forecasting record of a medium-scale econometric model of the Norwegian economy.

Forecasting key US macroeconomic variables with a factor-augmented Qual VAR


In this paper, we first extract factors from a monthly dataset of 130 macroeconomic and financial variables. These extracted factors are then used to construct a factor-augmented qualitative vector autoregressive (FA-Qual VAR) model to forecast industrial production growth, inflation, the Federal funds rate, and the term spread based on a pseudo out-of-sample recursive forecasting exercise over an out-of-sample period of 1980:1 to 2014:12, using an in-sample period of 1960:1 to 1979:12. Short-, medium-, and long-run horizons of 1, 6, 12, and 24 months ahead are considered. The forecast from the FA-Qual VAR is compared with that of a standard VAR model, a Qual VAR model, and a factor-augmented VAR (FAVAR). In general, we observe that the FA-Qual VAR tends to perform significantly better than the VAR, Qual VAR and FAVAR (barring some exceptions relative to the latter). In addition, we find that the Qual VARs are also well equipped to forecast the probability of recessions when compared to probit models.
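The factor-augmentation step can be sketched in numpy: extract principal components from a large standardized panel, then fit a small VAR on the factors. The panel dimensions mirror the abstract, but the data-generating process is assumed, and the qualitative recession variable and Bayesian estimation that define the Qual VAR are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a monthly panel of 130 indicators driven by 3 common
# factors plus idiosyncratic noise (an illustrative assumption).
T, N, K = 240, 130, 3
factors = rng.normal(size=(T, K))
loadings = rng.normal(size=(K, N))
panel = factors @ loadings + 0.5 * rng.normal(size=(T, N))

# Extract factors as the first K principal components of the
# standardized panel via SVD.
Z = (panel - panel.mean(0)) / panel.std(0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
F = U[:, :K] * S[:K]                      # estimated factors, T x K

# Fit a VAR(1) on the estimated factors by least squares and form a
# one-step-ahead forecast (a stand-in for the richer FA-Qual VAR).
X = np.column_stack([np.ones(T - 1), F[:-1]])
B, *_ = np.linalg.lstsq(X, F[1:], rcond=None)
forecast = np.concatenate([[1.0], F[-1]]) @ B
print(forecast.shape)
```

The estimated factor space should span the true factors almost exactly when the cross-section is this large, which is what makes factor augmentation attractive for big monthly panels.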

Mincer–Zarnowitz quantile and expectile regressions for forecast evaluations under asymmetric loss functions


Forecasts are pervasive in all areas of applications in business and daily life. Hence evaluating the accuracy of a forecast is important for both the generators and consumers of forecasts. There are two aspects in forecast evaluation: (a) measuring the accuracy of past forecasts using some summary statistics, and (b) testing the optimality properties of the forecasts through some diagnostic tests. On measuring the accuracy of a past forecast, this paper illustrates that the summary statistics used should match the loss function that was used to generate the forecast. If there is strong evidence that an asymmetric loss function has been used in the generation of a forecast, then a summary statistic that corresponds to that asymmetric loss function should be used in assessing the accuracy of the forecast instead of the popular root mean square error or mean absolute error. On testing the optimality of the forecasts, it is demonstrated how the quantile regressions set in the prediction–realization framework of Mincer and Zarnowitz (in J. Mincer (Ed.), Economic Forecasts and Expectations: Analysis of Forecasting Behavior and Performance (pp. 14–20), 1969) can be used to recover the unknown parameter that controls the potentially asymmetric loss function used in generating the past forecasts. Finally, the prediction–realization framework is applied to the Federal Reserve's economic growth forecast and forecast sharing in a PC manufacturing supply chain. It is found that the Federal Reserve treats overprediction as approximately 1.5 times as costly as underprediction. It is also found that the PC manufacturer weights positive forecast errors (underforecasts) as about four times as costly as negative forecast errors (overforecasts).
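The recovery of the loss parameter can be illustrated with a simplified special case: under lin–lin loss the optimal forecast is a conditional quantile, so the asymmetry shows up in the empirical coverage of past forecasts. The sketch below uses this coverage check rather than the paper's full Mincer–Zarnowitz quantile/expectile regressions, and the data-generating process is an illustrative assumption (the 1.5 cost ratio is chosen to echo the Federal Reserve result).

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(4)

# Under lin-lin loss with cost a per unit of underprediction and b per
# unit of overprediction, the optimal forecast is the tau-quantile of
# the predictive distribution with tau = a / (a + b).  Simulate a
# forecaster for whom overprediction is 1.5x as costly (b = 1.5a),
# so tau = 1 / (1 + 1.5) = 0.4.
tau_true = 1.0 / (1.0 + 1.5)
n = 20000
mu = rng.normal(0.0, 2.0, n)                   # conditional means
y = mu + rng.normal(0.0, 1.0, n)               # realizations
f = mu + NormalDist().inv_cdf(tau_true)        # optimal quantile forecasts

# Recover tau from the empirical coverage P(y <= f); the implied cost
# ratio of over- to underprediction is (1 - tau) / tau.
tau_hat = np.mean(y <= f)
ratio_hat = (1.0 - tau_hat) / tau_hat
print(round(tau_hat, 2), round(ratio_hat, 2))
```

The quantile-regression version of this exercise additionally tests the Mincer–Zarnowitz optimality restrictions (zero intercept, unit slope) at the recovered tau.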

Modeling and forecasting realized volatility in German–Austrian continuous intraday electricity prices


This paper uses high-frequency continuous intraday electricity price data from the EPEX market to estimate and forecast realized volatility. Three different jump tests are used to break down the variation into jump and continuous components using quadratic variation theory. Several heterogeneous autoregressive models are then estimated for the logarithmic and standard deviation transformations. Generalized autoregressive conditional heteroskedasticity (GARCH) structures are included in the error terms of the models when evidence of conditional heteroskedasticity is found. Model selection is based on various out-of-sample criteria. Results show that decomposition of realized volatility is important for forecasting and that the decision whether to include GARCH-type innovations might depend on the transformation selected. Finally, results are sensitive to the jump test used in the case of the standard deviation transformation.
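One common way to separate the jump and continuous components, sketched here in numpy, is to compare realized variance with bipower variation, which is robust to rare jumps; this is only one of several separation rules (the paper compares three jump tests), and the simulated return sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def rv_bv_split(r):
    """Split realized variance into continuous and jump components
    using bipower variation: BV estimates the continuous (integrated)
    variance, so max(RV - BV, 0) estimates the jump contribution."""
    rv = np.sum(r ** 2)
    bv = (np.pi / 2.0) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))
    jump = max(rv - bv, 0.0)          # BV is robust to rare jumps
    return rv, bv, jump

# Simulate one day of intraday returns, then the same day with a
# single large price spike spliced in (sizes are illustrative; spikes
# of this kind are common in continuous intraday electricity prices).
r = rng.normal(0.0, 0.01, 780)
r_jump = r.copy()
r_jump[400] = 0.15                    # the jump

rv0, bv0, jump0 = rv_bv_split(r)
rv1, bv1, jump1 = rv_bv_split(r_jump)
print(round(jump0, 4), round(jump1, 4))
```

The heterogeneous autoregressive forecasting models in the paper are then fitted to (transformations of) these continuous and jump series separately, which is why the decomposition matters for forecast accuracy.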

Understanding algorithm aversion: When is advice from automation discounted?


Forecasting advice from human advisors is often utilized more than advice from automation. There is little understanding of why “algorithm aversion” occurs, or specific conditions that may exaggerate it. This paper first reviews literature from two fields—interpersonal advice and human–automation trust—that can inform our understanding of the underlying causes of the phenomenon. Then, an experiment is conducted to search for these underlying causes. We do not replicate the finding that human advice is generally utilized more than automated advice. However, after receiving bad advice, utilization of automated advice decreased significantly more than advice from humans. We also find that decision makers describe themselves as having much more in common with human than automated advisors despite there being no interpersonal relationship in our study. Results are discussed in relation to other findings from the forecasting and human–automation trust fields and provide a new perspective on what causes and exaggerates algorithm aversion.

Robust estimation of conditional variance of time series using density power divergences


Suppose Z_t is the square of a time series Y_t whose conditional mean is zero. We do not specify a model for Y_t, but assume that there exists a p×1 parameter vector Φ such that the conditional distribution of Z_t | Z_{t−1} is the same as that of Z_t | Φ^T Z_{t−1}, where Z_{t−1} = (Z_{t−1}, …, Z_{t−p})^T for some lag p ≥ 1. Consequently, the conditional variance of Y_t is some function of Φ^T Z_{t−1}. To estimate Φ, we propose a robust estimation methodology based on density power divergences (DPD) indexed by a tuning parameter α ∈ [0, 1], which yields a continuum of estimators, {Φ̂_α : α ∈ [0, 1]}, where α controls the trade-off between robustness and efficiency of the DPD estimators. For each α, Φ̂_α is shown to be strongly consistent. We develop data-dependent criteria for the selection of optimal α and lag p in practice. We illustrate the usefulness of our DPD methodology via simulation studies for ARCH-type models, where the errors are drawn from a gross-error contamination model and the conditional variance is a linear and/or nonlinear function of Φ^T Z_{t−1}. Furthermore, we analyze the Chicago Board Options Exchange Dow Jones volatility index data and show that our DPD approach yields viable models for the conditional variance, which are as good as, or superior to, ARCH/GARCH models and two other divergence-based models in terms of in-sample and out-of-sample forecasts.
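The DPD idea can be sketched in numpy for the static special case of a constant conditional variance (effectively lag p = 0), where the robustness–efficiency role of α is easiest to see; the contamination setup and the choice α = 0.5 are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(6)

def dpd_objective(sigma, z, alpha):
    """Empirical density power divergence between the data and a
    N(0, sigma^2) model:  integral(f^(1+alpha)) - (1 + 1/alpha) *
    mean(f(z)^alpha), dropping the term constant in sigma.  For a
    Gaussian, integral(f^(1+alpha)) has the closed form used below."""
    c = (2.0 * np.pi * sigma ** 2) ** (-alpha / 2.0)
    term1 = c / np.sqrt(1.0 + alpha)
    term2 = (1.0 + 1.0 / alpha) * np.mean(
        c * np.exp(-alpha * z ** 2 / (2.0 * sigma ** 2)))
    return term1 - term2

# Estimate the scale of N(0, 1) data under 5% gross-error
# contamination at z = 10 (setup is illustrative).
z = rng.normal(0.0, 1.0, 2000)
z[:100] = 10.0

sigmas = np.linspace(0.5, 4.0, 351)
alpha = 0.5                                   # robustness tuning parameter
dpd_sigma = sigmas[np.argmin([dpd_objective(s, z, alpha) for s in sigmas])]
mle_sigma = np.sqrt(np.mean(z ** 2))          # non-robust (alpha -> 0) limit
print(round(dpd_sigma, 2), round(mle_sigma, 2))
```

Because the contaminated points contribute almost nothing to `f(z)^alpha` near the true scale, the DPD estimate stays close to 1 while the maximum-likelihood scale is dragged far upward; larger α buys more robustness at some cost in efficiency.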

Modeling and forecasting aggregate stock market volatility in unstable environments using mixture innovation regressions


We perform Bayesian model averaging across different regressions selected from a set of predictors that includes lags of realized volatility, financial and macroeconomic variables. In our model average, we entertain different channels of instability by incorporating breaks in the regression coefficients of each individual model within our model average, breaks in the conditional error variance, or both. Changes in these parameters are driven by mixture distributions for state innovations (MIA) of linear Gaussian state-space models. This framework allows us to compare models that assume small and frequent changes with models that assume large but rare changes in the conditional mean and variance parameters. Results using S&P 500 monthly and quarterly realized volatility data from 1960 to 2014 suggest that Bayesian model averaging in combination with breaks in the regression coefficients and the error variance through MIA dynamics generates statistically significantly more accurate forecasts than the benchmark autoregressive model. However, relative to an MIA autoregression with breaks in the regression coefficients and the error variance, the model average delivers no substantial further improvement.