Journal of Applied Econometrics

Wiley Online Library : Journal of Applied Econometrics

Published: 2017-09-01T00:00:00-05:00


Measuring crisis risk using conditional copulas: An empirical analysis of the 2008 shipping crisis


The shipping crisis starting in 2008 was characterized by sharply decreasing freight rates and sharply increasing financing costs. We analyze the dependence structure of these two risk factors employing a conditional copula model. As conditioning factors we use the supply and demand of seaborne transportation. We find that crisis risk had already increased strongly about a year before the actual outbreak of the crisis and that the shipping crisis was predominantly driven by an oversupply of transport capacity. Therefore, market participants could have prevented or alleviated the consequences of the crisis by reducing the ordering and financing of new vessels.
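The paper's conditional copula model is not reproduced here, but the basic mechanics of measuring joint crisis risk with a copula can be sketched in a few lines: draw from a bivariate Gaussian copula (a stand-in for the authors' specification, with illustrative parameters) and compare the empirical frequency of joint tail events against the independence benchmark.

```python
import math, random

def gaussian_copula_sample(rho, n, seed=0):
    """Draw n pairs (u, v) from a bivariate Gaussian copula with correlation rho."""
    rng = random.Random(seed)
    def std_normal_cdf(x):
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        pairs.append((std_normal_cdf(z1), std_normal_cdf(z2)))
    return pairs

def joint_tail_prob(pairs, q=0.05):
    """Empirical probability that both margins fall below their q-quantile."""
    return sum(1 for u, v in pairs if u < q and v < q) / len(pairs)

pairs = gaussian_copula_sample(rho=0.8, n=50000)
joint = joint_tail_prob(pairs)   # joint 5% tail co-exceedance under dependence
indep = 0.05 * 0.05              # the same probability under independence
```

With strong dependence the joint 5% tail event occurs roughly an order of magnitude more often than the 0.25% implied by independence, which is the kind of co-crash risk a copula-based crisis measure is built to capture.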

Time series copulas for heteroskedastic data


We propose parametric copulas that capture serial dependence in stationary heteroskedastic time series. We suggest copulas for first-order Markov series, and then extend them to higher orders and multivariate series. We derive the copula of a volatility proxy, based on which we propose new measures of volatility dependence, including co-movement and spillover in multivariate series. In general, these depend upon the marginal distributions of the series. Using exchange rate returns, we show that the resulting copula models can capture their marginal distributions more accurately than univariate and multivariate generalized autoregressive conditional heteroskedasticity models, and produce more accurate value-at-risk forecasts.

An efficient Bayesian approach to multiple structural change in multivariate time series


This paper provides a feasible approach to estimation and forecasting of multiple structural breaks for vector autoregressions and other multivariate models. Owing to conjugate prior assumptions, we obtain a very efficient sampler for the regime allocation variable. A new hierarchical prior is introduced to allow for learning over different structural breaks. The model is extended to independent breaks in regression coefficients and the volatility parameters. Two empirical applications show the model's improvements over benchmarks. In a macro application with seven variables we empirically demonstrate the benefits of moving from a multivariate structural break model to a set of univariate structural break models to account for heterogeneous break patterns across data series.

Business, housing, and credit cycles


We use multivariate unobserved components models to estimate trend and cyclical components in gross domestic product (GDP), credit volumes, and house prices for the USA and the five largest European economies. With the exception of Germany, we find large and long cycles in credit and house prices, which are highly correlated with a medium-term component in GDP cycles. Differences across countries in the length and size of cycles appear to be related to the properties of national housing markets. The precision of pseudo real-time estimates of credit and house price cycles is roughly comparable to that of GDP cycles.

Improving Markov switching models using realized variance


This paper proposes a class of models that jointly model returns and ex post variance measures under a Markov switching framework. Both univariate and multivariate return versions of the model are introduced. Estimation can be conducted under a fixed dimension state space or an infinite one. The proposed models can be seen as nonlinear common factor models subject to Markov switching and are able to exploit the information content in both returns and ex post volatility measures. Applications to equity returns compare the proposed models to existing alternatives. The empirical results show that the joint models improve density forecasts for returns and point predictions of return variance. Using the information in ex post volatility measures can increase the precision of parameter estimates, sharpen the inference on the latent state variable, and improve portfolio decisions.
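The joint return/realized-variance models in the abstract are beyond a short sketch, but the Markov-switching machinery they build on rests on the Hamilton filter. Below is a minimal two-state version with regime-dependent volatility; the parameter values and return series are illustrative only, not estimates from the paper.

```python
import math

def hamilton_filter(returns, p00, p11, mu, sig_low, sig_high):
    """Filtered P(high-volatility state | data up to t) for a two-state
    Markov-switching model with regime-dependent variance."""
    def norm_pdf(x, m, s):
        return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))
    # initialize at the ergodic distribution of the two-state chain
    p_high = (1.0 - p00) / ((1.0 - p00) + (1.0 - p11))
    filtered = []
    for r in returns:
        # predict the state one step ahead, then update by Bayes' rule
        pred_high = p_high * p11 + (1.0 - p_high) * (1.0 - p00)
        like_high = pred_high * norm_pdf(r, mu, sig_high)
        like_low = (1.0 - pred_high) * norm_pdf(r, mu, sig_low)
        p_high = like_high / (like_high + like_low)
        filtered.append(p_high)
    return filtered

# calm stretch followed by a volatile stretch (illustrative returns)
sample = [0.1, -0.2, 0.15, 0.05, -0.1, 3.0, -2.5, 2.8, -3.1, 2.2]
filtered = hamilton_filter(sample, p00=0.95, p11=0.95, mu=0.0,
                           sig_low=0.5, sig_high=2.0)
```

Adding an ex post variance measure, as the paper does, feeds a second observable into the same Bayes update, which is why it sharpens inference on the latent state.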

Measuring the diffusion of housing prices across space and over time: Replication and further evidence


Brady (Journal of Applied Econometrics, 2011, 26(2), 213–231) studies how fast and how long a change in housing prices in one region affects its neighbors by estimating the impulse response functions using a spatial autoregressive model (SAR). This paper replicates Brady's empirical results, but reports different SAR test statistics. Additional robustness checks are conducted by analyzing three different housing price indexes covering a more extensive period. Analysis shows that the model specifications and model estimates vary with the housing price indexes.

Self-employment among women: Do children matter more than we previously thought?


This paper presents an estimation approach that addresses the problems of sample selection and endogeneity of fertility decisions when estimating the effect of young children on women's self-employment. Using data from the National Longitudinal Survey of Youth 1979, 1982–2006, we find that ignoring self-selection and endogeneity leads to underestimating the effect of young children. Once both sources of bias are accounted for, the estimated effect of young children roughly triples when compared to uncorrected results. This finding is robust to several changes in the specification and to the use of a different dataset.

Do contractionary monetary policy shocks expand shadow banking?


Using VAR models for the USA, we find that a contractionary monetary policy shock has a persistent negative impact on the level of commercial bank assets, but increases the assets of shadow banks and securitization activity. To explain this “waterbed” effect, we propose a standard New Keynesian model featuring both commercial and shadow banks, and we show that the model comes close to explaining the empirical results. Our findings cast doubt on the idea that monetary policy can usefully “get in all the cracks” of the financial sector in a uniform way.

Comparing cross-country estimates of Lorenz curves using a Dirichlet distribution across estimators and datasets


Chotikapanich and Griffiths (Journal of Business and Economic Statistics, 2002, 20(2), 290–295) introduced the Dirichlet distribution to the estimation of Lorenz curves. This distribution naturally accommodates the proportional nature of income share data and the dependence structure between the shares. Chotikapanich and Griffiths fit a family of five Lorenz curves to one year of Swedish and Brazilian income share data using unconstrained maximum likelihood and unconstrained nonlinear least squares. We attempt to replicate the authors' results and extend their analyses using both constrained estimation techniques and five additional years of data. We successfully replicate a majority of the authors' results and find that some of their main qualitative conclusions also hold using our constrained estimators and additional data.

Binary response panel data models with sample selection and self-selection


We consider estimating binary response models on an unbalanced panel, where the outcome of the dependent variable may be missing due to nonrandom selection, or there is self-selection into a treatment. In the present paper, we first consider estimation of sample selection models and treatment effects using a fully parametric approach, where the error distribution is assumed to be normal in both primary and selection equations. Arbitrary time dependence in errors is permitted. Estimation of both coefficients and partial effects, as well as tests for selection bias, are discussed. Furthermore, we consider a semiparametric estimator of binary response panel data models with sample selection that is robust to a variety of error distributions. The estimator employs a control function approach to account for endogenous selection and permits consistent estimation of scaled coefficients and relative effects.

Identifying contagion


Identifying contagion effects during periods of financial crisis is known to be complicated by the changing volatility of asset returns during periods of stress. To untangle this, we propose a GARCH (generalized autoregressive conditional heteroskedasticity) common features approach, where systemic risk emerges from a common factor source (or indeed multiple factor sources) with contagion evident through possible changes in the factor loadings relating to the common factor(s). Within a portfolio-mimicking factor framework, this can be identified using moment conditions. We use this framework to identify contagion in three illustrations involving both single and multiple factor specifications: to the Asian currency markets in 1997–1998, to US sectoral equity indices in 2007–2009, and to the CDS (credit default swap) market during the European sovereign debt crisis of 2010–2013. The results reveal the extent to which contagion effects may be masked by not accounting for the sources of changed volatility apparent in simple measures such as correlation.
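As background to the GARCH common features framework, the GARCH(1,1) conditional variance recursion that the factor structure builds on fits in a few lines; the parameter values and returns here are illustrative only.

```python
def garch11_variances(returns, omega, alpha, beta):
    """Conditional variance path h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1},
    initialized at the unconditional variance omega / (1 - alpha - beta)."""
    h = omega / (1.0 - alpha - beta)
    path = [h]
    for r in returns[:-1]:
        h = omega + alpha * r * r + beta * h
        path.append(h)
    return path

# a single large shock raises the conditional variance, which then decays
rets = [0.0, 0.0, 5.0, 0.0, 0.0, 0.0]
h = garch11_variances(rets, omega=0.1, alpha=0.1, beta=0.8)
```

This built-in volatility clustering is exactly what makes raw correlations misleading during crises: correlations rise mechanically with volatility, so the paper identifies contagion through changes in factor loadings instead.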

Sequentially testing polynomial model hypotheses using power transforms of regressors


We provide a methodology for testing a polynomial model hypothesis by generalizing the approach and results of Baek, Cho, and Phillips (Journal of Econometrics, 2015, 187, 376–384; BCP), which test for neglected nonlinearity using power transforms of regressors against arbitrary nonlinearity. We use the BCP quasi-likelihood ratio test and deal with the new multifold identification problem that arises under the null of the polynomial model. The approach leads to convenient asymptotic theory for inference, has omnibus power against general nonlinear alternatives, and allows estimation of an unknown polynomial degree in a model by way of sequential testing, a technique that is useful in the application of sieve approximations. Simulations show good performance in the sequential test procedure in both identifying and estimating unknown polynomial order. The approach, which can be used empirically to test for misspecification, is applied to a Mincer (Journal of Political Economy, 1958, 66, 281–302; Schooling, Experience and Earnings, Columbia University Press, 1974) equation using data from Card (in Christofides, Grant, and Swidinsky (Eds.), Aspects of Labour Market Behaviour: Essays in Honour of John Vanderkamp, University of Toronto Press, 1995, 201-222) and Bierens and Ginther (Empirical Economics, 2001, 26, 307–324). The results confirm that the standard Mincer log earnings equation is readily shown to be misspecified. The applications consider different datasets and examine the impact of nonlinear effects of experience and schooling on earnings, allowing for flexibility in the respective polynomial representations.

A sequential Monte Carlo approach to inference in multiple-equation Markov-switching models


Vector autoregressions with Markov-switching parameters (MS-VARs) offer substantial gains in data fit over VARs with constant parameters. However, Bayesian inference for MS-VARs has remained challenging, impeding their uptake for empirical applications. We show that sequential Monte Carlo (SMC) estimators can accurately estimate MS-VAR posteriors. Relative to multi-step, model-specific MCMC routines, SMC has the advantages of generality, parallelizability, and freedom from reliance on particular analytical relationships between prior and likelihood. We use SMC's flexibility to demonstrate that model selection among MS-VARs can be highly sensitive to the choice of prior.

Estimating global bank network connectedness


We use LASSO methods to shrink, select, and estimate the high-dimensional network linking the publicly traded subset of the world's top 150 banks, 2003–2014. We characterize static network connectedness using full-sample estimation and dynamic network connectedness using rolling-window estimation. Statically, we find that global bank equity connectedness has a strong geographic component, whereas country sovereign bond connectedness does not. Dynamically, we find that equity connectedness increases during crises, with clear peaks during the Great Financial Crisis and each wave of the subsequent European Debt Crisis, and with movements coming mostly from changes in cross-country as opposed to within-country bank linkages.
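The LASSO step behind the network estimation can be illustrated with standard soft-thresholding coordinate descent on a toy regression; the actual application fits one such penalized regression per bank and reads connectedness off the coefficients. Everything below, data included, is a generic sketch and not the authors' specification.

```python
def soft_threshold(z, g):
    """Soft-thresholding operator, the workhorse of the LASSO update."""
    if z > g:
        return z - g
    if z < -g:
        return z + g
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for min_b (1/2n)*||y - Xb||^2 + lam*||b||_1."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residuals excluding coordinate j
            r = [y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            denom = sum(X[i][j] ** 2 for i in range(n)) / n
            b[j] = soft_threshold(rho, lam) / denom
    return b

# toy data: y depends on the first regressor only; LASSO zeroes out the second
X = [[1.0, 0.1], [-1.0, -0.2], [2.0, 0.1], [-2.0, 0.3], [1.5, -0.1], [-1.5, 0.2]]
y = [2.0, -2.0, 4.0, -4.0, 3.0, -3.0]
b = lasso_cd(X, y, lam=0.1)
```

The exact zero on the irrelevant coefficient is the selection property that makes a high-dimensional network of 150 banks estimable at all.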

Multivariate choices and identification of social interactions


This paper considers the identification of social interaction effects in the context of multivariate choices. First, we generalize the theoretical social interaction model to allow individuals to make interdependent choices in different activities. Based on the theoretical model, we propose a simultaneous equation network model and discuss the identification of social interaction effects in the econometric model. We also provide an empirical example to show the empirical salience of this model. Using the Add Health data, we find that a student's academic performance is affected not only by the academic performance of his peers but also by their screen-related activities.

Normalized CES supply systems: Replication of Klump, McAdam, and Willman (2007)


The analysis of Klump, McAdam, and Willman (Review of Economics and Statistics, 2007, 89, 183–192) is replicated using alternative software. Their results are verified substantively and, in large measure, numerically. Contributions include a more explicit consideration of the nested testing structure than has appeared previously, and the appropriate means of imposing and testing the special case of logarithmic growth in technology. Also, plots of the likelihood serve to emphasize that maxima in the neighborhood of a unitary elasticity of substitution are often a spurious artifact of the singularity of the model at this point, of which empirical researchers should beware.

Estimating the effects of the minimum wage in a developing country: A density discontinuity design approach


This paper proposes a framework to identify the effects of the minimum wage on the joint distribution of sector and wage in a developing country. I show how the discontinuity of the wage distribution around the minimum wage identifies the extent of noncompliance with the minimum wage policy, and how the conditional probability of sector given wage recovers the relationship between latent sector and wages. I apply the method to the “PNAD,” a nationwide representative Brazilian cross-sectional dataset, for the years 2001–2009. The results indicate that the size of the informal sector is increased by around 39% compared to what would prevail in the absence of the minimum wage, an effect attributable to (i) unemployment effects of the minimum wage on the formal sector and (ii) movements of workers from the formal to the informal sector as a response to the policy.

Estimating the distribution of welfare effects using quantiles


This paper proposes a framework to model welfare effects that are associated with a price change in a population of heterogeneous consumers. The framework is similar to that of Hausman and Newey (Econometrica, 1995, 63, 1445–1476), but allows for more general forms of heterogeneity. Individual demands are characterized by a general model that is nonparametric in the regressors, as well as monotonic in unobserved heterogeneity, allowing us to identify the distribution of welfare effects. We first argue why a decision maker should care about this distribution. Then we establish constructive identification, propose a sample counterparts estimator, and analyze its large-sample properties. Finally, we apply all concepts to measuring the heterogeneous effect of a change of gasoline price using US consumer data and find very substantial differences in individual effects across quantiles.

Predicting crude oil prices: Replication of the empirical results in “What do we learn from the price of crude oil?”


In addition to their theoretical analysis of the joint determination of oil futures prices and oil spot prices, Alquist and Kilian (Journal of Applied Econometrics, 2010, 25(4), 539–573) compare the out-of-sample accuracy of the random walk forecast with that of forecasts based on oil futures prices and other predictors. The results of my replication exercise are very similar to the original forecast accuracy results, but the relative accuracy of the random walk forecast and the futures-based forecast changes when the sample is extended to August 2016, consistent with the results of several other recent studies by Kilian and co-authors.
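The forecast comparison in this replication is a standard out-of-sample MSPE horse race. A self-contained toy version, with a simulated mean-reverting series standing in for oil prices (not the paper's data), shows the mechanics: the no-change forecast ignores mean reversion and pays for it in MSPE.

```python
import random

def simulate_ar1(n, phi, mu, sigma, seed=1):
    """Mean-reverting AR(1) path, a stand-in for a persistent price series."""
    rng = random.Random(seed)
    x, path = mu, []
    for _ in range(n):
        x = mu + phi * (x - mu) + rng.gauss(0.0, sigma)
        path.append(x)
    return path

def mspe(forecasts, actuals):
    """Mean squared prediction error over the evaluation sample."""
    return sum((f - a) ** 2 for f, a in zip(forecasts, actuals)) / len(actuals)

prices = simulate_ar1(n=2000, phi=0.5, mu=50.0, sigma=2.0)
actual = prices[1:]
rw_forecast = prices[:-1]                                     # no-change forecast
ar_forecast = [50.0 + 0.5 * (p - 50.0) for p in prices[:-1]]  # uses the true AR(1) law
mspe_rw = mspe(rw_forecast, actual)
mspe_ar = mspe(ar_forecast, actual)
```

The abstract's point is that such rankings are sample-dependent: extending the evaluation window to August 2016 reverses the relative standing of the random walk and the futures-based forecast.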

Difference-in-differences when the treatment status is observed in only one period


This paper considers the difference-in-differences (DID) method when the data come from repeated cross-sections and the treatment status is observed either before or after the implementation of a program. We propose a new method that point-identifies the average treatment effect on the treated (ATT) via a DID method when there is at least one proxy variable for the latent treatment. Key assumptions are the stationarity of the propensity score conditional on the proxy and an exclusion restriction that the proxy must satisfy with respect to the change in average outcomes over time conditional on the true treatment status. We propose a generalized method of moments estimator for the ATT and we show that the associated overidentification test can be used to test our key assumptions. The method is used to evaluate JUNTOS, a Peruvian conditional cash transfer program. We find that the program significantly increased the demand for health inputs among children and women of reproductive age.
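The paper's contribution is identification when treatment status is latent and only a proxy is observed; the DID estimand it builds on, with fully observed treatment, is just two differences of means. The numbers below are hypothetical.

```python
def did_att(pre_treat, post_treat, pre_ctrl, post_ctrl):
    """Canonical DID: outcome change for the treated minus change for controls."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(post_treat) - mean(pre_treat)) - (mean(post_ctrl) - mean(pre_ctrl))

# toy repeated cross-sections: treated outcomes rise by 5, controls by 2
att = did_att(pre_treat=[10.0, 12.0, 11.0], post_treat=[15.0, 17.0, 16.0],
              pre_ctrl=[9.0, 11.0, 10.0], post_ctrl=[11.0, 13.0, 12.0])
```

When treatment status is unobserved, the treated and control means above cannot be formed directly, which is the gap the proxy-variable GMM approach fills.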

Weak-instrument robust inference for two-sample instrumental variables regression


Instrumental variable (IV) methods for regression are well established. More recently, methods have been developed for statistical inference when the instruments are weakly correlated with the endogenous regressor, so that estimators are biased and no longer asymptotically normally distributed. This paper extends such inference to the case where two separate samples are used to implement instrumental variables estimation. We also relax the restrictive assumptions of homoskedastic error structure and equal moments of exogenous covariates across two samples commonly employed in the two-sample IV literature for strong IV inference. Monte Carlo experiments show good size properties of the proposed tests regardless of the strength of the instruments. We apply the proposed methods to two seminal empirical studies that adopt the two-sample IV framework.

Decomposing economic mobility transition matrices


We present a decomposition method for transition matrices to identify forces driving the persistence of economic status across generations. The method decomposes differences between an estimated transition matrix and a benchmark transition matrix into portions attributable to differences in characteristics between individuals from different households (a composition effect) and portions attributable to differing returns to these characteristics (a structure effect). A detailed decomposition based on copula theory further decomposes the composition effect into portions attributable to specific characteristics and their interactions. To examine potential drivers of economic persistence in the USA, we apply the method to white males from the 1979 US National Longitudinal Survey of Youth. Depending on the transition matrix entry of interest, differing characteristics between sons from different households explain between 40% and 70% of observed income persistence, with differing returns for these characteristics explaining the remaining gap. Further, detailed decompositions reveal significant heterogeneity in the role played by specific characteristics (e.g., education) across the income distribution.
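Estimating the transition matrix that the decomposition operates on is straightforward; a minimal version from paired parent/child quintile data follows (toy numbers, not NLSY79 results).

```python
def transition_matrix(parent_q, child_q, k=5):
    """Row-stochastic k x k mobility matrix: entry (i, j) is the share of
    children in quintile j among those whose parents were in quintile i."""
    counts = [[0] * k for _ in range(k)]
    for p, c in zip(parent_q, child_q):
        counts[p - 1][c - 1] += 1
    mat = []
    for row in counts:
        total = sum(row)
        mat.append([c / total if total else 0.0 for c in row])
    return mat

# toy paired quintiles (1 = bottom, 5 = top)
parents = [1, 1, 1, 1, 5, 5, 5, 5, 3, 3]
kids = [1, 1, 2, 3, 5, 5, 4, 5, 3, 2]
P = transition_matrix(parents, kids)
```

Large diagonal entries indicate persistence of economic status; the paper's method then decomposes the gap between such an estimated matrix and a benchmark into composition and structure effects.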

The evolution of scale economies in US banking


Continued consolidation of the US banking industry and a general increase in the size of banks have prompted some policymakers to consider policies that discourage banks from getting larger, including explicit caps on bank size. However, limits on the size of banks could entail economic costs if they prevent banks from achieving economies of scale. This paper presents new estimates of returns to scale for US banks based on nonparametric, local-linear estimation of bank cost, revenue, and profit functions. We report estimates for both 2006 and 2015 to compare returns to scale some 7 years after the financial crisis and 5 years after enactment of the Dodd–Frank Act with returns to scale before the crisis. We find that a high percentage of banks faced increasing returns to scale in cost in both years, including most of the 10 largest bank holding companies. Also, while returns to scale in revenue and profit vary more across banks, we find evidence that the largest four banks operate under increasing returns to scale.

Loss functions for predicted click-through rates in auctions for online advertising


We characterize the optimal loss functions for predicted click-through rates in auctions for online advertising. Whereas standard loss functions such as mean squared error or log likelihood severely penalize large mispredictions while imposing little penalty on smaller mistakes, a loss function reflecting the true economic loss from mispredictions imposes significant penalties for small mispredictions and only slightly larger penalties on large mispredictions. We illustrate that, when the model is misspecified, using such a loss function can improve economic efficiency, but the efficiency gain is likely to be small.

Efficient estimation of Bayesian VARMAs with time-varying coefficients


Empirical work in macroeconometrics has been mostly restricted to using vector autoregressions (VARs), even though there are strong theoretical reasons to consider general vector autoregressive moving averages (VARMAs). A number of articles in the last two decades have conjectured that this is because estimation of VARMAs is perceived to be challenging and proposed various ways to simplify it. Nevertheless, VARMAs continue to be largely dominated by VARs, particularly in terms of developing useful extensions. We address these computational challenges with a Bayesian approach. Specifically, we develop a Gibbs sampler for the basic VARMA, and demonstrate how it can be extended to models with time-varying vector moving average (VMA) coefficients and stochastic volatility. We illustrate the methodology through a macroeconomic forecasting exercise. We show that in a class of models with stochastic volatility, VARMAs produce better density forecasts than VARs, particularly for short forecast horizons.

An endogenously clustered factor approach to international business cycles


Factor models have become useful tools for studying international business cycles. Block factor models can be especially useful as the zero restrictions on the loadings of some factors may provide some economic interpretation of the factors. These models, however, require the econometrician to predefine the blocks, leading to potential misspecification. In Monte Carlo experiments, we show that even a small misspecification can lead to substantial declines in fit. We propose an alternative model in which the blocks are chosen endogenously. The model is estimated in a Bayesian framework using a hierarchical prior, which allows us to incorporate series-level covariates that may influence and explain how the series are grouped. Using international business cycle data, we find our country clusters differ in important ways from those identified by geography alone. In particular, we find that similarities in institutions (e.g., legal systems, language diversity) may be just as important as physical proximity for analyzing business cycle comovements.

Doubly robust uniform confidence band for the conditional average treatment effect function


In this paper, we propose a doubly robust method to estimate the heterogeneity of the average treatment effect with respect to observed covariates of interest. We consider a situation where a large number of covariates are needed for identifying the average treatment effect but the covariates of interest for analyzing heterogeneity are of much lower dimension. Our proposed estimator is doubly robust and avoids the curse of dimensionality. We propose a uniform confidence band that is easy to compute, and we illustrate its usefulness via Monte Carlo experiments and an application to the effects of smoking on birth weights.
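The paper's estimator handles high-dimensional covariates, but the doubly robust (AIPW) form at its core is compact. Here is a sketch with hypothetical fitted values, chosen so the exact outcome models make the correction terms vanish; it is not the authors' implementation.

```python
def aipw_ate(y, d, ps, mu1, mu0):
    """Augmented inverse-propensity-weighted ATE: consistent if either the
    propensity scores ps or the outcome models (mu1, mu0) are correct."""
    n = len(y)
    total = 0.0
    for i in range(n):
        term1 = mu1[i] + d[i] * (y[i] - mu1[i]) / ps[i]
        term0 = mu0[i] + (1 - d[i]) * (y[i] - mu0[i]) / (1.0 - ps[i])
        total += term1 - term0
    return total / n

# hypothetical fitted values: outcome models are exact, true effect is 2
y = [3.0, 2.0, 5.0, 4.0]
d = [1, 0, 1, 0]
ps = [0.5, 0.5, 0.5, 0.5]
mu1 = [3.0, 4.0, 5.0, 6.0]
mu0 = [1.0, 2.0, 3.0, 4.0]
ate = aipw_ate(y, d, ps, mu1, mu0)
```

Averaging this influence-function-style summand within cells of the low-dimensional covariates of interest gives the conditional effects for which the paper builds its uniform confidence band.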

Combining density forecasts using focused scoring rules


We investigate the added value of combining density forecasts focused on a specific region of support. We develop forecast combination schemes that assign weights to individual predictive densities based on the censored likelihood scoring rule and the continuous ranked probability scoring rule (CRPS) and compare these to weighting schemes based on the log score and the equally weighted scheme. We apply this approach in the context of measuring downside risk in equity markets using recently developed volatility models, including HEAVY, realized GARCH and GAS models, applied to daily returns on the S&P 500, DJIA, FTSE and Nikkei indexes from 2000 until 2013. The results show that combined density forecasts based on optimizing the censored likelihood scoring rule significantly outperform pooling based on equal weights and pooling based on optimizing the CRPS or the log scoring rule. In addition, 99% Value-at-Risk estimates improve when weights are based on the censored likelihood scoring rule.
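The combination step can be sketched with the plain log score (the paper's point is that the censored likelihood score, which focuses on the left tail, does better): grid-search the pool weight that maximizes the average log score of a two-model mixture. Densities and data below are illustrative.

```python
import math, random

def norm_pdf(x, mu, sig):
    return math.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * math.sqrt(2.0 * math.pi))

def avg_log_score(obs, dens):
    """Average log predictive score of a density over the observations."""
    return sum(math.log(dens(x)) for x in obs) / len(obs)

def best_pool_weight(obs, dens_a, dens_b, grid=101):
    """Weight on model A that maximizes the average log score of the pool."""
    best_w, best_score = 0.0, -float("inf")
    for i in range(grid):
        w = i / (grid - 1)
        s = sum(math.log(w * dens_a(x) + (1.0 - w) * dens_b(x)) for x in obs) / len(obs)
        if s > best_score:
            best_w, best_score = w, s
    return best_w

rng = random.Random(7)
data = [rng.gauss(0.0, 1.0) for _ in range(500)]   # true density: N(0, 1)
f_good = lambda x: norm_pdf(x, 0.0, 1.0)           # well-specified candidate
f_bad = lambda x: norm_pdf(x, 3.0, 1.0)            # badly located candidate
w = best_pool_weight(data, f_good, f_bad)
```

Replacing the log score in the objective with a censored or threshold-weighted score steers the weights toward models that fit the downside region, which is the paper's proposal.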

Estimating the economic costs of organized crime by synthetic control methods


The economic costs of organized crime have been estimated for the case of southern Italy by Pinotti (Economic Journal, 2015, 125, F203–F232): using synthetic control methods, he finds that, due to the advent of the Italian Mafia in the regions of Apulia and Basilicata, GDP per capita dropped by 16%. Replicating this study in a narrow sense, estimating the same model with the same data but using different software implementations, we observe minor differences stemming from those implementations. By identifying the correct implementation, we find that the loss in GDP per capita due to the presence of the Mafia has been slightly overestimated.
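The synthetic control step can be sketched for the simplest case of two donor regions and a single weight, chosen to minimize pre-treatment mean squared error (toy data, not Pinotti's panel; real applications optimize over many donors subject to nonnegativity and sum-to-one constraints).

```python
def synthetic_control_weight(treated_pre, donor1_pre, donor2_pre, grid=1001):
    """Weight w on donor 1 (1 - w on donor 2) minimizing the pre-treatment
    mean squared error between the treated unit and its synthetic control."""
    best_w, best_mse = 0.0, float("inf")
    for i in range(grid):
        w = i / (grid - 1)
        mse = sum((t - (w * d1 + (1.0 - w) * d2)) ** 2
                  for t, d1, d2 in zip(treated_pre, donor1_pre, donor2_pre))
        mse /= len(treated_pre)
        if mse < best_mse:
            best_w, best_mse = w, mse
    return best_w

# toy pre-treatment data: the treated region tracks 0.3*donor1 + 0.7*donor2
d1 = [10.0, 12.0, 11.0, 13.0]
d2 = [20.0, 18.0, 19.0, 21.0]
treated = [0.3 * a + 0.7 * b for a, b in zip(d1, d2)]
w = synthetic_control_weight(treated, d1, d2)
```

The post-treatment gap between the treated unit and its synthetic counterpart is then read as the treatment effect; the replication shows how sensitive that gap can be to the optimizer used to find the weights.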

Nonparametric methods and local-time-based estimation for dynamic power law distributions


This paper introduces nonparametric econometric methods that characterize general power law distributions under basic stability conditions. These methods extend the literature on power laws in the social sciences in several directions. First, we show that any stationary distribution in a random growth setting is shaped entirely by two factors: the idiosyncratic volatilities and reversion rates (a measure of cross-sectional mean reversion) for different ranks in the distribution. This result is valid regardless of how growth rates and volatilities vary across different economic agents, and hence applies to Gibrat's law and its extensions. Second, we present techniques to estimate these two factors using panel data. Third, we describe how our results imply predictability as higher-ranked processes must on average grow more slowly than lower-ranked processes. We employ our empirical methods using data on commodity prices and show that our techniques accurately describe the empirical distribution of relative commodity prices. We also show that rank-based out-of-sample forecasts of future commodity prices outperform random-walk forecasts at a 1-month horizon.
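A standard empirical check for the kind of power-law (Pareto) behavior the paper generalizes is the rank-size regression with the Gabaix-Ibragimov small-sample correction. This is background to, not a version of, the paper's local-time estimator; the simulated sample is illustrative.

```python
import math, random

def rank_size_slope(sizes):
    """OLS slope of log(rank - 1/2) on log(size); its negative estimates the
    Pareto tail exponent (rank - 1/2 is the Gabaix-Ibragimov correction)."""
    xs = sorted(sizes, reverse=True)
    pts = [(math.log(s), math.log(i + 0.5)) for i, s in enumerate(xs)]  # rank = i + 1
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    cov = sum((p[0] - mx) * (p[1] - my) for p in pts)
    var = sum((p[0] - mx) ** 2 for p in pts)
    return cov / var

# simulate a Pareto sample with tail exponent 1.5 by inverse-CDF sampling
rng = random.Random(3)
alpha = 1.5
sizes = [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(5000)]
tail_exponent = -rank_size_slope(sizes)
```

The estimated exponent recovers the simulated value of 1.5 up to sampling error of order alpha*sqrt(2/n).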

Economies of diversification in the US credit union sector


Significant scale economies have been recently cited to rationalize a dramatic growth in the US retail credit union sector over the past few decades. In this paper, we explore another plausible supply-side explanation for the growth of the industry, namely economies of diversification. We focus on the fact that credit unions differ among themselves in the range of financial services they offer to their members. Since larger credit unions tend to offer a more diversified financial service menu than credit unions of a smaller size, the incentive to grow in size may be fueled not only by present scale economies but also by economies of diversification. This paper provides the first robust estimates of such economies of diversification for the credit union sector. We estimate a flexible semiparametric smooth coefficient quantile panel data model with correlated effects that is capable of accommodating a four-way heterogeneity among credit unions. Our results indicate the presence of non-negligible economies of diversification in the industry. We find that as many as 27–91% (depending on the type and the cost quantile) of diversified credit unions enjoy substantial economies of diversification; the cost of most remaining credit unions is invariant to the scope of services. We also find overwhelming evidence of increasing returns to scale in the industry.

A discrete-choice model for large heterogeneous panels with interactive fixed effects with an application to the determinants of corporate bond issuance


What is the effect of funding costs on the conditional probability of issuing a corporate bond? We study this question in a novel dataset covering 5610 issuances by US firms over the period from 1990 to 2014. Identification of this effect is complicated because of unobserved, common shocks such as the global financial crisis. To account for these shocks, we extend the common correlated effects estimator to settings where outcomes are discrete. Both the asymptotic properties and the small-sample behavior of this estimator are documented. We find that for non-financial firms yields are negatively related to bond issuance but that the effect is larger in the pre-crisis period.

Unobserved selection heterogeneity and the gender wage gap


Selection correction methods usually make assumptions about selection itself. In the case of gender wage gap estimation, those assumptions are especially tenuous because of high female nonparticipation and because selection could be different in different parts of the labor market. This paper proposes an estimator for the wage gap that allows for arbitrary and unobserved heterogeneity in selection. It applies to the subpopulation of “always employed” women, which is similar to men in labor force characteristics. Using CPS data from 1976 to 2005, I show that the gap has narrowed substantially from a −0.521 to a −0.263 log wage point differential for this population.

Anchoring the yield curve using survey expectations


The dynamic behavior of the term structure of interest rates is difficult to replicate with models, and even models with a proven track record of empirical performance have underperformed since the early 2000s. On the other hand, survey expectations can accurately predict yields, but they are typically not available for all maturities and/or forecast horizons. We show how survey expectations can be exploited to improve the accuracy of yield curve forecasts given by a base model. We do so by employing a flexible exponential tilting method that anchors the model forecasts to the survey expectations, and we develop a test to guide the choice of the anchoring points. The method implicitly incorporates into yield curve forecasts any information that survey participants have access to—such as information about the current state of the economy or forward-looking information contained in monetary policy announcements—without the need to explicitly model it. We document that anchoring delivers large and significant gains in forecast accuracy relative to the class of models that are widely adopted by financial and policy institutions for forecasting the term structure of interest rates.
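Exponential tilting itself is simple to sketch: reweight a discrete base forecast distribution by exp(lambda * x) and solve for lambda so that the tilted mean hits the survey anchor. The numbers are hypothetical; the paper tilts full yield curve forecasts and provides a test for choosing the anchoring points.

```python
import math

def tilt_to_mean(points, probs, target, lo=-50.0, hi=50.0, tol=1e-10):
    """Exponentially tilt a discrete distribution so its mean equals target:
    new weights are proportional to p_i * exp(lam * x_i); lam is found by
    bisection, using the fact that the tilted mean is increasing in lam."""
    def tilted(lam):
        w = [p * math.exp(lam * x) for p, x in zip(probs, points)]
        z = sum(w)
        return [wi / z for wi in w]
    def mean(ws):
        return sum(w * x for w, x in zip(ws, points))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean(tilted(mid)) < target:
            lo = mid
        else:
            hi = mid
    return tilted(0.5 * (lo + hi))

# base model forecast centered at 2%; the survey anchor says 3%
points = [0.0, 1.0, 2.0, 3.0, 4.0]
probs = [0.1, 0.2, 0.4, 0.2, 0.1]
anchored = tilt_to_mean(points, probs, target=3.0)
```

Tilting is the minimal (relative-entropy) distortion of the base forecast that matches the anchor, which is why it can absorb survey information without an explicit model of it.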

Structural FECM: Cointegration in large-scale structural FAVAR models


Starting from the dynamic factor model for nonstationary data, we derive the factor-augmented error correction model (FECM) and its moving-average representation. The latter is used for the identification of structural shocks and their propagation mechanisms. We show how to implement classical identification schemes based on long-run restrictions in the case of large panels. The importance of the error correction mechanism for impulse response analysis is analyzed by means of both empirical examples and simulation experiments. Our results show that the bias in estimated impulse responses in a factor-augmented vector autoregressive (FAVAR) model is positively related to the strength of the error correction mechanism and the cross-section dimension of the panel. We observe empirically in a large panel of US data that these features have a substantial effect on the responses of several variables to the identified permanent real (productivity) and monetary policy shocks.

Model selection with estimated factors and idiosyncratic components


This paper provides consistent information criteria for the selection of forecasting models that use a subset of both the idiosyncratic and common factor components of a big dataset. This hybrid model approach has been explored by recent empirical studies to relax the strictness of pure factor-augmented model approximations, but no formal model selection procedures have been developed. The main difference from previous factor-augmented model selection procedures is that we must account for estimation error in the idiosyncratic component as well as the factors. Our main contribution is to show the conditions required for selection consistency of a class of information criteria that reflect this additional source of estimation error. We show that existing factor-augmented model selection criteria are inconsistent in circumstances where N is of larger order than T, where N and T are the cross-section and time series dimensions of the dataset, respectively, and that the standard Bayesian information criterion is inconsistent regardless of the relationship between N and T. We therefore propose a new set of information criteria that guarantee selection consistency in the presence of estimated idiosyncratic components. The properties of these new criteria are explored through a Monte Carlo simulation study. The paper concludes with an empirical application to long-horizon exchange rate forecasting using a recently proposed model with country-specific idiosyncratic components from a panel of global exchange rates.
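A back-of-the-envelope illustration of why the penalty must reference both panel dimensions (this crude comparison is not the paper's criterion; the 1/min(N,T) rate used below is the standard convergence rate of squared estimation error for principal-component factor estimates): consistent selection requires the per-parameter penalty to dominate the estimation error carried by the generated regressors, which BIC's log(T)/T penalty need not do.

```python
import numpy as np

def bic_penalty(T):
    """Per-parameter BIC penalty."""
    return np.log(T) / T

def factor_error_rate(N, T):
    """Order of the squared estimation error in PC factor estimates."""
    return 1.0 / min(N, T)

# With N fixed and T growing, the BIC penalty vanishes while the
# estimation-error term does not, so estimation noise can masquerade
# as predictive content and the criterion over-selects.
for N, T in [(20, 100), (20, 1_000), (20, 10_000), (1_000, 100)]:
    dominates = bic_penalty(T) > factor_error_rate(N, T)
    print(f"N={N:>5} T={T:>6} penalty dominates error: {dominates}")
```

The paper's proposed criteria build penalties that explicitly dominate this extra error source, including the component coming from the estimated idiosyncratic terms, which the rough rate above ignores.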

Efficient estimation of factor models with time and cross-sectional dependence


This paper studies the efficient estimation of large-dimensional factor models with both time and cross-sectional dependence, assuming (N,T) separability of the covariance matrix. The asymptotic distribution of the estimator of the factor and factor-loading space under factor stationarity is derived and compared to that of the principal component (PC) estimator. The paper also considers the case when factors exhibit a unit root. We provide feasible estimators and show in a simulation study that they are more efficient than the PC estimator in finite samples. In an application, the estimation procedure is used to estimate the Lee–Carter model and to forecast life expectancy. The Dutch gender gap is explored and the relationship between life expectancy and the level of economic development is examined in a cross-country comparison.
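For readers unfamiliar with the application, the Lee–Carter model decomposes log mortality rates as log m(x,t) = a_x + b_x k_t. The classical fit is a rank-one SVD of the age-centered log-rate matrix, sketched below on simulated rates; the paper replaces this principal-component step with its more efficient estimator, and the simulated data and normalizations here are illustrative only.

```python
import numpy as np

def lee_carter(log_m):
    """Classical SVD fit of log m_{x,t} = a_x + b_x * k_t,
    normalized so that sum(b) = 1 and sum(k) = 0."""
    a = log_m.mean(axis=1)                        # age-specific level
    U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
    b, k = U[:, 0], s[0] * Vt[0]
    scale = b.sum()                               # also resolves the SVD sign
    return a, b / scale, k * scale

rng = np.random.default_rng(1)
ages, years = 10, 40
b_true = np.full(ages, 1 / ages)
k_true = np.linspace(3.0, -3.0, years)            # steady mortality improvement
log_m = -4.0 + np.outer(b_true, k_true) + 0.01 * rng.standard_normal((ages, years))
a, b, k = lee_carter(log_m)
```

Forecasting life expectancy then amounts to extrapolating the single time index k_t, which is why the quality of the factor estimate matters.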

Identifying relevant and irrelevant variables in sparse factor models


This paper considers factor estimation from heterogeneous data, where some of the variables—the relevant ones—are informative for estimating the factors, and others—the irrelevant ones—are not. We estimate the factor model within a Bayesian framework, specifying a sparse prior distribution for the factor loadings. Based on identified posterior factor loading estimates, we provide alternative methods to identify relevant and irrelevant variables. Simulations show that both types of variables are identified quite accurately. Empirical estimates for a large multi-country GDP dataset and a disaggregated inflation dataset for the USA show that a considerable share of variables is irrelevant for factor estimation.

Real exchange rate persistence and the excess return puzzle: The case of Switzerland versus the US


The PPP puzzle refers to the wide swings of nominal exchange rates around their long-run equilibrium values, whereas the excess return puzzle represents the persistent deviation of the domestic-foreign interest rate differential from the expected change in the nominal exchange rate. Using the I(2) cointegrated VAR model, much of the excess return puzzle disappears when an uncertainty premium in the foreign exchange market, proxied by the persistent PPP gap, is introduced. Self-reinforcing feedback mechanisms seem to cause the persistence in the Swiss-US parity conditions. These results support imperfect-knowledge-based expectations rather than so-called “rational expectations”.
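For reference, the two parity conditions behind the puzzles can be written as follows; the premium term ρ q_t is a stylized rendering of the abstract's uncertainty-premium proxy, not the paper's exact specification.

```latex
% PPP gap (real exchange rate); its persistent swings are the PPP puzzle:
q_t = s_t + p_t^{*} - p_t
% Uncovered interest parity with an uncertainty premium proxied by the gap:
i_t - i_t^{*} = E_t\,\Delta s_{t+1} + \rho\, q_t
% Under textbook UIP (\rho = 0) the interest differential equals expected
% depreciation; its persistent deviation is the excess return puzzle.
```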

Fat tails and spurious estimation of consumption-based asset pricing models


The standard generalized method of moments (GMM) estimation of Euler equations in heterogeneous-agent consumption-based asset pricing models is inconsistent under fat tails because the GMM criterion is asymptotically random. To illustrate this, we generate asset returns and consumption data from an incomplete-market dynamic general equilibrium model that is analytically solvable and exhibits power laws in consumption. Monte Carlo experiments suggest that the standard GMM estimation is inconsistent and susceptible to Type II errors (incorrect nonrejection of false models). Estimating an overidentified model by dividing agents into age cohorts appears to mitigate Type I and II errors.
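As a baseline, the GMM exercise the paper stress-tests looks like the following thin-tailed toy, where estimation behaves well (the lognormal data-generating process, grid search, and parameter values are illustrative assumptions); the paper's point is that this logic breaks down once consumption growth exhibits power-law tails.

```python
import numpy as np

rng = np.random.default_rng(2)
T, beta, gamma_true = 5_000, 0.99, 3.0
g = np.exp(rng.normal(0.02, 0.02, T))                # gross consumption growth
eps = np.exp(0.1 * rng.standard_normal(T) - 0.005)   # pricing error with E[eps] = 1
R = g**gamma_true / beta * eps                       # returns obeying the Euler equation

def gmm_criterion(gamma):
    m = beta * g**(-gamma) * R - 1.0                 # CRRA Euler-equation moment
    return np.mean(m) ** 2                           # just-identified GMM objective

grid = np.linspace(0.5, 6.0, 551)
gamma_hat = grid[np.argmin([gmm_criterion(x) for x in grid])]
print(gamma_hat)                                     # close to gamma_true = 3.0
```

Under fat tails the sample moment above stops converging to its population counterpart, so the minimizer of the criterion becomes asymptotically random, which is the inconsistency the Monte Carlo experiments document.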

Dynamic spatial autoregressive models with autoregressive and heteroskedastic disturbances


We propose a new class of models specifically tailored for spatiotemporal data analysis. To this end, we generalize the spatial autoregressive model with autoregressive and heteroskedastic disturbances, that is, SARAR(1, 1), by exploiting recent advancements in score-driven (SD) models typically used in time series econometrics. In particular, we allow for time-varying spatial autoregressive coefficients as well as time-varying regressor coefficients and cross-sectional standard deviations. We report an extensive Monte Carlo simulation study to investigate the finite-sample properties of the maximum likelihood estimator for the new class of models as well as its flexibility in explaining a misspecified dynamic spatial dependence process. In an application to portfolio optimization, the proposed class of models is found to be economically preferred by rational investors.
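The score-driven mechanics can be illustrated in one dimension: the time-varying parameter is updated in the direction of the scaled score of the conditional likelihood, f_{t+1} = ω + α s_t + β f_t. The sketch below is a stylized Gaussian-mean filter with made-up parameter values, not the paper's SARAR specification.

```python
import numpy as np

def sd_filter(y, omega, alpha, beta):
    """Score-driven (GAS) recursion for a time-varying Gaussian mean.
    With unit variance the Fisher-scaled score is simply y_t - f_t."""
    f = np.empty(len(y))
    f[0] = omega / (1.0 - beta)               # start at the unconditional level
    for t in range(len(y) - 1):
        score = y[t] - f[t]                   # scaled score of N(y_t | f_t, 1)
        f[t + 1] = omega + alpha * score + beta * f[t]
    return f

rng = np.random.default_rng(3)
path = np.sin(np.linspace(0.0, 6.0 * np.pi, 600))   # slowly moving true parameter
y = path + 0.3 * rng.standard_normal(600)
f = sd_filter(y, omega=0.0, alpha=0.3, beta=0.95)   # tracks `path` from noisy y
```

The same recursion, driven by the score of the spatial likelihood, is what lets the model's spatial autoregressive coefficients and cross-sectional standard deviations evolve over time.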

The cycle of violence in the Second Intifada: Causality in nonlinear vector autoregressive models


We contest Jaeger and Paserman's claim (Jaeger and Paserman, 2008, “The cycle of violence? An empirical analysis of fatalities in the Palestinian–Israeli conflict,” American Economic Review 98(4): 1591–1604) that Palestinians did not react to Israeli aggression during the Second Intifada. We address the differences between the two sides in the timing and intensity of violence, estimate nonlinear vector autoregression models that are suitable when the linear vector autoregression innovations are not normally distributed, identify causal effects rather than Granger causality using the principle of weak exogeneity, and introduce the “kill-ratio” as a concept for testing hypotheses about the cycle of violence. The Israelis killed 1.28 Palestinians for every killed Israeli, whereas the Palestinians killed only 0.09 Israelis for every killed Palestinian.
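The kill-ratio itself is simple arithmetic on fatality counts, as in the sketch below; the daily figures are made up for illustration, and the paper's contribution is the formal hypothesis tests built on this quantity within the nonlinear VAR framework.

```python
# Hypothetical daily fatality counts (illustrative only).
palestinians_killed = [3, 0, 5, 1, 2]   # fatalities caused by Israeli actions
israelis_killed     = [1, 0, 2, 0, 1]   # fatalities caused by Palestinian actions

# Kill-ratio: fatalities inflicted per fatality suffered.
israeli_kill_ratio = sum(palestinians_killed) / sum(israelis_killed)
palestinian_kill_ratio = sum(israelis_killed) / sum(palestinians_killed)
print(israeli_kill_ratio, round(palestinian_kill_ratio, 2))   # → 2.75 0.36
```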