Robust Forecast Superiority Testing with an Application to Assessing Pools of Expert Forecasters

We develop a forecast superiority testing methodology that is robust to the choice of loss function. Following Jin, Corradi, and Swanson (JCS, 2017), we rely on a mapping between forecast evaluation under generic loss and stochastic dominance principles. However, unlike the JCS tests, which are not uniformly valid and have correct asymptotic size only under the least favorable case, our tests are uniformly asymptotically valid and non-conservative. These properties are derived by first establishing the uniform convergence (over the error support) of HAC variance estimators. Monte Carlo experiments indicate good finite-sample performance of the new tests, and an empirical illustration suggests that prior forecast accuracy matters in the Survey of Professional Forecasters. Namely, for our longest forecast horizon (4 quarters ahead), selecting pools of expert forecasters based on prior accuracy yields ensemble forecasts that are superior to simple averages and medians formed from the entire panel of experts.
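The pooling idea in the empirical illustration can be sketched in a few lines: rank experts by prior (in-sample) accuracy, keep the best k, and compare the pooled ensemble against the simple average and median of the full panel on a held-out window. This is a minimal illustrative simulation with made-up data and parameters (panel size, window split, and k are all assumptions), not the paper's testing procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the paper): 20 experts forecast a series over
# 60 quarters; experts differ in skill via their individual noise levels.
T, N = 60, 20
target = rng.normal(size=T)
skill = rng.uniform(0.2, 2.0, size=N)                 # per-expert noise std
forecasts = target[:, None] + rng.normal(scale=skill, size=(T, N))

# Training window measures prior accuracy; evaluation window compares
# the ensembles out of sample.
split = 40
train_mse = ((forecasts[:split] - target[:split, None]) ** 2).mean(axis=0)

# Pool: the k experts with the best prior accuracy (lowest training MSE).
k = 5
pool = np.argsort(train_mse)[:k]

def mse(pred, actual):
    return float(((pred - actual) ** 2).mean())

eval_f, eval_y = forecasts[split:], target[split:]
results = {
    "simple average (all)": mse(eval_f.mean(axis=1), eval_y),
    "median (all)":         mse(np.median(eval_f, axis=1), eval_y),
    "average (top-k pool)": mse(eval_f[:, pool].mean(axis=1), eval_y),
}
for name, val in results.items():
    print(f"{name}: {val:.4f}")
```

Whether the pooled average actually dominates is the question the paper's uniformly valid tests are built to answer; a raw MSE comparison like this one ignores sampling uncertainty and the choice of loss function.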

Identifier
DOI https://doi.org/10.15456/jae.2023004.1604544168
Metadata Access https://www.da-ra.de/oaip/oai?verb=GetRecord&metadataPrefix=oai_dc&identifier=oai:oai.da-ra.de:776703
Provenance
Creator Corradi, Valentina; Jin, Sainan; Swanson, Norman R.
Publisher ZBW - Leibniz Informationszentrum Wirtschaft
Publication Year 2023
Rights Creative Commons Attribution 4.0 (CC-BY)
OpenAccess true
Contact ZBW - Leibniz Informationszentrum Wirtschaft
Representation
Language English
Resource Type Collection
Discipline Economics