This paper examines the impact of data rescaling and measurement error on scoring rules for distribution forecasts. First, I show that all commonly used scoring rules for distribution forecasts are robust to rescaling the data. Second, I show that the forecast ranking based on the continuous ranked probability score is less sensitive to gross measurement error than the ranking based on the log score. These theoretical results are complemented by two simulation studies, one aligned with frequently revised quarterly US GDP growth data and one aligned with financial market volatility, and by an empirical application forecasting realized variances of S&P 100 constituents.
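The contrast between the two scoring rules can be illustrated numerically. The sketch below (not from the paper; a hypothetical Gaussian forecast N(0, 1) is assumed for illustration) evaluates the negative log score and the closed-form Gaussian CRPS at increasingly extreme observations, mimicking a gross measurement error: the log score grows quadratically in the outlier, while the CRPS grows only linearly.

```python
import math

def crps_gauss(y, mu=0.0, sigma=1.0):
    """Closed-form CRPS for a Gaussian forecast N(mu, sigma^2); lower is better."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

def log_score_gauss(y, mu=0.0, sigma=1.0):
    """Negative log predictive density of N(mu, sigma^2); lower is better."""
    z = (y - mu) / sigma
    return 0.5 * math.log(2 * math.pi * sigma * sigma) + 0.5 * z * z

# y = 10 stands in for a grossly mismeasured observation
for y in (1.0, 5.0, 10.0):
    print(f"y={y:5.1f}  log score={log_score_gauss(y):7.2f}  CRPS={crps_gauss(y):6.2f}")
```

Under these assumptions the log score at y = 10 is roughly 50, whereas the CRPS stays below 10, consistent with the claim that CRPS-based rankings react less strongly to a single contaminated observation.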