The Verification of Probabilistic Forecasts in Decision and Risk Analysis

dc.contributor.advisor

Winkler, Robert L

dc.contributor.author

Jose, Victor Richmond

dc.date.accessioned

2009-05-01T18:43:29Z

dc.date.available

2011-07-26T04:30:03Z

dc.date.issued

2009

dc.department

Business Administration

dc.description.abstract

Probability forecasts play an important role in many decision and risk analysis applications. Research and practice over the years have shown that the shift toward distributional forecasts provides a more accurate and appropriate means of capturing risk in models for these applications. As a result, mathematical tools for analyzing the quality of these forecasts, whether they come from experts, models, or data, become important to the decision maker. In this regard, strictly proper scoring rules have been widely studied because of their ability to encourage assessors to provide truthful reports. This dissertation contributes to the scoring rule literature in two main areas of assessment: probability forecasts and quantile assessments.
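Strict propriety can be illustrated with the familiar quadratic (Brier) score. The sketch below is standard textbook material, not taken from the dissertation; it checks numerically that a forecaster's expected score is maximized only by reporting true beliefs.

```python
import numpy as np

def quadratic_score(r, i):
    """Quadratic (Brier) score for reported distribution r when state i occurs:
    S(r, i) = 2*r[i] - sum_j r[j]^2. Strictly proper: the expected score is
    uniquely maximized by reporting one's true probabilities."""
    r = np.asarray(r, dtype=float)
    return 2.0 * r[i] - np.sum(r ** 2)

def expected_score(p, r):
    """Expected score of report r when the true distribution is p."""
    return sum(p[i] * quadratic_score(r, i) for i in range(len(p)))

p = np.array([0.5, 0.3, 0.2])          # forecaster's true beliefs
truthful = expected_score(p, p)        # equals sum(p**2) for this rule
# Any deviation from the truth strictly lowers the expected score:
for shaded in ([0.6, 0.25, 0.15], [0.4, 0.4, 0.2], [1/3, 1/3, 1/3]):
    assert expected_score(p, np.array(shaded)) < truthful
```

For the quadratic score the gap between the truthful and a shaded report works out to exactly the squared distance between p and r, so any deviation costs the forecaster in expectation.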

In the area of probability assessment, the scoring rules typically studied in the literature, and commonly used in practice, evaluate probability assessments relative to a default uniform measure. In many applications, however, a uniform baseline representing some notion of ignorance is inappropriate. In this dissertation, we generalize the power and pseudospherical families of scoring rules, two large parametric families of commonly used scoring rules, by incorporating a non-uniform baseline distribution in both the discrete and continuous cases. With an appropriate normalization and choice of parameters, we show that these new families of scoring rules relate to various well-known divergence measures from information theory and to well-founded decision models when framed in an expected utility maximization context.
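As an illustrative sketch of a power-family score taken relative to a baseline distribution q (the normalization below is a common textbook choice and may differ from the one developed in the dissertation), reports are judged against q rather than the uniform measure, and truth-telling remains optimal:

```python
import numpy as np

def weighted_power_score(r, i, q, beta=2.0):
    """Power-family score relative to a baseline distribution q (beta > 1).
    With q uniform this reduces, up to positive scaling, to the ordinary
    power score; beta = 2 with uniform q recovers the quadratic score.
    Illustrative normalization only -- not necessarily the dissertation's."""
    r, q = np.asarray(r, float), np.asarray(q, float)
    return (beta / (beta - 1.0)) * (r[i] / q[i]) ** (beta - 1.0) \
           - np.sum(q * (r / q) ** beta)

def expected_score(p, r, q, beta=2.0):
    return sum(p[i] * weighted_power_score(r, i, q, beta)
               for i in range(len(p)))

p = np.array([0.6, 0.3, 0.1])           # true beliefs
q = np.array([0.2, 0.5, 0.3])           # non-uniform baseline
rng = np.random.default_rng(0)
for beta in (2.0, 1.5):
    truthful = expected_score(p, p, q, beta)
    for _ in range(200):                 # random alternative reports
        r = rng.dirichlet(np.ones(3))
        assert expected_score(p, r, q, beta) <= truthful + 1e-12
```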

In applications where the probability space has an ordinal ranking between states, an important property often considered is sensitivity to distance. Scoring rules with this property give higher scores to assessments that allocate more probability mass to events “closer,” under some notion of distance, to the one that occurs. In this setting, we provide an approach for generating new sensitive-to-distance strictly proper scoring rules from well-known strictly proper binary scoring rules. Through the use of weighted scoring rules, we also show that these new scores can incorporate a specified baseline distribution, in addition to being strictly proper and sensitive to distance.
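The classic instance of this kind of construction is the ranked probability score, which sums binary quadratic scores over the cumulative events of an ordered state space. A minimal sketch (standard material, used here only to illustrate sensitivity to distance):

```python
import numpy as np

def rps(r, i):
    """(Negatively oriented) ranked probability score: a sum of binary
    quadratic scores applied to the cumulative events {state <= k}.
    Lower is better; small values mean the forecast put its mass close
    to the observed state i on the ordinal scale."""
    F = np.cumsum(np.asarray(r, float))          # forecast CDF
    O = (np.arange(len(r)) >= i).astype(float)   # outcome "CDF" (step at i)
    return np.sum((F - O) ** 2)

# Outcome is state 2 (0-indexed) on a 5-state ordinal scale.
near = [0.0, 0.2, 0.6, 0.2, 0.0]   # leftover mass adjacent to the outcome
far  = [0.2, 0.0, 0.6, 0.0, 0.2]   # same mass on state 2, rest far away
assert rps(near, 2) < rps(far, 2)
```

Both forecasts place identical mass on the observed state (and have identical sums of squares, so an ordinary quadratic score cannot distinguish them), yet the ranked probability score rewards the one whose remaining mass sits closer to the outcome.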

In the inverse problem of quantile assessment, scoring rules have not yet been as thoroughly studied or developed. We examine the differences between scoring rules for probability and quantile assessments, and demonstrate why the tools developed for probability assessments no longer encourage truthful reporting when used for quantile assessments. In addition, we shed light on new properties and characterizations of some of these rules that could guide decision makers trying to choose an appropriate scoring rule.
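A standard example of a loss whose associated score is proper for quantile assessment (again textbook material, not the dissertation's own construction) is the check, or pinball, loss: its expectation is minimized at the true quantile, so reporting one's genuine quantile is optimal.

```python
import numpy as np

def pinball_loss(q_hat, x, alpha):
    """Check (pinball) loss for a reported alpha-quantile q_hat and data x.
    Its expectation is minimized by the true alpha-quantile, making the
    corresponding (negatively oriented) score proper for quantiles."""
    return (alpha - (x <= q_hat)) * (x - q_hat)

rng = np.random.default_rng(1)
x = rng.normal(size=100_000)            # standard normal sample
alpha = 0.9
grid = np.linspace(-2.0, 3.0, 501)
avg_loss = [pinball_loss(g, x, alpha).mean() for g in grid]
best = grid[int(np.argmin(avg_loss))]
# The minimizer lands near the true 0.9 quantile of N(0,1), about 1.2816.
assert abs(best - 1.2816) < 0.05
```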

dc.format.extent

3235664 bytes

dc.format.mimetype

application/pdf

dc.identifier.uri

https://hdl.handle.net/10161/1270

dc.language.iso

en_US

dc.subject

Business Administration, General

dc.subject

decision analysis

dc.subject

entropy

dc.subject

forecast verification

dc.subject

probability elicitation

dc.subject

quantile assessment

dc.subject

scoring rules

dc.title

The Verification of Probabilistic Forecasts in Decision and Risk Analysis

dc.type

Dissertation

duke.embargo.months

24

Files

Original bundle
Name: D_Jose_Victor Richmond_a_200904.pdf
Size: 3.09 MB
Format: Adobe Portable Document Format