dc.contributor.advisor Winkler, Robert L en_US
dc.contributor.author Jose, Victor Richmond en_US
dc.date.accessioned 2009-05-01T18:43:29Z
dc.date.available 2011-07-26T04:30:03Z
dc.date.issued 2009 en_US
dc.identifier.uri http://hdl.handle.net/10161/1270
dc.description Dissertation en_US
dc.description.abstract Probability forecasts play an important role in many decision and risk analysis applications. Research and practice over the years have shown that the shift towards distributional forecasts provides a more accurate and appropriate means of capturing risk in models for these applications. This means that mathematical tools for analyzing the quality of these forecasts, whether they come from experts, models, or data, become important to the decision maker. In this regard, strictly proper scoring rules have been widely studied because of their ability to encourage assessors to provide truthful reports. This dissertation contributes to the scoring rule literature in two main areas of assessment: probability forecasts and quantile assessments.

In the area of probability assessment, scoring rules typically studied in the literature, and commonly used in practice, evaluate probability assessments relative to a default uniform measure. In many applications, the uniform baseline used to represent some notion of ignorance is inappropriate. In this dissertation, we generalize the power and pseudospherical families of scoring rules, two large parametric families of commonly used scoring rules, by incorporating the notion of a non-uniform baseline distribution for both the discrete and continuous cases. With an appropriate normalization and choice of parameters, we show that these new families of scoring rules relate to various well-known divergence measures from information theory and to well-founded decision models when framed in an expected utility maximization context.

In applications where the probability space has an ordinal ranking between states, an important property often considered is sensitivity to distance. Scoring rules with this property give higher scores to assessments that allocate higher probability mass to events “closer” to the one that occurs, based on some notion of distance. In this setting, we provide an approach that generates new sensitive-to-distance strictly proper scoring rules from well-known strictly proper binary scoring rules. Through the use of weighted scoring rules, we also show that these new scores can incorporate a specified baseline distribution, in addition to being strictly proper and sensitive to distance.

In the inverse problem of quantile assessment, scoring rules have not yet been well studied and well developed. We examine the differences between scoring rules for probability and quantile assessments, and demonstrate why the tools developed for probability assessments no longer encourage truthful reporting when used for quantile assessments. In addition, we shed light on new properties and characterizations for some of these rules that could guide decision makers trying to choose an appropriate scoring rule. en_US
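For concreteness, the standard (uniform-baseline) power and pseudospherical families named in the abstract are well known from the scoring rule literature. For a reported distribution $p = (p_1, \dots, p_n)$ and realized event $i$, they can be written as

\[ S^{\mathrm{pow}}_\beta(p, i) = \frac{\beta}{\beta - 1}\, p_i^{\beta - 1} - \sum_j p_j^{\beta}, \qquad S^{\mathrm{sph}}_\beta(p, i) = \frac{p_i^{\beta - 1}}{\bigl(\sum_j p_j^{\beta}\bigr)^{(\beta - 1)/\beta}}, \qquad \beta > 1, \]

where $\beta = 2$ gives the quadratic (Brier) and spherical scores, and the limit $\beta \to 1$ recovers the logarithmic score in both families. As a sketch of the baseline idea described above, assuming a baseline distribution $q$ with $q_j > 0$ and working only up to affine normalization (the dissertation's exact form may differ), the weighted power score replaces each $p_j$ with the likelihood ratio $p_j / q_j$, weighted by $q_j$:

\[ S^{\mathrm{pow}}_\beta(p; q, i) = \frac{\beta}{\beta - 1}\Bigl(\frac{p_i}{q_i}\Bigr)^{\beta - 1} - \sum_j q_j \Bigl(\frac{p_j}{q_j}\Bigr)^{\beta}, \]

which reduces to an affine transformation of the uniform-baseline rule when $q$ is uniform.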
dc.format.extent 3235664 bytes
dc.format.mimetype application/pdf
dc.language.iso en_US
dc.subject Business Administration, General en_US
dc.subject decision analysis en_US
dc.subject entropy en_US
dc.subject forecast verification en_US
dc.subject probability elicitation en_US
dc.subject quantile assessment en_US
dc.subject scoring rules en_US
dc.title The Verification of Probabilistic Forecasts in Decision and Risk Analysis en_US
dc.type Dissertation en_US
dc.department Business Administration en_US
duke.embargo.months 24 en_US
