Annotation of phenotypes using ontologies: a gold standard for the training and evaluation of natural language processing systems.

dc.contributor.author: Dahdul, Wasila
dc.contributor.author: Manda, Prashanti
dc.contributor.author: Cui, Hong
dc.contributor.author: Balhoff, James P
dc.contributor.author: Dececchi, T Alexander
dc.contributor.author: Ibrahim, Nizar
dc.contributor.author: Lapp, Hilmar
dc.contributor.author: Vision, Todd
dc.contributor.author: Mabee, Paula M
dc.date.accessioned: 2023-02-07T20:31:37Z
dc.date.available: 2023-02-07T20:31:37Z
dc.date.issued: 2018-01
dc.date.updated: 2023-02-07T20:31:31Z

dc.description.abstract

Natural language descriptions of organismal phenotypes, a principal object of study in biology, are abundant in the biological literature. Expressing these phenotypes as logical statements using ontologies would enable large-scale analysis of phenotypic information from diverse systems. However, considerable human effort is required to make these phenotype descriptions amenable to machine reasoning. Natural language processing tools have been developed to facilitate this task, and the training and evaluation of these tools depend on the availability of high-quality, manually annotated gold standard data sets. We describe the development of an expert-curated gold standard data set of annotated phenotypes for evolutionary biology. The gold standard was developed for the curation of complex comparative phenotypes for the Phenoscape project. It was created by consensus among three curators and consists of entity-quality expressions of varying complexity. We use the gold standard to evaluate annotations created by human curators and those generated by the Semantic CharaParser tool. Using four annotation accuracy metrics that can account for any level of relationship between terms from two phenotype annotations, we found that machine-human consistency, or similarity, was significantly lower than inter-curator (human-human) consistency. Surprisingly, allowing curators access to external information did not significantly increase the similarity of their annotations to the gold standard or have a significant effect on inter-curator consistency. We found that the similarity of machine annotations to the gold standard increased after new relevant ontology terms had been added. Evaluation by the original authors of the character descriptions indicated that the gold standard annotations came closer to representing their intended meaning than did either the curator or machine annotations. These findings point toward ways to better design software to augment human curators, and the gold standard corpus will enable the training and assessment of new tools to improve phenotype annotation accuracy at scale.

dc.identifier: 5255130
dc.identifier.issn: 1758-0463

dc.identifier.uri: https://hdl.handle.net/10161/26579
dc.language: eng
dc.publisher: Oxford University Press (OUP)
dc.relation.ispartof: Database : the journal of biological databases and curation
dc.relation.isversionof: 10.1093/database/bay110
dc.subject: Humans
dc.subject: Phenotype
dc.subject: Natural Language Processing
dc.subject: Data Mining
dc.subject: Gene Ontology
dc.subject: Data Curation
dc.title: Annotation of phenotypes using ontologies: a gold standard for the training and evaluation of natural language processing systems.
dc.type: Journal article
duke.contributor.orcid: Lapp, Hilmar|0000-0001-9107-0714
pubs.organisational-group: Duke
pubs.organisational-group: Staff
pubs.publication-status: Published
pubs.volume: 2018

Files

Original bundle

Name: Annotation of phenotypes using ontologies a gold standard for the training and evaluation of natural language processing sys.pdf
Size: 1.03 MB
Format: Adobe Portable Document Format
Description: Published version