Renovating the Search Experience of Art Image Databases
With the emergence of the computer, the digital image has become one of the most prevalent visual mediums of the 21st century. This paper analyzes the limitations of current approaches to interacting with digital image databases in art historical research and explores how emerging computer vision technologies can enrich the ways users interact with art image databases. Whereas current image databases rely primarily on manually defined metadata and textual descriptions to associate art images, this thesis documents a digital project that uses deep neural networks to extract images' visual features and connect art images through those features. Although it offers a new approach to the problem, this digital project is not intended to replace the metadata structures and text-based search of existing image database systems; metadata and text-based search have developed over time to help people manage data and navigate the digital world in the era of big data. Instead, this digital project overlays a visually driven search path on the existing database structure in order to provide a more diverse search environment.
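The visual search path the abstract describes reduces to two steps: extract a fixed-length feature vector from each image, then rank database images by similarity to a query's vector. The sketch below illustrates that indexing logic with a deliberately simple stand-in extractor (a color histogram); it is an assumption for illustration only, since the thesis itself uses deep neural network features, but any fixed-length embedding can be dropped into the same pipeline.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Toy visual-feature extractor: a normalized per-channel color
    histogram. A real system would use a deep network's embedding
    (e.g. a CNN's penultimate layer), but the ranking logic below is
    the same for any fixed-length, L2-normalized feature vector."""
    # image: H x W x 3 array of uint8 pixel values
    hist = np.concatenate([
        np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)

def rank_by_similarity(query_vec, index_vecs):
    """Return database-image indices sorted by cosine similarity to
    the query (vectors are L2-normalized, so the dot product equals
    cosine similarity)."""
    sims = index_vecs @ query_vec
    return np.argsort(-sims)

# Hypothetical three-image "database" of solid-color images.
red = np.zeros((16, 16, 3), dtype=np.uint8); red[..., 0] = 255
blue = np.zeros((16, 16, 3), dtype=np.uint8); blue[..., 2] = 255
dark_red = np.zeros((16, 16, 3), dtype=np.uint8); dark_red[..., 0] = 128

index = np.stack([color_histogram(im) for im in [red, blue, dark_red]])
ranking = rank_by_similarity(color_histogram(red), index)
print(list(ranking))  # → [0, 2, 1]: red matches itself, then dark red
```

Because this layer only needs feature vectors and a similarity ranking, it can sit alongside the existing metadata records rather than replacing them, which is the overlay design the abstract proposes.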
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License.
Rights for Collection: Masters Theses