Browsing by Subject "Fairness"
Item Open Access: ADVANCING VISION INTELLIGENCE THROUGH THE DEVELOPMENT OF EFFICIENCY, INTERPRETABILITY AND FAIRNESS IN DEEP LEARNING MODELS (2024), Kong, Fanjie
Deep learning has demonstrated remarkable success in developing vision intelligence across a variety of application domains, including autonomous driving, facial recognition, and medical image analysis. However, developing such vision systems poses significant challenges, particularly in ensuring efficiency, interpretability, and fairness. Efficiency requires a model to use the least possible computational resources while preserving performance relative to more computationally demanding alternatives, which is essential for the practical deployment of large-scale models in real-time applications. Interpretability demands that a model align with the domain-specific knowledge of the task it addresses while supporting case-based reasoning. This characteristic is especially crucial in high-stakes areas such as healthcare, criminal justice, and financial investment. Fairness ensures that computer vision models do not perpetuate or exacerbate societal biases in downstream applications such as web image search and text-guided image generation. In this dissertation, I discuss my contributions to advancing vision intelligence with respect to efficiency, interpretability, and fairness in computer vision models.
The first part of this dissertation focuses on designing computer vision models that efficiently process very large images. We propose a novel CNN architecture, termed the Zoom-In Network, that leverages a hierarchical attention sampling mechanism to select important regions of an image to process. By avoiding processing of the entire image, this approach yields outstanding memory efficiency while maintaining classification accuracy on various tiny-object image classification datasets.
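To make the mechanism concrete, here is a minimal PyTorch sketch of two-level attention sampling, a simplified illustration rather than the dissertation's exact architecture: a cheap scorer ranks coarse tiles on a downsampled view, and only the top-k tiles are encoded at full resolution. All module and parameter names here are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLevelZoomIn(nn.Module):
    """Hypothetical two-level attention-sampling classifier: score
    coarse tiles on a downsampled view, then encode only the top-k
    tiles at full resolution (simplified sketch; hard top-k selection
    here is not differentiable through the sampler)."""

    def __init__(self, num_classes, tile=64, k=4):
        super().__init__()
        self.tile, self.k = tile, k
        self.scorer = nn.Conv2d(3, 1, kernel_size=3, padding=1)   # cheap tile scorer
        self.encoder = nn.Sequential(                             # runs on selected tiles only
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):  # x: (B, 3, H, W), H and W multiples of `tile`
        B, _, H, W = x.shape
        gh, gw = H // self.tile, W // self.tile
        # Attention over the coarse tile grid: one logit per tile.
        low = F.interpolate(x, size=(gh, gw), mode="bilinear", align_corners=False)
        attn = self.scorer(low).flatten(1).softmax(dim=1)          # (B, gh*gw)
        topk = attn.topk(self.k, dim=1).indices                    # (B, k)
        feats = []
        for b in range(B):
            tiles = []
            for idx in topk[b]:                                    # crop selected tiles
                i = int(idx)
                r, c = (i // gw) * self.tile, (i % gw) * self.tile
                tiles.append(x[b:b + 1, :, r:r + self.tile, c:c + self.tile])
            feats.append(self.encoder(torch.cat(tiles)).mean(0))   # average tile features
        return self.head(torch.stack(feats))                       # (B, num_classes)
```

Memory scales with the k full-resolution tiles rather than with the whole image, which is the source of the efficiency gain.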
The second part of this dissertation discusses how to build a post-hoc interpretation method for deep learning models that distills insights from their predictions. We propose a novel image- and text-based insight-generation framework built on attributions from deep neural networks. We test our approach on an industrial dataset and demonstrate that our method outperforms competing methods.
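As an illustration of an attribution-to-insight pipeline, here is a small sketch built on gradient-times-input attribution, a common post-hoc baseline; the dissertation's exact attribution method may differ, and the helper names below are placeholders of ours.

```python
import torch

def input_gradient_attribution(model, x, target_class):
    """Gradient-times-input attribution: a signed estimate of how much
    each input feature pushed the model toward `target_class`."""
    x = x.clone().requires_grad_(True)
    model(x)[:, target_class].sum().backward()
    return (x.grad * x).detach()                 # same shape as x

def top_insights(attributions, feature_names, k=3):
    """Rank per-feature attributions and phrase them as short insights
    (assumes flat, named features, e.g. tabular columns or text tokens)."""
    flat = attributions.mean(0)                  # average over the batch
    order = flat.abs().argsort(descending=True)[:k]
    return [
        f"{feature_names[int(i)]} "
        f"{'supports' if flat[i].item() > 0 else 'opposes'} the prediction "
        f"(weight {flat[i].item():+.3f})"
        for i in order
    ]
```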
Finally, we study fairness in large vision-language models. More specifically, we examine gender and racial bias in text-based image retrieval for neutral text queries. To address bias at test time, we propose a post-hoc bias mitigation method that actively balances the demographic groups represented in image search results. Experiments on multiple datasets show that our method significantly reduces bias while maintaining satisfactory retrieval accuracy.
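A simple way to realize such post-hoc balancing is greedy re-ranking of a relevance-sorted candidate list: at each position, take the most relevant remaining item from whichever demographic group is currently least represented. The sketch below is a generic baseline of this kind, not necessarily the paper's exact algorithm, and the names are illustrative.

```python
from collections import Counter

def balanced_rerank(candidates, k):
    """Greedy demographic balancing for top-k retrieval.
    `candidates`: list of (item_id, relevance, group) tuples,
    sorted by relevance descending."""
    remaining = list(candidates)
    selected, counts = [], Counter()
    while remaining and len(selected) < k:
        # Lowest current count among groups still available.
        min_count = min(counts[g] for _, _, g in remaining)
        # Most relevant candidate from an under-represented group.
        pick = next(c for c in remaining if counts[c[2]] <= min_count)
        selected.append(pick)
        counts[pick[2]] += 1
        remaining.remove(pick)
    return selected

results = [("img1", 0.98, "male"), ("img2", 0.95, "male"),
           ("img3", 0.90, "female"), ("img4", 0.88, "male")]
print(balanced_rerank(results, k=2))   # picks img1 (male), then img3 (female)
```

The trade-off is explicit: relevance order is preserved within each group, so accuracy degrades gracefully as the balance constraint tightens.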
My research on enhancing vision intelligence through developments in efficiency, interpretability, and fairness has undergone rigorous validation on publicly available benchmarks and has been recognized at leading peer-reviewed machine learning conferences. This dissertation has sparked interest within the AI community, emphasizing the importance of improving computer vision models along these three critical dimensions: efficiency, interpretability, and fairness.
Item Open Access: Algorithms for Public Decision Making (2019), Fain, Brandon Thomas
In public decision making, we are confronted with the problem of aggregating the conflicting preferences of many individuals about outcomes that affect the group. Examples of public decision making include allocating shared public resources and social choice or voting. We study these problems from the perspective of an algorithm designer who takes the preferences of the individuals and the constraints of the decision-making problem as input and efficiently computes a solution with provable guarantees with respect to fairness and welfare, as defined on individual preferences.
Concerning fairness, we develop the theory of group fairness as core or proportionality in the allocation of public goods. The core is a stability-based notion adapted from cooperative game theory, and we show extensive algorithmic connections between the core solution concept and optimizing the Nash social welfare, the geometric mean of utilities. We explore applications in public budgeting, multi-issue voting, memory sharing, and fair clustering in unsupervised machine learning.
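For reference, the two central objects can be written as follows; these are standard formulations (the budget symbol B and the exact blocking condition below are one common variant, not necessarily the thesis's precise definitions):

```latex
% Nash social welfare: the geometric mean of utilities; maximizing it
% is equivalent to maximizing the sum of log-utilities.
\[
  \mathrm{NSW}(x) = \Bigl(\prod_{i=1}^{n} u_i(x)\Bigr)^{1/n},
  \qquad
  \arg\max_x \mathrm{NSW}(x) = \arg\max_x \sum_{i=1}^{n} \log u_i(x).
\]
% Core for public goods with budget B: no coalition S, spending only its
% proportional share (|S|/n)B, can afford an outcome that every one of
% its members strictly prefers.
\[
  \nexists\, S \subseteq N,\; y \text{ with } \mathrm{cost}(y) \le \tfrac{|S|}{n}B
  \;\text{ such that }\; u_i(y) > u_i(x) \;\; \forall i \in S.
\]
```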
Regarding welfare, we extend recent work in implicit utilitarian social choice to choose approximately optimal public outcomes with respect to underlying cardinal valuations using limited ordinal information. We propose simple randomized algorithms with strong utilitarian social cost guarantees when the space of outcomes is metric. We also study many other desirable properties of our algorithms, including approximating the second moment of utilitarian social cost. We explore applications in voting for public projects, preference elicitation, and deliberation.
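As a concrete example of a randomized ordinal rule with a constant distortion guarantee in metric spaces, consider random dictatorship, a classic baseline known to achieve distortion at most 3 - 2/n for n voters; the dissertation's own mechanisms may differ.

```python
import random

def random_dictatorship(top_choices):
    """Pick a voter uniformly at random and return that voter's
    top-ranked outcome. Uses only ordinal (ranking) information,
    yet bounds utilitarian social cost in any metric space."""
    return random.choice(top_choices)

votes = ["park", "library", "park", "pool"]   # each voter's favorite outcome
winner = random_dictatorship(votes)           # "park" with probability 1/2
```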
Item Open Access: Minimax Fairness in Machine Learning (2022), Martinez Gil, Natalia Lucienne
The notion of fairness in machine learning has gained significant popularity in recent decades, in part due to the large number of decision-making models deployed in real-world applications that have exhibited unwanted behavior. In this work, we analyze fairness in machine learning from a multi-objective optimization perspective, where the goal is to learn a model that achieves good performance across different groups or demographics. In particular, we analyze how to achieve models that are efficient in the Pareto sense, providing the best performance for the worst-off group (i.e., minimax solutions). We study how to achieve minimax Pareto-fair solutions when sensitive groups are available at training time, and also when the demographics are completely unknown. We provide experimental results showing how the discussed techniques for achieving minimax Pareto-fair solutions perform on classification tasks, and how they can be adapted to other applications such as backward compatibility and federated learning. Finally, we analyze the problem of achieving minimax solutions asymptotically when optimizing models that can perfectly fit their training data, such as deep neural networks trained with stochastic gradient descent.
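A common computational recipe for minimax group fairness is alternating updates: the model takes gradient steps on a weighted sum of per-group losses while multiplicative weights shift mass toward the worst-off group. The sketch below illustrates this generic heuristic, not necessarily the dissertation's exact algorithm.

```python
import torch
import torch.nn.functional as F

def minimax_step(model, optimizer, group_batches, weights, eta=0.1):
    """One round of minimax training. `group_batches` holds one
    (inputs, labels) batch per sensitive group; `weights` is a
    probability vector over groups (no gradient tracking)."""
    losses = torch.stack([F.cross_entropy(model(xb), yb)
                          for xb, yb in group_batches])
    optimizer.zero_grad()
    (weights * losses).sum().backward()        # model minimizes weighted loss
    optimizer.step()
    with torch.no_grad():                      # adversary upweights worst group
        weights *= torch.exp(eta * losses)
        weights /= weights.sum()
    return weights, losses.detach()
```

Intuitively, such dynamics push toward solutions where the worst group's loss cannot be improved without harming an even worse-off group, matching the minimax Pareto notion the abstract describes.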
Item Open Access: Young children are more willing to accept group decisions in which they have had a voice. (J Exp Child Psychol, 2018-02) Grocke, Patricia; Rossano, Federico; Tomasello, Michael
People accept an unequal distribution of resources if they judge that the decision-making process was fair. In this study, 3- and 5-year-old children played an allocation game with two puppets. The puppets decided against a fair distribution in all conditions, but they allowed children various degrees of participation in the decision-making process. Children of both ages protested less when they were first asked to agree with the puppets' decision than when there was no agreement. When ignored, the younger children protested less than the older children (perhaps because they did not expect to have a say in the process), whereas they protested more when they were given an opportunity to voice their opinion (perhaps because their stated opinion was ignored). These results suggest that during the preschool years, children begin to expect to be asked for their opinion in a decision, and they accept disadvantageous decisions if they feel that they have had a voice in the decision-making process.

Item Open Access: Young children, but not chimpanzees, are averse to disadvantageous and advantageous inequities. (J Exp Child Psychol, 2017-03) Ulber, Julia; Hamann, Katharina; Tomasello, Michael
The age at which young children show an aversion to inequitable resource distributions, especially those favoring themselves, is unclear. It is also unclear whether great apes, as humans' nearest evolutionary relatives, have an aversion to inequitable resource distributions at all. Using a common methodology across species and child ages, the current two studies found that 3- and 4-year-old children (N = 64) not only objected when they received less than a collaborative partner but also sacrificed to equalize when they received more. They did neither of these things in a nonsocial situation, demonstrating the fundamental role of social comparison. In contrast, chimpanzees (N = 9) showed no aversion to inequitable distributions, only a concern for maximizing their own resources, with no differences between social and nonsocial conditions. These results underscore the unique importance for humans, even early in ontogeny, of treating others fairly, presumably as a way of becoming a cooperative member of one's cultural group.