Task Affinity and Its Applications in Machine Learning




Transfer learning has been an essential aspect of machine learning, in which knowledge from previously trained tasks is used to learn incoming tasks. Recent work on transfer learning primarily focuses on learning algorithms for the scenario of a single learned source task and a target task, where the goal is to identify the source model's functional layers and corresponding parameters to fine-tune on the target task's dataset. It is well established in the literature that similar tasks can share equivalent learning models (e.g., architecture, number of layers, parameters). As a result, selecting relevant data samples and trained models for the target task is also essential to the success of transfer learning; however, this area of research has yet to be thoroughly studied. This work focuses on task affinity, a similarity measure between tasks, and its applications in machine learning through a transfer-learning process. Based on Fisher Information matrices, the proposed task affinity is non-symmetric by definition, reflecting the fact that it is easier to transfer knowledge from a complex, comprehensive task to a simple task than vice versa. Task affinity helps determine the relevant source tasks, their corresponding datasets, and trained models for a given target task. Additionally, a meta-learning framework, whose goal is learning to learn, is introduced based on the proposed task affinity. This framework is designed for the scenario of multiple learned source tasks and a target task, where the artificially intelligent agent is assumed to have sufficiently large memory for storing learned tasks (e.g., trained models and datasets). The meta-learning framework allows this agent to identify relevant knowledge from the source tasks and quickly learn the target task without human domain knowledge. The framework is motivated by the human learning process, which starts with simple, basic tasks before tackling more advanced subjects.
For instance, when solving complicated tasks, humans often relate them to simpler tasks they have already mastered. The framework also reduces the number of data samples required from the target task and further boosts the model's performance. Overall, this dissertation presents definitions of task affinity and meta-learning frameworks for various applications in machine learning, such as neural architecture search, few-shot learning, image generation, and causal inference. Theoretical and empirical studies indicate the consistency of the task affinity measure and the efficacy of the proposed frameworks compared with state-of-the-art approaches across these applications.
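To make the central idea concrete, the following is a minimal, hypothetical sketch of a Fisher-Information-based task affinity. It is not the dissertation's actual construction: it assumes a simple logistic-regression model, a diagonal empirical Fisher estimate, and a Fréchet-style distance squashed to [0, 1); all function names and the toy data are illustrative only. The non-symmetry arises because both Fisher matrices are anchored at the *source* model's weights, so swapping the roles of source and target generally changes the score.

```python
import numpy as np

def fisher_diagonal(X, y, w):
    """Empirical diagonal Fisher Information of a logistic-regression
    model with weights w, estimated on labeled data (X, y)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
    grads = (p - y)[:, None] * X          # per-sample gradients of the NLL
    return np.mean(grads ** 2, axis=0)    # average squared gradient per weight

def task_affinity(F_anchor, F_other):
    """Frechet-style distance between two diagonal Fisher matrices,
    mapped to [0, 1). Smaller values indicate more related tasks."""
    d = np.sum((np.sqrt(F_anchor) - np.sqrt(F_other)) ** 2)
    return 1.0 - np.exp(-d)

# Toy source and target tasks sharing the same inputs (illustrative data).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_src = rng.normal(size=5)                            # stand-in for a trained source model
y_src = (X @ w_src > 0).astype(float)                 # source task labels
y_tgt = (X @ (w_src + rng.normal(size=5)) > 0).astype(float)  # perturbed target task

# Both Fisher estimates are anchored at the source model's weights; computing
# d(target -> source) would instead anchor at a target-trained model, which is
# what makes the measure non-symmetric.
F_src = fisher_diagonal(X, y_src, w_src)
F_tgt = fisher_diagonal(X, y_tgt, w_src)
print(task_affinity(F_src, F_tgt))
```

In a realistic pipeline, the affinity would be computed for every stored source task, and the sources with the smallest distance to the target would be selected for fine-tuning.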





Le, Cat Phuoc (2023). Task Affinity and Its Applications in Machine Learning. Dissertation, Duke University. Retrieved from https://hdl.handle.net/10161/27697.


Duke's student scholarship is made available to the public using a Creative Commons Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND) license.