Scalable Control Synthesis for Multi-Robot Systems under Temporal Logic Specifications


The study of high-level complex robotic tasks captured by temporal logics, e.g., Linear Temporal Logic (LTL), has gained significant research interest in the last decade; such specifications extend traditional point-to-point navigation by incorporating temporal goals. This dissertation proposes and evaluates scalable path planning and control synthesis methods for robotic systems under temporal logic specifications, where scalability is measured by the number of robots, the size of the environment, and the complexity of the temporal logic specifications.

First, we consider optimal control synthesis for tasks specified by temporal logic. Given the same discrete workspace, rather than solving each new formula from scratch, we propose a method that exploits experience gained from solving similar LTL tasks in the past. The key idea is to appropriately decompose complex LTL tasks into simpler subtasks and to define sets of skills, or plans, needed to solve these subtasks. These skills are stored in a library of reusable skills and can be used to quickly synthesize plans for tasks that have not been encountered before, while the library is expanded with new skills. We present numerical experiments showing that our approach generally outperforms existing methods in terms of the time needed to generate feasible plans. We also show that the proposed algorithm is probabilistically complete and asymptotically optimal.
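The library-reuse idea above can be sketched as a cache keyed by subtask: plans for unseen subtasks are computed and stored, and later tasks that share subtasks reuse the stored skills. This is a minimal illustrative sketch; the subtask labels, the placeholder planner, and the dict-based library are assumptions, not the dissertation's actual decomposition or planner.

```python
def plan_from_scratch(subtask):
    # Placeholder planner: in practice this would search the product of the
    # workspace abstraction and the automaton capturing the subtask.
    return ["s0", subtask, "goal"]

class SkillLibrary:
    def __init__(self):
        self.skills = {}  # subtask -> stored plan (a "skill")

    def synthesize(self, subtasks):
        """Assemble a plan for a task decomposed into subtasks,
        reusing stored skills and caching newly computed ones."""
        plan = []
        for sub in subtasks:
            if sub not in self.skills:                      # unseen subtask:
                self.skills[sub] = plan_from_scratch(sub)   # plan it and store the skill
            plan.extend(self.skills[sub])                   # reuse the stored skill
        return plan

lib = SkillLibrary()
first = lib.synthesize(["visit_A", "visit_B"])   # both subtasks planned from scratch
second = lib.synthesize(["visit_B", "visit_C"])  # "visit_B" is reused from the library
```

Only the never-seen subtasks trigger fresh planning on the second call, which is the source of the reported speedup over planning each formula from scratch.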

Next, we consider the problem of optimally allocating tasks, expressed as global LTL specifications, to teams of heterogeneous mobile robots. The robots are classified into different types that capture their capabilities for accomplishing tasks, and each task may require robots of multiple types. The specific robots assigned to each task are immaterial, as long as they are of the required type. Given a discrete workspace, our goal is to design paths, i.e., sequences of discrete states, for the robots so that the LTL specification is satisfied. To obtain a scalable solution to this complex assignment problem, we propose a hierarchical approach that first allocates specific robots to tasks using the task information provided by the Nondeterministic Büchi Automaton (NBA) capturing the LTL specification, and then designs low-level executable plans for the robots that respect the high-level assignment. We provide theoretical results showing the completeness and soundness of the proposed method, and present numerical simulations demonstrating that it generates robot paths with lower cost, considerably faster than existing methods.
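The type-based assignment constraint described above can be sketched as follows: each task demands a count of robots per type, and any robot of the right type may serve. The greedy pool-based strategy and all names here are illustrative assumptions, not the dissertation's hierarchical algorithm, which additionally uses the NBA to order and couple tasks.

```python
def allocate(robots, tasks):
    """robots: dict robot_id -> type; tasks: list of (task_name, {type: count}).
    Returns dict task_name -> list of assigned robot ids, or None if infeasible."""
    free = {}  # type -> pool of unassigned robot ids
    for rid, rtype in robots.items():
        free.setdefault(rtype, []).append(rid)
    assignment = {}
    for task, demand in tasks:
        chosen = []
        for rtype, count in demand.items():
            pool = free.get(rtype, [])
            if len(pool) < count:
                return None  # not enough unassigned robots of this type
            chosen += [pool.pop() for _ in range(count)]  # specific robots are immaterial
        assignment[task] = chosen
    return assignment

robots = {"r1": "uav", "r2": "uav", "r3": "ugv"}
tasks = [("survey", {"uav": 2}), ("deliver", {"ugv": 1})]
result = allocate(robots, tasks)
```

Because only the type matters, any two UAVs are interchangeable for the survey task; this interchangeability is what the hierarchical method exploits to keep the assignment problem tractable.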

The majority of existing LTL planning methods rely on the construction of a discrete product automaton that combines a discrete abstraction of robot mobility with the NBA corresponding to the LTL specification. However, constructing expressive discrete abstractions makes the synthesis problem computationally intractable. To address this, we propose a new sampling-based LTL planning algorithm that does not require any discrete abstraction of robot mobility. Instead, it incrementally builds trees that explore the product state space until a maximum number of iterations is reached or a feasible plan is found. To accelerate the construction of feasible plans, we bias the sampling process toward transitions in the Büchi automaton that lie on the shortest path to the accepting states. We show that our planning algorithm, with and without bias, is probabilistically complete and asymptotically optimal. Finally, we present numerical experiments showing that our method outperforms relevant temporal logic planning methods.
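The sampling bias described above can be illustrated on a toy Büchi automaton: with some probability, the automaton component of the next sample is drawn from successor states that are strictly closer to acceptance (per distances precomputed on the automaton graph), and otherwise it is drawn uniformly. The tiny automaton, the bias probability, and all names are assumptions for illustration, not the dissertation's actual tree construction.

```python
import random

# Toy Büchi automaton: successor relation and each state's graph distance to
# the accepting state (the distances would be computed offline, e.g. by BFS).
succ = {"q0": ["q0", "q1"], "q1": ["q1", "q_acc"], "q_acc": ["q_acc"]}
dist_to_accept = {"q0": 2, "q1": 1, "q_acc": 0}

def sample_buchi_state(current_q, p_bias=0.8):
    """Pick the automaton component of the next sample: with probability
    p_bias, follow a successor that lies on a shortest path to acceptance;
    otherwise fall back to an unbiased uniform sample."""
    if random.random() < p_bias:
        closer = [q for q in succ[current_q]
                  if dist_to_accept[q] < dist_to_accept[current_q]]
        if closer:
            return random.choice(closer)  # step along a shortest path to acceptance
    return random.choice(list(dist_to_accept))  # unbiased fallback keeps completeness
```

Keeping the unbiased fallback is what preserves probabilistic completeness: every region of the product space retains nonzero sampling probability, while the biased branch steers the tree toward accepting runs.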





Luo, Xusheng (2020). Scalable Control Synthesis for Multi-Robot Systems under Temporal Logic Specifications. Dissertation, Duke University. Retrieved from


Duke's student scholarship is made available to the public using a Creative Commons Attribution / Non-commercial / No derivative (CC-BY-NC-ND) license.