dc.description.abstract |
This dissertation presents a basic information-driven sensor management problem, referred
to as the treasure hunt, that is relevant to mobile-sensor applications such as mine hunting,
monitoring, and surveillance. The objective is to classify or infer one or more fixed
targets, or treasures, located in an obstacle-populated workspace by planning the path
of a robotic sensor installed on a mobile platform and the sequence of measurements it
takes of the treasures distributed in the sensor workspace. The workspace is
represented by a connectivity graph, where each node represents a possible sensor
deployment, and the arcs represent possible sensor movements. A methodology is developed
for planning the sensing strategy of the deployed robotic sensor. The sensing strategy
includes the robotic sensor's path, because it determines which targets are measurable
given a bounded field of view. Existing path-planning techniques are not directly
applicable to robots whose primary objective is to gather sensor measurements. Thus,
in this dissertation, a novel approximate cell-decomposition approach is developed
in which obstacles, targets, the sensor's platform and field of view are represented
as closed and bounded subsets of a Euclidean workspace. The approach constructs a
connectivity graph of observation cells, which is pruned and transformed into a decision
tree, from which an optimal sensing strategy can be computed. It is shown that an
additive incremental-entropy function can be used to efficiently compute the expected
information value of the measurement sequence over time.
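As an illustrative sketch (an assumed generic form, not necessarily the dissertation's exact formulation), such an additive measure can express the expected information value of a measurement sequence $M_1,\ldots,M_T$ about the target state $\theta$ as a sum of expected one-step entropy reductions, $\hat{V} = \sum_{k=1}^{T} \mathbb{E}\big[ H(\theta \mid M_1,\ldots,M_{k-1}) - H(\theta \mid M_1,\ldots,M_k) \big]$, where $H(\cdot)$ denotes Shannon entropy and the expectation is taken over measurement outcomes.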
The methodology is applied to a robotic landmine classification problem and the board
game of CLUE$^{\circledR}$. In the landmine detection application, the optimal strategy
of a robotic ground-penetrating radar is computed based on prior remote measurements
and environmental information. Extensive numerical experiments show that this methodology
outperforms shortest-path, complete-coverage, random, and grid-search strategies,
and is applicable to non-overpass-capable platforms that must avoid targets as well
as obstacles. The board game of CLUE$^{\circledR}$ is shown to be an excellent benchmark
example of the treasure hunt problem. The test results show that a player implementing
the strategies developed in this dissertation outperforms players implementing Bayesian
networks only, Q-learning, or constraint satisfaction, as well as human players.