System Support for Strong Accountability


Computer systems not only provide unprecedented efficiency and numerous benefits, but also offer powerful means and tools for abuse. This reality is increasingly evident as deployed software spans trust domains and enables interactions among self-interested participants with potentially conflicting goals. As systems grow more complex and interdependent, there is a growing need to localize, identify, and isolate faults and unfaithful behavior.

Conventional techniques for building secure systems, such as secure perimeters and Byzantine fault tolerance, are insufficient to ensure that trusted users and software components are indeed trustworthy. Secure perimeters do not work across trust domains and fail when a participant acts within the limits of the existing security policy and deliberately manipulates the system to her own advantage. Byzantine fault tolerance offers techniques to tolerate misbehavior, but offers no protection when replicas collude or are under the control of a single entity.

Complex interdependent systems necessitate new mechanisms that complement existing solutions to identify improper behavior and actions, limit the propagation of incorrect information, and assign responsibility when things go wrong. This thesis addresses the problems of misbehavior and abuse by offering tools and techniques to integrate accountability into computer systems. A system is accountable if it offers means to identify and expose semantic misbehavior by its participants. An accountable system can construct undeniable evidence to demonstrate its correctness---the evidence serves as explicit proof of misbehavior and can be strong enough to be used as a basis for social sanction external to the system.

Accountability offers strong disincentives for abuse and misbehavior, but may have to be ``designed in'' to an application's specific protocols, logic, and internal representation; achieving accountability with general techniques is a challenge. Extending responsibility to end users for actions performed by software components on their behalf is not trivial, as it requires the ability to determine whether a component correctly represents a user's intentions. Leaks of private information are yet another concern---even correctly functioning applications can leak sensitive information, for which their owners may be held accountable. Important infrastructure services, such as distributed virtual resource economies, raise a range of application-specific issues, such as fine-grained resource delegation, virtual currency models, and complex workflows.

This thesis addresses the aforementioned problems by designing, implementing, applying, and evaluating a generic methodology for integrating accountability into network services and applications. Our state-based approach decouples application state management from application logic to enable services to demonstrate that they maintain their state in compliance with user requests, i.e., that state changes do take place and that the service presents a consistent view to all clients and observers. Internal state managed in this way can then be fed to application-specific verifiers to determine the correctness of the service's logic and to identify the responsible party. The state-based approach provides support for strong accountability---any detected violation can be proven to a third party without depending on replication.

In addition to the generic state-based approach, this thesis explores how to leverage application-specific knowledge to integrate accountability into an example application. We study the invariants and accountability requirements of an example application---a lease-based virtual resource economy. We present the design and implementation of several key elements needed to provide accountability in the system. In particular, we describe solutions to the problems of resource delegation, currency spending, and lease protocol compliance. These solutions illustrate a technique complementary to the general-purpose state-based approach developed in the earlier parts of this thesis.

Separating the actions of software from those of its user is at the heart of the third component of this dissertation. We design, implement, and evaluate an approach to detect information leaks in a commodity operating system. Our novel OS abstraction---a doppelganger process---helps track information flow without requiring application rewriting or instrumentation. Doppelganger processes help identify sensitive data as they are about to leave the confines of the system. Users can then be alerted to the potential breach and can choose to prevent the leak, avoiding accountability for the actions of software acting on their behalf.
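The leak-detection idea can be illustrated, at a much smaller scale than the OS-level mechanism described above, with a toy taint-tracking sketch: values derived from sensitive inputs carry a taint mark, and a guard at the system boundary refuses to emit tainted data. All names here (`Tainted`, `send`, `LeakError`) are hypothetical and invented for illustration; they are not the dissertation's interfaces.

```python
class Tainted:
    """Wrap a value with a sensitivity tag; operations propagate the tag."""
    def __init__(self, value, tainted=True):
        self.value = value
        self.tainted = tainted

    def __add__(self, other):
        # Concatenating with any tainted operand yields a tainted result.
        other_value = other.value if isinstance(other, Tainted) else other
        other_taint = other.tainted if isinstance(other, Tainted) else False
        return Tainted(self.value + other_value, self.tainted or other_taint)

class LeakError(Exception):
    """Raised when sensitive data is about to leave the system."""

def send(channel, data):
    """Boundary guard: alert before tainted data escapes."""
    if isinstance(data, Tainted) and data.tainted:
        raise LeakError("sensitive data about to leave via " + channel)
    # (actual transmission would happen here)

secret = Tainted("ssn:123-45-6789")                 # marked sensitive
message = Tainted("user=", tainted=False) + secret  # taint propagates
caught = False
try:
    send("socket", message)
except LeakError:
    caught = True  # the user can now choose to block the leak
```

Unlike this language-level toy, the doppelganger approach works on unmodified binaries; the sketch only conveys the propagate-then-check-at-the-boundary structure.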





Yumerefendi, Aydan Rafet (2009). System Support for Strong Accountability. Dissertation, Duke University.


Duke's student scholarship is made available to the public under a Creative Commons Attribution-NonCommercial-NoDerivatives (CC BY-NC-ND) license.