Show simple item record

dc.contributor.advisor Lebeck, Alvin R. en_US
dc.contributor.author Agrawal, Sandeep R. en_US
dc.date.accessioned 2015-05-12T20:46:46Z
dc.date.available 2015-05-12T20:46:46Z
dc.date.issued 2015 en_US
dc.identifier.uri http://hdl.handle.net/10161/9957
dc.description Dissertation en_US
dc.description.abstract Increasing web traffic demands an increase in server throughput while preserving energy efficiency and total cost of ownership. Present work on optimizing data center efficiency focuses primarily on general-purpose processors; however, these might not be the most efficient platforms for server workloads. Data parallel hardware achieves high energy efficiency by amortizing instruction costs across multiple data streams, and high throughput by enabling massive parallelism across independent threads. These benefits are traditionally considered applicable to scientific workloads, and common server tasks such as page serving or search are considered unsuitable for a data parallel execution model.

Our work builds on the observation that server workload execution patterns are not completely unique across multiple requests. For a high enough arrival rate, a server has the opportunity to launch cohorts of similar requests on data parallel hardware, improving server performance and power/energy efficiency. We present Rhythm, a framework for high throughput servers that exploits similarity across requests to improve server performance and power/energy efficiency by launching data parallel executions for request cohorts. An implementation of the SPECweb Banking workload using Rhythm on NVIDIA GPUs provides a basis for evaluation.

Similarity search is another ubiquitous server workload that involves identifying the nearest neighbors to a given query across a large number of points. We explore the performance, power, and dollar benefits of using accelerators to perform similarity search for query cohorts in very high dimensions under tight deadlines, and demonstrate an implementation on GPUs that searches across a corpus of billions of documents and is significantly cheaper than commercial deployments. We show that with software and system modifications, data parallel designs can greatly outperform common task parallel implementations. en_US
dc.subject Computer science en_US
dc.subject Computer engineering en_US
dc.subject Datacenters en_US
dc.subject Energy efficiency en_US
dc.subject GPU en_US
dc.subject Server workloads en_US
dc.subject SIMD execution en_US
dc.subject Text search en_US
dc.title Harnessing Data Parallel Hardware for Server Workloads en_US
dc.type Dissertation en_US
dc.department Computer Science en_US
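
The similarity search workload described in the abstract, finding the nearest neighbor to each query in a cohort across a large corpus, maps directly onto data parallel hardware. The CUDA sketch below is illustrative only and is not the dissertation's Rhythm or search code; the kernel name cohort_knn1, the 128-dimension row-major corpus layout, and the block/thread sizes are assumptions. It serves an entire query cohort with a single kernel launch: one thread block scans the corpus per query, threads score documents by dot product (cosine similarity if the vectors are L2-normalized), and a block-wide reduction keeps the best match.

// Hypothetical sketch, not the dissertation's implementation: brute-force
// top-1 similarity search for a cohort of queries over a dense corpus.
// One block per query; threads stride across documents, score them by dot
// product, and a block-wide reduction keeps the best-scoring document.

#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

constexpr int DIM = 128;       // assumed embedding dimensionality
constexpr int THREADS = 256;   // threads per block (one block per query)

__global__ void cohort_knn1(const float* __restrict__ corpus,   // [numDocs][DIM], row-major
                            const float* __restrict__ queries,  // [numQueries][DIM]
                            int numDocs,
                            int* bestDoc,                        // [numQueries]
                            float* bestScore)                    // [numQueries]
{
    __shared__ float q[DIM];
    __shared__ float sScore[THREADS];
    __shared__ int   sIdx[THREADS];

    const int query = blockIdx.x;
    // Stage this block's query in shared memory so every thread reuses it.
    for (int i = threadIdx.x; i < DIM; i += blockDim.x)
        q[i] = queries[query * DIM + i];
    __syncthreads();

    float myBest = -1e30f;
    int   myIdx  = -1;
    // Each thread strides over the corpus, scoring documents against the query.
    for (int d = threadIdx.x; d < numDocs; d += blockDim.x) {
        float score = 0.f;
        for (int i = 0; i < DIM; ++i)
            score += corpus[d * DIM + i] * q[i];
        if (score > myBest) { myBest = score; myIdx = d; }
    }
    sScore[threadIdx.x] = myBest;
    sIdx[threadIdx.x]   = myIdx;
    __syncthreads();

    // Tree reduction to the block-wide best score.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride && sScore[threadIdx.x + stride] > sScore[threadIdx.x]) {
            sScore[threadIdx.x] = sScore[threadIdx.x + stride];
            sIdx[threadIdx.x]   = sIdx[threadIdx.x + stride];
        }
        __syncthreads();
    }
    if (threadIdx.x == 0) {
        bestDoc[query]   = sIdx[0];
        bestScore[query] = sScore[0];
    }
}

int main() {
    const int numDocs = 1 << 16, numQueries = 32;   // small illustrative sizes
    std::vector<float> hCorpus(size_t(numDocs) * DIM, 0.01f);
    std::vector<float> hQueries(size_t(numQueries) * DIM, 0.02f);

    float *dCorpus, *dQueries, *dScore;
    int *dDoc;
    cudaMalloc(&dCorpus,  hCorpus.size()  * sizeof(float));
    cudaMalloc(&dQueries, hQueries.size() * sizeof(float));
    cudaMalloc(&dDoc,     numQueries * sizeof(int));
    cudaMalloc(&dScore,   numQueries * sizeof(float));
    cudaMemcpy(dCorpus,  hCorpus.data(),  hCorpus.size()  * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dQueries, hQueries.data(), hQueries.size() * sizeof(float), cudaMemcpyHostToDevice);

    // One kernel launch serves the whole query cohort: one block per query.
    cohort_knn1<<<numQueries, THREADS>>>(dCorpus, dQueries, numDocs, dDoc, dScore);
    cudaDeviceSynchronize();

    int best; float score;
    cudaMemcpy(&best,  dDoc,   sizeof(int),   cudaMemcpyDeviceToHost);
    cudaMemcpy(&score, dScore, sizeof(float), cudaMemcpyDeviceToHost);
    printf("query 0 -> best doc %d (score %.3f)\n", best, score);

    cudaFree(dCorpus); cudaFree(dQueries); cudaFree(dDoc); cudaFree(dScore);
    return 0;
}

Serving the cohort with one data parallel launch, rather than one task per request, is the design point the abstract argues for: instruction fetch and control costs are amortized across the cohort while the GPU streams the shared corpus once per block.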
