Sciweavers

34 search results - page 6 / 7
» Reducing Replication Overhead for Data Durability in DHT Bas...
NSDI 2004
Designing a DHT for Low Latency and High Throughput
Designing a wide-area distributed hash table (DHT) that provides high-throughput and low-latency network storage is a challenge. Existing systems have explored a range of solution...
Frank Dabek, Jinyang Li, Emil Sit, James Robertson...
SRDS 2005 (IEEE)
Agile Store: Experience with Quorum-Based Data Replication Techniques for Adaptive Byzantine Fault Tolerance
Quorum protocols offer several benefits when used to maintain replicated data, but techniques for reducing the overheads associated with them have not been explored in detail. It is d...
Lei Kong, Deepak J. Manohar, Mustaque Ahamad, Arun...
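
The entry above concerns quorum-based replication for adaptive Byzantine fault tolerance. As background only, the following Python sketch (with illustrative names that are not from Agile Store, and without any Byzantine-tolerance machinery) shows the basic quorum idea the overhead discussion builds on: with N replicas, any write quorum of size W and any read quorum of size R intersect whenever R + W > N, so a read always contacts at least one replica holding the latest completed write.

import random

class QuorumStore:
    def __init__(self, n=5, w=3, r=3):
        assert r + w > n, "read and write quorums must intersect"
        self.n, self.w, self.r = n, w, r
        # each replica maps key -> (version, value)
        self.replicas = [dict() for _ in range(n)]

    def write(self, key, value):
        # contact a write quorum, learn the highest version it has seen,
        # then install the value under the next version on those replicas
        quorum = random.sample(self.replicas, self.w)
        version = max(rep.get(key, (0, None))[0] for rep in quorum) + 1
        for rep in quorum:
            rep[key] = (version, value)

    def read(self, key):
        # query a read quorum and return the value with the highest version
        quorum = random.sample(self.replicas, self.r)
        answers = [rep.get(key, (0, None)) for rep in quorum]
        return max(answers, key=lambda pair: pair[0])[1]

store = QuorumStore()
store.write("x", "hello")
print(store.read("x"))  # always "hello": the read quorum overlaps the write quorum
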
SIGOPSE 2004 (ACM)
Scalable strong consistency for web applications
Web application workloads are often characterized by a large number of unique read requests and a significant fraction of write requests. Hosting these applications drives the ne...
Swaminathan Sivasubramanian, Guillaume Pierre, Maa...
ICDCS 2009 (IEEE)
On Optimal Concurrency Control for Optimistic Replication
Concurrency control is a core component in optimistic replication systems. To detect concurrent updates, the system associates each replicated object with metadata such as versi...
Weihan Wang, Cristiana Amza
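
The per-object metadata the abstract alludes to is typically a version vector (or similar versioning scheme). The minimal Python sketch below, with hypothetical function names and not the paper's optimal concurrency-control scheme, shows how comparing two version vectors classifies a pair of updates as ordered or concurrent.

def dominates(a, b):
    """True if version vector `a` reflects every update recorded in `b`."""
    return all(a.get(site, 0) >= count for site, count in b.items())

def compare(a, b):
    """Classify two version vectors: identical, one after the other, or concurrent."""
    if dominates(a, b) and dominates(b, a):
        return "equal"
    if dominates(a, b):
        return "a-after-b"
    if dominates(b, a):
        return "b-after-a"
    return "concurrent"

# replicas A and B both update the object after seeing A's first update
v_a = {"A": 2}            # A applied its own second update
v_b = {"A": 1, "B": 1}    # B applied one local update on top of A's first
print(compare(v_a, v_b))  # -> "concurrent": the two updates conflict

When the comparison reports "concurrent", the replication system must reconcile the conflicting updates (for example by merging them or asking the application to resolve the conflict).
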
SC 2004 (ACM)
A Peer-to-Peer Replica Location Service Based on a Distributed Hash Table
A Replica Location Service (RLS) allows registration and discovery of data replicas. In earlier work, we proposed an RLS framework and described the performance and scalability of...
Min Cai, Ann L. Chervenak, Martin R. Frank
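
As a rough illustration of the idea in this abstract, the sketch below maps a logical file name to its set of physical replica locations through a DHT-style interface. The single in-memory dict stands in for a real distributed hash table, and all class names, method names, and URLs are hypothetical rather than the authors' API.

class ReplicaLocationService:
    def __init__(self):
        self.dht = {}  # logical file name -> set of physical replica URLs

    def register(self, logical_name, replica_url):
        # append-style DHT put: record one more physical location for the file
        self.dht.setdefault(logical_name, set()).add(replica_url)

    def unregister(self, logical_name, replica_url):
        self.dht.get(logical_name, set()).discard(replica_url)

    def lookup(self, logical_name):
        # DHT get: return every known physical replica of the logical file
        return sorted(self.dht.get(logical_name, set()))

rls = ReplicaLocationService()
rls.register("lfn://climate/run42.dat", "gsiftp://siteA.example.org/data/run42.dat")
rls.register("lfn://climate/run42.dat", "gsiftp://siteB.example.org/data/run42.dat")
print(rls.lookup("lfn://climate/run42.dat"))
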