Sciweavers

Search query: Reliability Design for Large Scale Data Warehouses
745 search results, page 18 of 149
HPDC 2000 (IEEE)
Creating Large Scale Database Servers
The BaBar experiment at the Stanford Linear Accelerator Center (SLAC) is designed to perform a high-precision investigation of the decays of the B-meson produced from electron-positron ...
Jacek Becla, Andrew Hanushevsky
NIPS 2007
Random Features for Large-Scale Kernel Machines
To accelerate the training of kernel machines, we propose to map the input data to a randomized low-dimensional feature space and then apply existing fast linear methods. The feat...
Ali Rahimi, Benjamin Recht
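
The approach sketched in the abstract above is usually realized as random Fourier features: draw random frequencies from the Fourier transform of a shift-invariant kernel so that inner products of the mapped data approximate the kernel. The sketch below assumes an RBF kernel and uses NumPy; the function name and parameter values are illustrative, not the paper's reference code.

```python
import numpy as np

def random_fourier_features(X, n_features=500, gamma=1.0, seed=0):
    """Map X (n_samples, n_dims) to a randomized feature space whose inner
    products approximate the RBF kernel exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    n_dims = X.shape[1]
    # Frequencies drawn from the Fourier transform of the RBF kernel
    # (a Gaussian with per-dimension variance 2*gamma), plus random phases.
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(n_dims, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Usage: transform the data once, then train any fast linear model on Z,
# e.g. a linear SVM or logistic regression, in place of a kernel machine.
X = np.random.randn(1000, 20)
Z = random_fourier_features(X, n_features=500, gamma=0.5)
```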
SIGCOMM 2010 (ACM)
Detecting the performance impact of upgrades in large operational networks
Networks continue to change to support new applications, improve reliability and performance, and reduce operational cost. These changes are made to the network in the form of upgrades ...
Ajay Anil Mahimkar, Han Hee Song, Zihui Ge, Aman S...
SIGCOMM 1998 (ACM)
A Digital Fountain Approach to Reliable Distribution of Bulk Data
The proliferation of applications that must reliably distribute bulk data to a large number of autonomous clients motivates the design of new multicast and broadcast protocols. ...
John W. Byers, Michael Luby, Michael Mitzenmacher,...
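
As context for the abstract above, the general fountain-code idea is to emit an effectively unlimited stream of encoded blocks, each the XOR of a random subset of the k source blocks, so that a receiver can reconstruct the data from any set of slightly more than k blocks, regardless of which ones it happened to receive. The sketch below is an LT-style encoder with a toy degree distribution, offered only as an illustration; the paper itself builds on Tornado codes, and all names here are hypothetical.

```python
import os
import random

def lt_encode_symbol(source_blocks, degree_dist, rng):
    """Produce one encoded block: the XOR of a randomly chosen subset of
    source blocks, plus the chosen indices needed for peeling-style decoding."""
    k = len(source_blocks)
    degree = rng.choices(range(1, k + 1), weights=degree_dist[:k])[0]
    indices = rng.sample(range(k), degree)
    block = bytearray(source_blocks[indices[0]])
    for i in indices[1:]:
        for j, byte in enumerate(source_blocks[i]):
            block[j] ^= byte
    return indices, bytes(block)

# Illustrative use: a sender can emit encoded blocks indefinitely; a receiver
# that collects slightly more than k distinct blocks can decode the file.
rng = random.Random(0)
source_blocks = [os.urandom(1024) for _ in range(64)]   # k = 64 source blocks
degree_dist = [1.0 / d for d in range(1, 65)]           # toy distribution only
encoded = [lt_encode_symbol(source_blocks, degree_dist, rng) for _ in range(80)]
```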
NIPS 2003
Large Scale Online Learning
We consider situations where training data is abundant and computing resources are comparatively scarce. We argue that suitably designed online learning algorithms asymptotically ...
Léon Bottou, Yann LeCun
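
The argument above concerns single-pass stochastic (online) learning, whose cost grows with the number of examples processed rather than with the size of a stored training set. Below is a minimal single-pass SGD sketch for linear least squares; the function name, learning rate, and synthetic data stream are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def online_sgd(stream, n_dims, lr=0.01):
    """Single-pass stochastic gradient descent for linear least squares:
    each (x, y) example is used once to update w, so the learner never
    needs to hold the dataset in memory."""
    w = np.zeros(n_dims)
    for x, y in stream:
        error = x @ w - y
        w -= lr * error * x   # gradient of 0.5 * (x.w - y)^2 w.r.t. w
    return w

# Usage on a synthetic stream of 100,000 examples generated on the fly.
rng = np.random.default_rng(0)
true_w = rng.normal(size=10)
stream = ((x, x @ true_w + 0.1 * rng.normal())
          for x in (rng.normal(size=10) for _ in range(100000)))
w_hat = online_sgd(stream, n_dims=10)
```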