Exploiting locality to ameliorate packet queue contention and serialization
Packet processing systems maintain high throughput despite relatively high memory latencies by exploiting the coarse-grained parallelism available between packets. In particular, multiple processors are used to overlap the processing of multiple packets. Packet queuing, the fundamental mechanism enabling packet scheduling, differentiated services, and traffic isolation, requires a read-modify-write operation on a linked-list data structure to enqueue and dequeue packets; this operation represents a potential serializing bottleneck. If all packets awaiting service are destined for different queues, these read-modify-write cycles can proceed in parallel. However, if all or many of the incoming packets are destined for the same queue, or for a small number of queues, then system throughput will be serialized by these sequential external memory operations. For this reason, low-latency SRAMs are used to implement the queue data structures. This reduces the absolute cost of serialization ...
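
To make the contention concrete, the following is a minimal C sketch (not taken from the paper; the names pkt_queue_t, enqueue, and dequeue are illustrative) of a per-queue linked-list packet queue whose enqueue and dequeue are read-modify-write updates of the queue's head/tail pointers. Packets destined for different queues can update their queues in parallel, while packets destined for the same queue serialize on that queue's critical section, which mirrors the bottleneck the abstract describes.

/* Minimal sketch, not the paper's implementation: one lock per queue models
 * the per-queue serialization point of the read-modify-write update. */
#include <pthread.h>
#include <stddef.h>

typedef struct packet {
    struct packet *next;
    /* packet descriptor / buffer handle would go here */
} packet_t;

typedef struct {
    packet_t *head, *tail;   /* linked-list queue state */
    pthread_mutex_t lock;    /* serializes same-queue enqueues/dequeues */
} pkt_queue_t;

void queue_init(pkt_queue_t *q) {
    q->head = q->tail = NULL;
    pthread_mutex_init(&q->lock, NULL);
}

/* Enqueue: read the tail pointer, modify its next link, write the new tail. */
void enqueue(pkt_queue_t *q, packet_t *p) {
    p->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->tail)
        q->tail->next = p;
    else
        q->head = p;
    q->tail = p;
    pthread_mutex_unlock(&q->lock);
}

/* Dequeue: read the head pointer, write the updated head (and possibly tail). */
packet_t *dequeue(pkt_queue_t *q) {
    pthread_mutex_lock(&q->lock);
    packet_t *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head)
            q->tail = NULL;
    }
    pthread_mutex_unlock(&q->lock);
    return p;
}

In a network processor the critical section corresponds to the queue descriptor held in external memory, so the serialization cost scales with memory latency rather than lock overhead; this is why the abstract notes that implementing the queue structures in low-latency SRAM reduces the absolute cost of serialization.
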
Type Conference
Year 2006
Where CF
Publisher ACM
Authors Sailesh Kumar, John Maschmeyer, Patrick Crowley