ICLP 2011, Springer

Minimizing the overheads of dependent AND-parallelism

Parallel implementations of programming languages need to control synchronization overheads. Synchronization is essential for ensuring the correctness of parallel code, yet it adds overheads that aren’t present in sequential programs. This is an important problem for parallel logic programming systems, because almost every action in such programs requires accessing variables, and the traditional approach of adding synchronization code to all such accesses is so prohibitively expensive that a parallel version of the program may run more slowly on four processors than a sequential version would run on one processor. We present a program transformation for implementing dependent AND-parallelism in logic programming languages that uses mode information to add synchronization code only to the variable accesses that actually need it.

1998 ACM Subject Classification D.3.3 Concurrent Programming Structures, D.3.4 Compilers
Keywords and phrases synchronization, program transformation
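To make the abstract's idea concrete, here is a minimal illustrative sketch of future-style synchronization between two parallel conjuncts. It is not the paper's actual transformation, and it is written in Go rather than a logic programming language; the future type and its signal/wait operations are hypothetical names chosen for the example. A variable shared between conjuncts gets an explicit single-assignment cell with synchronized produce/consume operations, while a variable known to be local to one conjunct is left with no synchronization code at all.

```go
package main

import (
	"fmt"
	"sync"
)

// future is a hypothetical single-assignment cell standing in for the
// synchronization code the transformation would insert around a shared variable.
type future struct {
	once  sync.Once
	ready chan struct{}
	value int
}

func newFuture() *future {
	return &future{ready: make(chan struct{})}
}

// signal publishes the value exactly once (the producer side).
func (f *future) signal(v int) {
	f.once.Do(func() {
		f.value = v
		close(f.ready)
	})
}

// wait blocks until the value has been published (the consumer side).
func (f *future) wait() int {
	<-f.ready
	return f.value
}

func main() {
	// Parallel conjunction p(X), q(X, Y): p produces X, q consumes it.
	// Only X is shared, so only X gets a future; q's purely local work
	// on Y needs no synchronization.
	x := newFuture()
	var wg sync.WaitGroup
	wg.Add(2)

	go func() { // conjunct p(X)
		defer wg.Done()
		x.signal(21) // bind X and wake any waiting consumers
	}()

	go func() { // conjunct q(X, Y)
		defer wg.Done()
		y := x.wait() * 2 // block only at the access that needs X's value
		fmt.Println("Y =", y)
	}()

	wg.Wait()
}
```

The point the abstract makes is that such signal/wait pairs are worth inserting only around the variable accesses that mode information shows actually need them; every other access stays as cheap as in the sequential program.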
Peter Wang, Zoltan Somogyi
Added 29 Aug 2011
Updated 29 Aug 2011
Type Journal
Year 2011
Where ICLP
Authors Peter Wang, Zoltan Somogyi