Abstract
In Ref. 5) we proposed a new, message-oriented implementation technique for Moded Flat GHC that compiles unification for data transfer into message passing. The technique is based on constraint-based program analysis and significantly improves the performance of programs that use goals and streams to implement reconfigurable data structures. In this paper we discuss how the technique can be parallelized. We focus on a method for shared-memory multiprocessors, called the shared-goal method, though a different method could be used for distributed-memory multiprocessors. Unlike in other parallel implementations of concurrent logic languages, which we call process-oriented, the unit of parallel execution is not an individual goal but a chain of message sends triggered successively by an initial send. Parallelism comes from the existence of different chains of message sends that can be executed independently or in a pipelined manner. Mutual exclusion based on busy waiting and on message buffering controls access to individual, shared goals. Typical goals allow last-send optimization, the message-oriented counterpart of last-call optimization. We have built an experimental implementation on a Sequent Symmetry. Despite the simple scheduling currently adopted, preliminary evaluation shows good parallel speedup and good absolute performance for concurrent operations on binary process trees.
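To make the shared-goal idea concrete, the following is a minimal sketch (not the paper's implementation, and in Python rather than GHC) of a binary process tree whose nodes are passive goals driven by messages. A send to the root starts a chain of sends down one path; each goal is protected by busy-wait mutual exclusion, and the chain leaves a goal's critical section before sending onward, so chains into different subtrees can proceed independently or in a pipelined manner. All names here (`TreeGoal`, `send`) are illustrative assumptions.

```python
import threading

class TreeGoal:
    """One node of a binary process tree, modeled as a shared goal
    that reacts to 'insert' and 'search' messages. Illustrative
    sketch only; names and structure are not from the paper."""

    def __init__(self, key, value=None):
        self.key, self.value = key, value
        self.left = self.right = None
        self._lock = threading.Lock()  # guards this goal alone

    def send(self, msg, key, value=None):
        # Busy-wait (spin) until this goal is free, mimicking
        # mutual exclusion based on busy waiting.
        while not self._lock.acquire(blocking=False):
            pass
        try:
            if key == self.key:
                if msg == "insert":
                    self.value = value
                    return None
                return self.value          # msg == "search"
            side = "left" if key < self.key else "right"
            nxt = getattr(self, side)
            if nxt is None:
                if msg == "insert":
                    setattr(self, side, TreeGoal(key, value))
                return None                # search miss
        finally:
            self._lock.release()
        # Continue the message chain outside the critical section:
        # a later chain may now enter this goal while the current
        # one proceeds down the subtree (pipelining).
        return nxt.send(msg, key, value)
```

A chain such as `root.send("search", k)` touches only the goals on one root-to-leaf path, which is why distinct chains rarely contend. (The tail position of the final `nxt.send` loosely mirrors where last-send optimization would apply.)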
Original language | English |
---|---|
Pages (from-to) | 323-341 |
Number of pages | 19 |
Journal | New Generation Computing |
Volume | 11 |
Issue number | 3-4 |
DOIs | |
Publication status | Published - 1993 Sept 1 |
Keywords
- Concurrent Logic Programming
- GHC
- Moded Flat GHC
- Mutual Exclusion
- Optimization
- Parallelism
- Shared-Memory Multiprocessors
ASJC Scopus subject areas
- Software
- Theoretical Computer Science
- Hardware and Architecture
- Computer Networks and Communications