TY - GEN
T1 - A policy-aware parallel execution control mechanism for language application
AU - Trang, Mai Xuan
AU - Murakami, Yohei
AU - Ishida, Toru
N1 - Funding Information:
This research was partly supported by a Grant-in-Aid for Scientific Research (S) (24220002, 2012–2016) from Japan Society for Promotion of Science (JSPS).
Publisher Copyright:
© Springer International Publishing Switzerland 2016.
PY - 2016
Y1 - 2016
N2 - Many language resources are shared as web services to process data on the internet. As data sets keep growing, language services face big data problems: very large data sets, such as huge collections of multilingual text, place challenging demands on storage and processing. Handling data volumes like this requires parallel computing architectures. Parallel execution is one way to improve the performance of language services that process huge amounts of data: the large data set is partitioned, and multiple processes of the language service are executed concurrently. However, because computing resources are limited, service providers employ policies that limit the number of concurrent processes their services will serve. In an advanced language application, several language services, provided by different providers with different policies, are combined into a composite service to handle complex tasks. If parallel execution is used to make a language application more efficient, the parallel configuration must be optimized against the language service policies of all participating providers. We propose a model that considers the policies of the atomic language services when predicting composite service performance. Based on this model, we design a mechanism that adapts the parallel execution settings of a composite service to the atomic services’ policies in order to attain optimal performance for the language application.
AB - Many language resources are shared as web services to process data on the internet. As data sets keep growing, language services face big data problems: very large data sets, such as huge collections of multilingual text, place challenging demands on storage and processing. Handling data volumes like this requires parallel computing architectures. Parallel execution is one way to improve the performance of language services that process huge amounts of data: the large data set is partitioned, and multiple processes of the language service are executed concurrently. However, because computing resources are limited, service providers employ policies that limit the number of concurrent processes their services will serve. In an advanced language application, several language services, provided by different providers with different policies, are combined into a composite service to handle complex tasks. If parallel execution is used to make a language application more efficient, the parallel configuration must be optimized against the language service policies of all participating providers. We propose a model that considers the policies of the atomic language services when predicting composite service performance. Based on this model, we design a mechanism that adapts the parallel execution settings of a composite service to the atomic services’ policies in order to attain optimal performance for the language application.
KW - Adaptation mechanism
KW - Big data
KW - Language service composition
KW - Parallel execution
UR - http://www.scopus.com/inward/record.url?scp=84961692502&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84961692502&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-31468-6_5
DO - 10.1007/978-3-319-31468-6_5
M3 - Conference contribution
AN - SCOPUS:84961692502
SN - 9783319314679
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 71
EP - 85
BT - Worldwide Language Service Infrastructure - 2nd International Workshop, WLSI 2015, Revised Selected Papers
A2 - Lin, Donghui
A2 - Murakami, Yohei
PB - Springer Verlag
T2 - 2nd International Workshop on Worldwide Language Service Infrastructure, WLSI 2015
Y2 - 22 January 2015 through 23 January 2015
ER -