TY - JOUR
T1 - Computing machinery and creativity
T2 - Lessons learned from the Turing test
AU - Berrar, Daniel Peter
AU - Schuster, Alfons
PY - 2014/1
Y1 - 2014/1
N2 - Purpose: The purpose of this paper is to investigate the relevance and appropriateness of Turing-style tests for computational creativity. Design/methodology/approach: The Turing test is both a milestone and a stumbling block in artificial intelligence (AI). For more than half a century, the "grand goal of passing the test" has taught the authors many lessons. Here, the authors analyze the relevance of these lessons for computational creativity. Findings: Like the burgeoning field of AI, computational creativity concerns itself with fundamental questions such as "Can machines be creative?" It is indeed possible to frame such questions as empirical, Turing-style tests. However, such tests entail a number of intricate and possibly unsolvable problems, which might easily lead the authors into old and new blind alleys. The authors propose an outline of an alternative testing procedure that is fundamentally different from Turing-style tests. This new procedure focuses on the unfolding of creativity over time and, unlike Turing-style tests, is amenable to more meaningful statistical testing. Research limitations/implications: This paper argues against Turing-style tests for computational creativity. Practical implications: This paper opens a new avenue for viable and more meaningful testing procedures. Originality/value: The novel contributions are: an analysis of seven lessons from the Turing test for computational creativity; an argument against Turing-style tests; and a proposal of a new testing procedure.
AB - Purpose: The purpose of this paper is to investigate the relevance and appropriateness of Turing-style tests for computational creativity. Design/methodology/approach: The Turing test is both a milestone and a stumbling block in artificial intelligence (AI). For more than half a century, the "grand goal of passing the test" has taught the authors many lessons. Here, the authors analyze the relevance of these lessons for computational creativity. Findings: Like the burgeoning field of AI, computational creativity concerns itself with fundamental questions such as "Can machines be creative?" It is indeed possible to frame such questions as empirical, Turing-style tests. However, such tests entail a number of intricate and possibly unsolvable problems, which might easily lead the authors into old and new blind alleys. The authors propose an outline of an alternative testing procedure that is fundamentally different from Turing-style tests. This new procedure focuses on the unfolding of creativity over time and, unlike Turing-style tests, is amenable to more meaningful statistical testing. Research limitations/implications: This paper argues against Turing-style tests for computational creativity. Practical implications: This paper opens a new avenue for viable and more meaningful testing procedures. Originality/value: The novel contributions are: an analysis of seven lessons from the Turing test for computational creativity; an argument against Turing-style tests; and a proposal of a new testing procedure.
KW - Artificial intelligence
KW - Creativity
KW - Turing test
UR - http://www.scopus.com/inward/record.url?scp=84896303003&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84896303003&partnerID=8YFLogxK
U2 - 10.1108/K-08-2013-0175
DO - 10.1108/K-08-2013-0175
M3 - Article
AN - SCOPUS:84896303003
SN - 0368-492X
VL - 43
SP - 82
EP - 91
JO - Kybernetes
JF - Kybernetes
IS - 1
ER -