Even after more than a decade of crowdsourcing research, there is no standard framework for low-cost quality assurance in crowdsourced data annotation. This paper proposes an unsupervised learning method for dynamic microtask posting that allows each microtask to adjust its own number of collected responses based on the difficulty of the data. Since crowdsourced data labels are likely to contain errors, researchers often employ majority voting, which aggregates responses from multiple workers to calculate a final label. This technique, however, involves a trade-off between label accuracy and cost. This paper presents a dynamic microtask posting model that reduces the total number of collected responses while maintaining labeling accuracy; we also aim to obtain the model with an 'unsupervised' approach, which does not require training through prior experience of posting microtasks for data labeled with ground truths. Our simulation of annotating livestock surveillance images demonstrated that our approach achieved i) learning performance comparable to that of a supervised approach requiring model training with labeled data, and ii) a significant cost reduction without degrading accuracy compared to simple majority voting.
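For readers unfamiliar with the baseline discussed above, the following is a minimal, illustrative sketch (not the paper's proposed method) of simple majority voting, which aggregates the responses of multiple workers for a single microtask into one final label; the example labels and tie-breaking behavior are assumptions for illustration only.

```python
from collections import Counter


def majority_vote(responses):
    """Return the most frequent label among worker responses for one item.

    `responses` is a list of labels collected for a single microtask,
    e.g. ["cow", "cow", "empty"]. Ties are broken arbitrarily here,
    which is one of several possible conventions.
    """
    counts = Counter(responses)
    return counts.most_common(1)[0][0]


# Hypothetical example: three workers label one surveillance image.
print(majority_vote(["cow", "cow", "empty"]))  # -> "cow"
```

Collecting more responses per item generally makes the aggregated label more reliable but increases cost, which is the trade-off the proposed dynamic posting model aims to manage.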