Training recurrent neural network language models (RNNLMs) requires a large amount of data, which is difficult to collect for specific domains such as multiparty conversation. Data augmentation using external resources and model adaptation, which adjusts a model trained on a large amount of data to a target domain, have been proposed for low-resource language modeling. However, although the source and target domains have both commonalities and discrepancies in the statistics of words and their contexts, these conventional methods handle the commonalities and discrepancies in an entangled manner. We propose novel domain adaptation techniques for RNNLMs that introduce domain-shared and domain-specific word embeddings and contextual features. This explicit modeling of the commonalities and discrepancies is expected to improve language modeling performance. Experimental comparisons using multiparty conversation data as the target domain and lecture data as the source domain demonstrate that the proposed domain adaptation method improves perplexity and word error rate over a long short-term memory based language model (LSTMLM) trained on both the source and target domain data.
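As a rough illustration of the shared/specific idea (not the paper's exact architecture), the following minimal PyTorch sketch combines a domain-shared embedding table with a per-domain embedding table before an LSTM language model; the class name, dimensions, and the simple concatenation scheme are assumptions for exposition only.

```python
# Hypothetical sketch: an LSTM LM whose input word representation concatenates
# a domain-shared embedding with a domain-specific embedding (one table per
# domain). All names and sizes are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class SharedSpecificLSTMLM(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim, num_domains):
        super().__init__()
        # Embedding table shared across all domains (captures commonalities).
        self.shared_emb = nn.Embedding(vocab_size, emb_dim)
        # One embedding table per domain (captures domain discrepancies).
        self.domain_embs = nn.ModuleList(
            nn.Embedding(vocab_size, emb_dim) for _ in range(num_domains)
        )
        self.lstm = nn.LSTM(2 * emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, word_ids, domain_id):
        # word_ids: (batch, time) token indices; domain_id: int domain index.
        e_shared = self.shared_emb(word_ids)
        e_domain = self.domain_embs[domain_id](word_ids)
        x = torch.cat([e_shared, e_domain], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)  # next-word logits per time step
```

Under this sketch, the shared table is updated by data from every domain while each domain-specific table is updated only by its own domain's data, which is one simple way to keep common and domain-dependent statistics explicitly separated.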