We investigated the emergence of conventions for conflict resolution in agent networks with various structures through pairwise reinforcement learning. Coordinated agents inevitably encounter conflict situations in the course of their actions, and resolving these conflicts is complex and computationally expensive because of the mutual analysis of subsequent actions by both agents and the communication costs of their interactions. Norms and conventions are expected to reduce these costs by regulating agent actions in recurrent conflicts. This paper describes a typical conflict situation as a Markov game and investigates whether agents with a certain attitude toward conflicts can learn conventions in agent networks with complex structures. We first examined the emergence of conventions and their characteristics in fully connected networks. We then compared these results with those from other network structures, such as Barabasi-Albert (BA) and connecting nearest-neighbor (CNN) networks. We found that network structure strongly affected the emergence of conventions and that agents sometimes learned no convention at all, although they could still learn locally consistent actions for conflict resolution.
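As a rough illustration of the kind of setting the abstract describes (not the authors' actual model), the sketch below has two independent Q-learning agents repeatedly play a hypothetical 2x2 "yield/go" conflict game; a convention corresponds to the pair settling on one complementary action profile. The payoff values, learning rate, and exploration schedule are all assumptions chosen for the example.

```python
import random

# Hypothetical 2x2 conflict game: both "go" collide, both "yield"
# waste the turn, and a complementary pair resolves the conflict.
ACTIONS = ["go", "yield"]

def payoff(a, b):
    # Returns (reward to agent 1, reward to agent 2); values are assumed.
    if a == "go" and b == "go":
        return -2.0, -2.0          # collision: costly for both
    if a == "yield" and b == "yield":
        return 0.0, 0.0            # deadlock: nobody advances
    return (2.0, 1.0) if a == "go" else (1.0, 2.0)

def train(episodes=20000, alpha=0.1, eps_decay=0.9995, seed=0):
    rng = random.Random(seed)
    # Stateless Q-values, one table per agent.
    q1 = {a: 0.0 for a in ACTIONS}
    q2 = {a: 0.0 for a in ACTIONS}
    eps = 1.0
    for _ in range(episodes):
        # Epsilon-greedy action selection for each agent independently.
        a1 = rng.choice(ACTIONS) if rng.random() < eps else max(q1, key=q1.get)
        a2 = rng.choice(ACTIONS) if rng.random() < eps else max(q2, key=q2.get)
        r1, r2 = payoff(a1, a2)
        # Incremental update toward the observed reward.
        q1[a1] += alpha * (r1 - q1[a1])
        q2[a2] += alpha * (r2 - q2[a2])
        eps *= eps_decay
    # The greedy actions after training are the learned "convention".
    return max(q1, key=q1.get), max(q2, key=q2.get)

a1, a2 = train()
print(a1, a2)
```

In network variants of this setup, each agent would maintain such values per neighbor (or a shared policy) and play only with adjacent agents, which is where the structure-dependent effects reported in the abstract arise.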