We propose a variable reward scheme for decentralized multi-agent deep reinforcement learning in a sequential task, which is completed only when agents with different capabilities execute all of its subtasks in a certain order before a deadline. Developments in computer science and robotics are drawing attention to multi-agent systems for complex tasks. However, coordinated behavior among agents is difficult to achieve and highly dependent on the structures of tasks and environments; thus, it is preferable for agents to individually learn the coordination appropriate to their specific task. This study focuses on the learning of such a sequential task by cooperative agents from a practical perspective. In these tasks, agents must learn both efficiency in their own subtasks and coordinated behavior toward other agents: the former gives subsequent agents more chances to learn, while the latter facilitates the execution of subsequent subtasks. Our proposed reward scheme enables agents to learn these behaviors in a balanced manner. We experimentally show that agents under the proposed reward scheme achieve more efficient task execution than baseline methods based on static reward schemes. We also analyze the learned coordinated behavior to identify the sources of this efficiency.