This paper explores the applicability of conditional generative adversarial networks (GANs) to audio-to-audio translation problems and proposes a neural network architecture for this task. Recent advances have shown that causal convolutions, when stacked with exponentially increasing dilation factors, can model raw audio effectively, in contrast to earlier techniques that relied on recurrent networks. Embedding such convolutions within a conditional GAN enables the targeted generation of raw audio conditioned on a given input. The resulting architecture can then learn to simulate translative operations applied to an input signal, addressing the problem of converting one audio signal into another with different characteristics. We also propose a novel discriminator structure for evaluating generated audio.
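The dilated causal convolutions referred to above (popularized by WaveNet-style models) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the kernel values and signal length are arbitrary choices for demonstration. It shows the two key properties: each output sample depends only on current and past inputs (causality), and stacking layers with dilations 1, 2, 4, 8 grows the receptive field exponentially with depth.

```python
import numpy as np

def causal_dilated_conv(x, kernel, dilation):
    """1-D causal convolution with a given dilation factor.

    Computes y[t] = sum_j kernel[j] * x[t - j * dilation], zero-padding
    on the left so no output sample depends on future inputs.
    """
    k = len(kernel)
    pad = (k - 1) * dilation              # left-pad to preserve causality
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    y = np.zeros(len(x))
    for t in range(len(x)):
        for j in range(k):
            y[t] += kernel[j] * xp[t + pad - j * dilation]
    return y

# A unit impulse makes the receptive field visible: L layers of
# kernel size 2 with dilations 1, 2, 4, ..., 2**(L-1) cover 2**L
# past samples, while samples before the impulse stay untouched.
x = np.zeros(16)
x[8] = 1.0
h = x
for d in (1, 2, 4, 8):
    h = causal_dilated_conv(h, [0.5, 0.5], d)

assert np.allclose(h[:8], 0.0)            # causality: no leakage into the past
assert h[8] > 0                           # the impulse itself is still covered
```

In a real model each layer would also apply a nonlinearity and learned weights; the point here is only the dilation pattern that lets convolutional stacks cover the long contexts that recurrent models previously handled.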