RiFeGAN2: Rich Feature Generation for Text-to-Image Synthesis from Constrained Prior Knowledge

Jun Cheng, Fuxiang Wu*, Yanling Tian, Lei Wang, Dapeng Tao

*Corresponding author for this work

Research output: Article › peer-review

4 Citations (Scopus)

Abstract

Text-to-image synthesis is a challenging task that generates realistic images from a textual description. The description contains limited information compared with the corresponding image and is ambiguous and abstract, which complicates generation and leads to low-quality images. To address this problem, we propose a novel text-to-image synthesis method, called RiFeGAN2, that enriches the given description. To improve enrichment quality while accelerating the enrichment process, RiFeGAN2 exploits a domain-specific constrained model to limit the search scope and then uses an attention-based caption-matching model to refine the compatible candidate captions based on constrained prior knowledge. To improve semantic consistency between the given description and the synthesized results, RiFeGAN2 employs improved SAEMs, called SAEM2s, which compact better features from the retrieved captions and effectively emphasize the given description by incorporating centre-attention layers. Finally, multi-caption attentional GANs synthesize images from those features. Experiments on widely used datasets show that the model can generate vivid images from enriched captions and effectively improve semantic consistency.
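The abstract describes a pipeline of attention-based caption matching followed by centre-attention feature compaction around the given description. The sketch below illustrates those two ideas in PyTorch; all module names, dimensions, pooling choices, and the scoring scheme are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of two components suggested by the abstract (assumed design):
# 1) attention-based matching to score candidate captions against the query,
# 2) a centre-attention layer that fuses candidate features around the query.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CaptionMatcher(nn.Module):
    """Scores domain-constrained candidate captions against the query caption."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, query_tokens: torch.Tensor, cand_tokens: torch.Tensor) -> torch.Tensor:
        # query_tokens: (1, Lq, dim); cand_tokens: (N, Lc, dim)
        q = query_tokens.expand(cand_tokens.size(0), -1, -1)
        attended, _ = self.attn(q, cand_tokens, cand_tokens)  # query attends to each candidate
        # Compatibility = cosine similarity between pooled query and attended summary.
        return F.cosine_similarity(q.mean(dim=1), attended.mean(dim=1), dim=-1)  # (N,)


class CentreAttention(nn.Module):
    """Re-weights retrieved-caption features around the given description
    (the 'centre'), so the original caption stays emphasized in the fusion."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, centre: torch.Tensor, features: torch.Tensor) -> torch.Tensor:
        # centre: (B, dim); features: (B, N, dim)
        scores = torch.einsum("bd,bnd->bn", self.proj(centre), features)
        weights = scores.softmax(dim=-1)
        fused = torch.einsum("bn,bnd->bd", weights, features)
        return fused + centre  # keep the given description dominant


if __name__ == "__main__":
    dim = 256
    query = torch.randn(1, 12, dim)        # encoded tokens of the given caption
    pool = torch.randn(50, 16, dim)        # domain-constrained candidate pool
    scores = CaptionMatcher(dim)(query, pool)
    top = pool[scores.topk(5).indices]     # keep the 5 most compatible captions
    fused = CentreAttention(dim)(query.mean(dim=1), top.mean(dim=1).unsqueeze(0))
    print(fused.shape)                     # torch.Size([1, 256])
```

The fused feature would then condition the multi-caption attentional GAN stage; that generator is omitted here since the abstract gives no detail about it.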

Original language: English
Pages (from-to): 5187-5200
Number of pages: 14
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 32
Issue number: 8
DOI
Publication status: Published - 1 Aug 2022
Externally published: Yes

ASJC Scopus subject areas

  • Media Technology
  • Electrical and Electronic Engineering
