Optimization-based data generation for photo enhancement

Mayu Omiya, Yusuke Horiuchi, Edgar Simo-Serra, Satoshi Iizuka, Hiroshi Ishikawa

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

4 Citations (Scopus)

Abstract

Preparing large amounts of high-quality training data has always been a bottleneck for supervised learning methods, and it is especially time-consuming for complicated tasks such as photo enhancement. A recent approach eases data annotation by creating realistic training data automatically through optimization. In this paper, we improve upon this approach by learning an image-similarity metric which, in combination with a Covariance Matrix Adaptation optimization method, allows us to create higher-quality training data for enhancing photos. We evaluate our approach on challenging real-world photo-enhancement images by conducting a perceptual user study, which shows that its performance compares favorably with existing approaches.
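The idea of fitting enhancement parameters by optimizing against a similarity metric can be illustrated with a toy sketch. Everything below is an illustrative assumption, not the paper's implementation: the three-parameter brightness/contrast/gamma operator `enhance`, the plain-L2 `similarity_loss` standing in for the learned image-similarity metric, and the heavily simplified CMA-ES loop (rank-mu covariance update only, fixed step size, no evolution paths).

```python
import numpy as np

def enhance(img, params):
    """Toy enhancement operator: brightness offset, contrast gain, gamma.

    A hypothetical stand-in for the enhancement model being fit;
    the paper's actual parameterization is not reproduced here.
    """
    b, c, g = params
    out = np.clip(c * img + b, 1e-6, 1.0)
    return np.clip(out ** np.exp(g), 0.0, 1.0)

def similarity_loss(a, b):
    """Placeholder for the learned image-similarity metric: plain L2."""
    return float(np.mean((a - b) ** 2))

def cma_es(loss, dim, sigma=0.3, iters=60, seed=0):
    """Minimal CMA-ES sketch: sample, rank, recombine, update covariance."""
    rng = np.random.default_rng(seed)
    lam = 4 + int(3 * np.log(dim))          # population size
    mu = lam // 2                           # number of selected parents
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                            # log-rank recombination weights
    mean = np.zeros(dim)
    C = np.eye(dim)
    for _ in range(iters):
        A = np.linalg.cholesky(C)
        zs = rng.standard_normal((lam, dim))
        xs = mean + sigma * zs @ A.T        # samples ~ N(mean, sigma^2 C)
        order = np.argsort([loss(x) for x in xs])
        y = zs[order[:mu]] @ A.T            # best steps in parameter space
        mean = mean + sigma * (w @ y)       # weighted recombination
        # simplified rank-mu covariance update (no evolution paths)
        C = 0.8 * C + 0.2 * sum(wi * np.outer(yi, yi) for wi, yi in zip(w, y))
    return mean

# Example: recover the parameters that map a "raw" image to a target.
rng = np.random.default_rng(1)
raw = rng.random((16, 16))
target = enhance(raw, np.array([0.1, 1.2, 0.3]))
found = cma_es(lambda p: similarity_loss(enhance(raw, p), target), dim=3)
```

In the paper's setting, the loss would instead be the learned similarity between the optimized output and a reference, so that the resulting (input, parameters) pairs serve as automatically generated training data.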

Original language: English
Title of host publication: Proceedings - 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2019
Publisher: IEEE Computer Society
Pages: 1890-1898
Number of pages: 9
ISBN (Electronic): 9781728125060
DOIs
Publication status: Published - 2019 Jun
Event: 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2019 - Long Beach, United States
Duration: 2019 Jun 16 - 2019 Jun 20

Publication series

Name: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
Volume: 2019-June
ISSN (Print): 2160-7508
ISSN (Electronic): 2160-7516

Conference

Conference: 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2019
Country/Territory: United States
City: Long Beach
Period: 19/6/16 - 19/6/20

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering
