Hi, I am Yen-Cheng Liu!

I am a research assistant in the Vision and Learning Lab, supervised by Prof. Yu-Chiang Frank Wang. My research interests lie in computer vision and machine learning, particularly representation learning and transfer learning.

Here are my CV and ResearchGate.
ycliu93 [AT]

M.S. in EE
National Taiwan University
Sep '15 - June '17

Graduate Research
Academia Sinica
May '16 - June '17

Erasmus Program
Technische Universität München
Sep '14 - Mar '15

B.S. in ECE
National Chiao Tung University
Sep '11 - June '15


Detach and Adapt: Learning Cross-Domain Disentangled Deep Representation

2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR'18 Spotlight)
Yen-Cheng Liu, Yu-Ying Yeh, Tzu-Chien Fu, Sheng-De Wang, Wei-Chen Chiu, Yu-Chiang Frank Wang
While representation learning aims to derive interpretable features for describing visual data, representation disentanglement further refines such features so that particular image attributes can be identified and manipulated. However, this task cannot be easily addressed without ground-truth annotations for the training data. To address this problem, we propose a novel deep learning model, the Cross-Domain Representation Disentangler (CDRD). By observing fully annotated source-domain data and unlabeled target-domain data of interest, our model bridges the information across data domains and transfers the attribute information accordingly, so that cross-domain feature disentanglement and adaptation can be jointly performed.

Anonymous Title

2018 European Conference on Computer Vision (ECCV'18; under review)
Yu-Ying Yeh, Yen-Cheng Liu, Wei-Chen Chiu, Yu-Chiang Frank Wang

Single-Image Depth Estimation with Semantics Consistency

2018 IEEE International Conference on Image Processing (ICIP'18; under review)
Po-Yi Chen, Yen-Cheng Liu, Yu-Ying Yeh, Yu-Chiang Frank Wang
We propose a weakly-supervised deep learning framework that performs single-image depth estimation without observing ground-truth depth/disparity information during training. To learn the proposed network, we only require stereo image pairs and images with the associated foreground segmentation outputs. Moreover, our network exploits and preserves semantics consistency across the above image data, so that improved depth estimation can be obtained.

Domain-Adaptive Generative Adversarial Networks for Sketch-to-Photo Inversion

IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP'17)
Yen-Cheng Liu, Wei-Chen Chiu, Sheng-De Wang, Yu-Chiang Frank Wang
In this paper, we propose a novel deep learning model, Domain-Adaptive Generative Adversarial Networks (DA-GAN). DA-GAN performs cross-style sketch-to-photo inversion, mitigating the differences across input sketch styles without the need to collect a large number of sketch and face image pairs for training.

Side Projects

Jenga Builder

Collaborator: Kilian Contamain, Edwinn Gamborino, Da-Wei Liu
The proposed system aims to rebuild a Jenga® tower from a randomly positioned and relatively large set of pieces on a surface, using only one RGB camera and a 6-DOF robotic manipulator.

Find Your Second-Hand Ticket

Collaborator: Jerry Lee @ Trend Micro Taiwan
We designed a public website for second-hand ticket information exchange. Our back-end system collects the latest second-hand ticket information by integrating government open data and PTT Drama-Ticket.