Research Project
![]() | DTSGAN: Learning Dynamic Textures via Spatiotemporal Generative Adversarial Networks Advisor: Prof. Ming-Hsuan Yang 1. Proposed a spatiotemporal generative network that learns dynamic textures from a single video clip. 2. Demonstrated that the proposed algorithm performs favorably against state-of-the-art methods. 3. Designed an encoder that allows the unconditional model to transform an input frame into a video sequence. |
Course Project
![]() | CS 194-26 Final Project: Neural Style Transfer The work of Gatys et al. demonstrated the capability of Convolutional Neural Networks (CNNs) to create artistic-style images. This process of using CNNs to render content images in different styles is referred to as Neural Style Transfer (NST). In this work, we re-implement image-based NST, fast NST, arbitrary NST, and human-to-anime face transfer. We also extend the algorithms to transfer daytime scenes to night, mix different styles, and create artistic-style videos. Website |
![]() | Images of the Russian Empire We demonstrate a fully automated colorization approach that separates the three color channels and applies image-processing techniques to align them, reproducing full-color images. Website |
![]() | Fun with Filters and Frequencies We implement a Gaussian filter and use it to straighten images. We also blend different images together by manipulating their frequency content. Website |
![]() | Face Morphing We implement face warping with triangulation, produce a caricature of our face, and make a music video. Website |
![]() | [Auto]Stitching Photo Mosaics In the first part, we select keypoints manually and blend three images together. In the second part, we implement ANMS, feature matching, and RANSAC to automatically find keypoints and blend the images into a panorama. Website |
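The RANSAC step used in the stitching project above can be sketched as follows. This is a minimal illustrative sketch, not the project's actual code: the function names, iteration count, and inlier threshold are assumptions, and the homography is estimated with a plain (unnormalized) DLT least-squares fit.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate a 3x3 homography H (dst ~ H @ src) via DLT least squares."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The null vector of A (last right-singular vector) gives H up to scale.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def project(H, pts):
    """Apply homography H to an Nx2 array of points."""
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

def ransac_homography(src, dst, n_iters=500, thresh=2.0, seed=0):
    """Robustly fit a homography from noisy matches (src, dst: Nx2 arrays)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iters):
        # Fit a candidate model on a minimal sample of 4 correspondences.
        idx = rng.choice(len(src), 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        # Keep the model with the largest consensus set.
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers of the best candidate for the final estimate.
    return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers
```

With outlier matches present, the recovered homography (after normalizing by its bottom-right entry, since DLT determines it only up to scale) matches the ground truth computed from the inliers alone.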