Controllable Image-to-Video Translation: A Case Study on Facial Expression Generation
Lijie Fan, Wenbing Huang, Chuang Gan et al.
We propose a user-controllable approach that generates video clips of facial expressions from a single face image, where the user specifies both the type of expression and the length of the clip. To this end, we design a novel neural network architecture that incorporates the user input into its skip connections, and we propose several improvements to the network's adversarial training procedure. Experiments and user studies verify the effectiveness of our approach.
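One plausible way to realize "incorporating the user input into skip connections" is to spatially replicate a control vector (e.g. an expression-type one-hot code plus a normalized clip length) and concatenate it to an encoder feature map before passing it to the decoder. The sketch below is an illustrative assumption, not the paper's exact mechanism; the control encoding and fusion-by-concatenation choice are hypothetical.

```python
import numpy as np

def condition_skip(skip_feat, control):
    """Fuse a user control vector into an encoder skip feature map.

    skip_feat: (C, H, W) encoder feature map.
    control:   (K,) control vector, e.g. expression-type one-hot plus
               normalized target length (hypothetical encoding).
    Returns a (C + K, H, W) array passed along the skip connection.
    """
    k = control.shape[0]
    _, h, w = skip_feat.shape
    # Replicate the control vector over the spatial grid, then
    # concatenate it to the feature map along the channel axis.
    ctrl_map = np.broadcast_to(control[:, None, None], (k, h, w))
    return np.concatenate([skip_feat, ctrl_map], axis=0)

rng = np.random.default_rng(0)
feat = rng.standard_normal((64, 16, 16))
# One-hot expression type (3 classes) + normalized clip length 0.5.
control = np.array([0.0, 1.0, 0.0, 0.5])
fused = condition_skip(feat, control)
print(fused.shape)  # (68, 16, 16)
```

Concatenation keeps the original features untouched while giving every decoder location direct access to the user's control signal; alternatives such as feature-wise modulation would serve the same purpose.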