Research

Note: This “Research” page is no longer cutting-edge, as I stopped pursuing this direction after my Ph.D. days. It remains here as book-keeping.


I finished my Ph.D. in the Department of Computer Science at the University of Washington, advised by Brian Curless, David Salesin, and Michael Cohen.
My research addresses the challenge of drawing an additional degree of realism out of the original input. This includes adding motion back into a single picture, enlarging the field of view of input videos, and enabling depth and parallax perception for images and panoramas. My research projects are listed below.

Parallax Photography: Creating 3D Cinematic Effects from Stills [paper], [tr], [video], [slides]
with Alex Colburn et al.; Michael AJ Sweeney Award for Best Student Paper
We present an approach to convert a small portion of a light field with extracted depth into a cinematic effect with simulated, smooth camera motion exhibiting 3D parallax. We develop a taxonomy of cinematic conventions of these effects, distilled from documentary film footage and organized by the number of subjects. Our automatic, content-aware approach takes an input light field, identifies subjects using a face detector, and optimizes for a camera path that conforms to a cinematic convention, maximizes apparent parallax, and avoids missing information in the input.
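For intuition, here is a minimal, hypothetical sketch of the path-selection idea: candidate camera paths are scored by the apparent parallax they produce on the subject, with a penalty for straying far from the captured viewpoints. The function name, the random search, and the weights are illustrative assumptions, not the optimization used in the paper.

```python
# Hypothetical sketch: pick a straight-line virtual camera path that maximizes
# apparent parallax on the subject while staying near captured viewpoints.
import numpy as np

def choose_camera_path(view_positions, subject_depth, n_candidates=500, seed=0):
    """view_positions: (N, 2) captured camera positions on the camera plane.
    subject_depth: distance to the detected subject (e.g., from a face box)."""
    rng = np.random.default_rng(seed)
    lo, hi = view_positions.min(axis=0), view_positions.max(axis=0)
    best_score, best_path = -np.inf, None
    for _ in range(n_candidates):
        a, b = rng.uniform(lo, hi, size=(2, 2))        # candidate path endpoints
        parallax = np.linalg.norm(b - a) / subject_depth  # parallax ~ baseline / depth
        # Penalize endpoints far from any captured view, a crude stand-in for
        # "avoid missing information in the input".
        gap = max(np.linalg.norm(view_positions - a, axis=1).min(),
                  np.linalg.norm(view_positions - b, axis=1).min())
        score = parallax - 5.0 * gap                   # weights are arbitrary
        if score > best_score:
            best_score, best_path = score, (a, b)
    return best_path
```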

Layered Depth Panoramas [paper], [results]
with Sing Bing Kang et al.
Representations for interactive photorealistic visualization of scenes range from compact 2D panoramas to data-intensive 4D light fields. We propose a technique for creating a layered representation from a sparse set of images taken with a hand-held camera. This representation, which we call a layered depth panorama (LDP), allows the user to experience 3D by off-axis panning. It combines the compelling experience of panoramas with limited 3D navigation. We demonstrate our approach on a variety of complex outdoor and indoor scenes.
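As a rough illustration of off-axis panning with depth layers, the sketch below shifts each layer by a disparity proportional to inverse depth and composites back-to-front. It is a generic layered-parallax compositor under simplified assumptions (one representative depth per layer, horizontal panning, wrap-around shifting), not the LDP renderer from the paper.

```python
# Minimal sketch of off-axis parallax rendering from depth layers.
import numpy as np

def render_offaxis(layers, pan):
    """layers: list of (rgba, depth) sorted far-to-near; rgba is an
    (H, W, 4) float array in [0, 1]. pan: horizontal viewpoint offset."""
    h, w, _ = layers[0][0].shape
    out = np.zeros((h, w, 3))
    for rgba, depth in layers:                 # composite back-to-front
        shift = int(round(pan / depth * w))    # disparity ~ 1 / depth
        moved = np.roll(rgba, shift, axis=1)   # crude horizontal reprojection
        alpha = moved[..., 3:4]
        out = moved[..., :3] * alpha + out * (1.0 - alpha)
    return out
```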

Spatio-Angular Resolution Tradeoff in Integral Photography [paper], [results]
with Todor Georgiev et al.
An integral camera samples the 4D light field of a scene within a single photograph. However, the resolution limit of image sensors is the largest barrier to capturing a light field at high spatial and angular resolution simultaneously. For many real-world scenes, we argue that it is beneficial to trade angular resolution for higher spatial resolution. The missing angular resolution is then interpolated using techniques from computer vision. We developed a prototype integral camera that follows this idea and used it to capture a variety of light fields.
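To make the tradeoff concrete, here is a small sketch assuming an idealized integral photograph in which every p×p block of sensor pixels lies behind one lenslet (the actual prototype uses a different optical layout). With a fixed sensor budget, choosing p angular samples per axis leaves only 1/p of the resolution per axis for each spatial view.

```python
# Sketch of the spatio-angular sampling tradeoff for an idealized
# integral photograph: one p x p lenslet image per p x p sensor block.
import numpy as np

def subaperture_views(raw, p):
    """raw: (H, W) sensor image with H and W divisible by p.
    Returns a (p, p, H//p, W//p) stack of angular views: H*W sensor pixels
    are split into p*p views of (H/p) x (W/p) pixels each, so higher
    angular resolution p directly costs spatial resolution."""
    h, w = raw.shape
    views = raw.reshape(h // p, p, w // p, p)   # indexed as (y, v, x, u)
    return views.transpose(1, 3, 0, 2)          # reorder to (v, u, y, x)
```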

Piecewise Image Registration in the Presence of Multiple Large Motions [paper]
with Pravin Bhat, Noah Snavely, et al.
Many real-world scenes contain multiple objects that undergo independent motions. Two photographs of such a scene, taken at different moments in time and from different viewpoints, will contain motions much larger than most current optical flow techniques can handle. We estimate a dense correspondence between two images in the presence of multiple large motions (including camera motion as well as object motion) using a two-stage algorithm. We demonstrate our method on a variety of scenes containing significant amounts of motion.
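As a hedged illustration of the two-stage idea (not the algorithm from the paper), the sketch below first fits a dominant camera motion with robust feature matching and then estimates the residual per-pixel motion on the stabilized pair, using standard OpenCV building blocks.

```python
# Generic two-stage large-motion registration sketch, for illustration only.
import cv2
import numpy as np

def two_stage_register(img1, img2):
    g1, g2 = (cv2.cvtColor(i, cv2.COLOR_BGR2GRAY) for i in (img1, img2))

    # Stage 1: dominant (camera) motion via ORB features + RANSAC homography.
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    # Stage 2: warp by the dominant motion, then dense residual flow for
    # independently moving objects.
    stabilized = cv2.warpPerspective(g1, H, (g1.shape[1], g1.shape[0]))
    flow = cv2.calcOpticalFlowFarneback(stabilized, g2, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return H, flow
```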

Panoramic Video Textures [paper], [video]
with Aseem Agarwala, Chris Pal, et al.
From scratching to painting, from photography to videography, humans have tried numerous ways to capture and communicate a sense of the world's beauty. Among these attempts, panoramic photographs and video textures are particularly effective. A panoramic video texture, the combination of the two, is even more immersive: its large field of view and endless motion convey a strong sense of “being there,” a truly enhanced visual experience of a scene. We present a system to capture, construct, and render this new medium.

Animating Pictures with Stochastic Motion Textures [paper], [video]
with Yung-Yu Chuang, Dan Goldman, et al.
We started this project without much confidence in animating single stills, yet the results are definitely promising. Although the types of animatable elements are limited (tree branches, flowers, water, clouds, and boats that respond to wind) and some human intervention (matting, inpainting, and authoring) is needed, the subtle movements add significant liveliness back to the still pictures. It even works for paintings of complex scenes. Simple as it is, I hope this opens the door to a largely unexplored area of computer graphics research.

Motion Capture from Single Camera Image Sequence
with Harlan Hile
We started with the great ambition of capturing the classic movements of Charlie Chaplin from old footage, yet the project did not bloom after two years of endeavour. To summarize: the unknown space has too many degrees of freedom for any convex optimization method to resolve the ambiguities. It was a bitter experience, but it made me grow. In any case, the approach, as well as the not-quite-baked results, is still worth mentioning. I'll come back to this some day. I promise!