DynIBaR Can Freeze Time
Written by David Conrad   
Sunday, 01 October 2023

DynIBaR, aka Neural Dynamic Image-Based Rendering, is a new approach to synthesizing novel views from mobile phone video footage. Not only does the technique eliminate blur and shake, it can even produce bullet time effects, freezing time while sweeping the camera around to highlight a dramatic moment.


The paper “DynIBaR: Neural Dynamic Image-Based Rendering” comes from Google Research and was awarded a best paper honorable mention at the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).

To set the scene, the researchers refer to recent advances in computer vision techniques for reconstructing and rendering static (non-moving) 3D scenes, but point out that most of the videos people capture with their mobile devices depict moving objects, such as people, pets and cars, which lead to blurry, inaccurate results when subjected to standard view synthesis methods.

Referring to recent methods that use space-time neural radiance fields, such as the Dynamic NeRFs developed at Cornell University by a team including some of the same researchers, we are told that such approaches still exhibit inherent limitations that prevent their application to casually captured, in-the-wild videos. In particular, they struggle to render high-quality novel views from videos featuring long durations, uncontrolled camera paths and complex object motion. This is because the entire moving scene has to be stored in a single MLP (multilayer perceptron). The improvement achieved by DynIBaR is clearly shown in the comparisons on the project page, linked below.
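
To get a feel for what "storing the entire moving scene in an MLP" means, here is a minimal, illustrative sketch (in PyTorch, with made-up layer sizes, and not the authors' code) of a space-time radiance field: a single network that maps a 4D query point (x, y, z, t) to colour and density. Every moment of the video has to be compressed into this one set of weights, which is why such models run out of capacity on long, complex, casually captured clips - the limitation DynIBaR addresses by aggregating image features from nearby frames instead.

# A generic space-time radiance field sketch, not the DynIBaR or
# Dynamic NeRF implementation; layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class SpaceTimeRadianceField(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),      # input: (x, y, z, t)
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                 # output: (r, g, b, density)
        )

    def forward(self, xyzt):
        # One forward pass answers "what is at this point in space, at this
        # moment of the video?" - the whole clip lives in the weights.
        return self.mlp(xyzt)

field = SpaceTimeRadianceField()
query = torch.tensor([[0.1, -0.2, 1.5, 0.3]])     # x, y, z, normalized time
rgb_and_density = field(query)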

The video effects achieved by DynIBaR include:

  • “Bullet time” effects - time is paused while the camera sweeps around the scene at normal speed.
  • Video stabilization - smoother output with higher rendering fidelity and fewer artifacts such as flickering or blurring.
  • Simultaneous view synthesis and slow motion - input video can be rendered as smooth 5X slow motion along novel camera paths.
  • Depth of field effects - high-quality video bokeh, produced by synthesizing videos with dynamically changing depth of field.

All of these effects are demonstrated in the videos on the project page, linked below.
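
The first and third of those effects are conceptually simple once you have a renderer that can produce a frame for an arbitrary camera pose at an arbitrary, even fractional, point in time. The sketch below is illustrative pseudocode only - render_view is a hypothetical stand-in for such a trained model, not DynIBaR's actual API.

# Illustrative only: render_view is a placeholder for a trained
# DynIBaR-style renderer; its name and signature are assumptions.
import numpy as np

def render_view(camera_pose, t):
    """Placeholder: return an H x W x 3 frame for the given pose and time."""
    return np.zeros((480, 640, 3), dtype=np.float32)

def bullet_time(poses, frozen_t):
    # Freeze scene time and sweep the virtual camera along a new path.
    return [render_view(pose, frozen_t) for pose in poses]

def slow_motion(pose, n_frames, factor=5):
    # Sample fractional time indices to get a 5X slow-motion clip.
    times = np.linspace(0, n_frames - 1, num=(n_frames - 1) * factor + 1)
    return [render_view(pose, float(t)) for t in times]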

More Information 

DynIBaR: Space-time view synthesis from videos of dynamic scenes

DynIBaR: Neural Dynamic Image-Based Rendering
by Zhengqi Li, Qianqian Wang, Forrester Cole, Richard Tucker and Noah Snavely

Related Articles

Generate 3D Flythroughs from Still Photos

Animating Flow In Still Photos

Synthesizing The Bigger Picture


