DynIBaR Can Freeze Time
Written by David Conrad   
Sunday, 01 October 2023

DynIBaR, short for Neural Dynamic Image-Based Rendering, is a new approach to synthesizing novel views from mobile phone video footage. Not only does the technique eliminate blur and shake, it can even produce bullet time effects, freezing time while sweeping the camera around to highlight a dramatic moment.

DynIBaR freeze

The paper “DynIBaR: Neural Dynamic Image-Based Rendering” comes from Google Research and was awarded a best paper honorable mention at the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition.

To set the scene, the researchers refer to recent advances in computer vision techniques that reconstruct and render static (non-moving) 3D scenes, but point out that most of the videos people capture with their mobile devices depict moving objects, such as people, pets and cars, which lead to blurry, inaccurate results when subjected to standard view synthesis methods.

Referring to recent methods that use space-time neural radiance fields, such as the Dynamic NeRFs developed at Cornell University by a team including some of the same researchers, we are told that such approaches still exhibit inherent limitations that prevent their application to casually captured, in-the-wild videos. In particular, they struggle to render high-quality novel views from videos featuring long duration, uncontrolled camera paths and complex object motion. This is because the entire moving scene has to be stored in an MLP (MultiLayer Perceptron) data structure, whose capacity is fixed no matter how long the video is. The improvement DynIBaR achieves over these methods is striking.
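To see why a single MLP is a bottleneck, it helps to contrast the two query models: a space-time NeRF answers every query about the whole scene from one fixed-capacity network, while image-based rendering assembles each output sample from features drawn from nearby source frames at render time. The toy sketch below illustrates the contrast - all names and shapes are invented for illustration, and this is in no way the DynIBaR implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny stand-in for a space-time NeRF: one fixed set of weights must
# answer f(x, y, z, t) -> (r, g, b, density) for every moment of the video,
# so a long video's content competes for the same limited capacity.
W1 = rng.normal(size=(4, 32))   # input: (x, y, z, t)
W2 = rng.normal(size=(32, 4))   # output: (r, g, b, density)

def mlp_radiance(xyzt):
    """Query the whole space-time scene from one fixed-capacity network."""
    h = np.tanh(xyzt @ W1)
    return h @ W2

# Image-based rendering instead blends features gathered from nearby
# source frames at render time, so capacity scales with the video itself.
def ibr_aggregate(frame_features, weights):
    """Blend per-frame features (hypothetical shapes) into one sample."""
    weights = weights / weights.sum()
    return (frame_features * weights[:, None]).sum(axis=0)

sample = mlp_radiance(np.array([0.1, 0.2, 0.3, 0.5]))
blended = ibr_aggregate(rng.normal(size=(5, 4)), np.ones(5))
print(sample.shape, blended.shape)  # (4,) (4,)
```

The key point is structural: in the second approach nothing about the scene lives inside the network weights, which is what frees DynIBaR from the duration limits of MLP-based dynamic NeRFs.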

The video effects achieved by DynIBaR include:

  • “Bullet time” effects - time is paused while the camera moves at normal speed around the scene.
  • Video stabilization - smoother output with higher rendering fidelity and fewer artifacts, such as flickering or blurring.
  • Simultaneous view synthesis and slow motion - video inputs can be rendered as smooth 5X slow-motion videos along novel camera paths.
  • Depth of field effects - high-quality video bokeh, synthesized with dynamically changing depth of field.
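The depth of field effect rests on a simple idea: blur each pixel in proportion to its distance from a chosen focal plane. As a rough illustration only - a minimal 2D post-process sketch with invented names, not the paper's pipeline, which synthesizes bokeh during view synthesis - a depth-dependent blur might look like:

```python
import numpy as np

def depth_of_field(image, depth, focal_depth, max_radius=3):
    """Box-blur each pixel with a radius based on |depth - focal_depth|.

    image: (h, w, 3) float array; depth: (h, w) per-pixel depth map.
    Pixels on the focal plane are left sharp; blur grows with distance.
    """
    h, w = depth.shape
    out = np.empty_like(image)
    for y in range(h):
        for x in range(w):
            # Blur radius proportional to distance from the focal plane.
            r = int(round(min(abs(depth[y, x] - focal_depth) * max_radius,
                              max_radius)))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean(axis=(0, 1))
    return out
```

Sweeping `focal_depth` across the frames of a video is what gives the dynamically changing depth of field described above.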

All of these are demonstrated in this video:

More Information 

DynIBaR: Space-time view synthesis from videos of dynamic scenes

DynIBaR: Neural Dynamic Image-Based Rendering
by Zhengqi Li, Qianqian Wang, Forrester Cole, Richard Tucker and Noah Snavely

Related Articles

Generate 3D Flythroughs from Still Photos

Animating Flow In Still Photos

Synthesizing The Bigger Picture






