Deep Learning

Video Stabilization Algorithm

전두선 2020. 2. 6. 01:56

1. Mesh-Flow-Video-Stabilization

https://github.com/sudheerachary/Mesh-Flow-Video-Stabilization

http://www.liushuaicheng.org/eccv2016/meshflow.pdf

 

MeshFlow is a spatially smooth, sparse motion field with motion vectors only at the mesh vertices. It is produced by assigning each vertex a unique motion vector via two median filters. Path smoothing is conducted on the vertex profiles, which are the motion vectors collected at the same vertex location in the MeshFlow over time. The profiles are smoothed adaptively by a novel technique, Predicted Adaptive Path Smoothing (PAPS), which only uses motions from the past.
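
A rough sketch of the two-median-filter idea (my own toy illustration, not the repo's code; the function names, the 100-pixel radius, and the use of scipy are assumptions):

import numpy as np
from scipy.ndimage import median_filter

def assign_vertex_motions(vertices, feature_pts, feature_motions, radius=100.0):
    # First median filter (f1): each mesh vertex takes the median of the
    # motion vectors of tracked features within `radius` pixels of it.
    vertex_motions = np.zeros_like(vertices, dtype=np.float64)
    for i, v in enumerate(vertices):
        dists = np.linalg.norm(feature_pts - v, axis=1)
        near = feature_motions[dists < radius]
        if len(near):
            vertex_motions[i] = np.median(near, axis=0)  # median per x/y component
    return vertex_motions

def spatial_median_smooth(motion_grid, k=3):
    # Second median filter (f2): spatial median filtering over the mesh grid
    # of vertex motions; motion_grid has shape (rows, cols, 2).
    out = np.empty_like(motion_grid)
    for c in range(2):
        out[..., c] = median_filter(motion_grid[..., c], size=k)
    return out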

 

  • python 2
  • opencv
  • Verified results on our own data in addition to the provided data

 

 

 

 


 

2. deep-online-video-stabilization-deploy

https://github.com/cxjyxxme/deep-online-video-stabilization-deploy

https://ieeexplore.ieee.org/document/8554287

 

Abstract:

Video stabilization techniques are essential for most hand-held captured videos due to high-frequency shakes. Several 2D-, 2.5D-, and 3D-based stabilization techniques have been presented previously, but to the best of our knowledge, no solutions based on deep neural networks had been proposed to date. The main reason for this omission is shortage in training data as well as the challenge of modeling the problem using neural networks. In this paper, we present a video stabilization technique using a convolutional neural network. Previous works usually propose an off-line algorithm that smoothes a holistic camera path based on feature matching. Instead, we focus on low-latency, real-time camera path smoothing that does not explicitly represent the camera path and does not use future frames. Our neural network model, called StabNet, learns a set of mesh-grid transformations progressively for each input frame from the previous set of stabilized camera frames and creates stable corresponding latent camera paths implicitly. To train the network, we collect a dataset of synchronized steady and unsteady video pairs via a specially designed hand-held hardware. Experimental results show that our proposed online method performs comparatively to the traditional off-line video stabilization methods without using future frames while running about 10 times faster. More importantly, our proposed StabNet is able to handle low-quality videos, such as night-scene videos, watermarked videos, blurry videos, and noisy videos, where the existing methods fail in feature extraction or matching.
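
Conceptually, the online setting looks something like the loop below (my own sketch; `stabnet_predict_mesh` is a hypothetical stand-in for the trained network, not this repo's actual API):

import cv2
import numpy as np

def stabilize_stream(frames, stabnet_predict_mesh, history_len=5):
    # Online loop: the model only sees previously *stabilized* frames plus the
    # current unsteady frame, predicts a mesh-grid warp, and never uses future frames.
    history = []
    for frame in frames:
        if len(history) < history_len:
            history.append(frame)   # warm-up: pass early frames through unchanged
            yield frame
            continue
        map_x, map_y = stabnet_predict_mesh(history, frame)   # per-pixel warp maps
        stabilized = cv2.remap(frame, map_x.astype(np.float32),
                               map_y.astype(np.float32), cv2.INTER_LINEAR)
        history = history[1:] + [stabilized]
        yield stabilized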

 

  • python3
  • deep-learning
  • real-time
  • NVIDIA GPU (12 GB or 24 GB memory) + CUDA + cuDNN
  • tensorflow-gpu==1.13.0
  • numpy
  • ...
  • Tested only with the provided data so far; not yet tested with our own data

 

 

 

 


3. Python Video Stabilization

https://github.com/AdamSpannbauer/python_video_stab

 

  • python 3
  • opencv
  • Verified results on our own data in addition to the provided data

testing

from vidstab import VidStab

stabilizer = VidStab()

# black borders
stabilizer.stabilize(input_path='input_video1.avi', 
                     output_path='result/black.avi', 
                     border_type='black')

# filled in borders
stabilizer = VidStab()
stabilizer.stabilize(input_path='input_video1.avi', 
                     output_path='result/reflect.avi', 
                     border_type='reflect')
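
The library also exposes a frame-by-frame interface, which is handy when frames arrive from a live capture; a rough sketch based on the project README (argument names may differ slightly between vidstab versions):

import cv2
from vidstab import VidStab

stabilizer = VidStab()
cap = cv2.VideoCapture('input_video1.avi')

while True:
    grabbed, frame = cap.read()
    if not grabbed:
        break
    # Returns an all-black frame until the smoothing window has been filled.
    stabilized = stabilizer.stabilize_frame(input_frame=frame, smoothing_window=30)
    cv2.imshow('stabilized', stabilized)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()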

 

 

 

 

Result clips: input / black (black borders) / reflect (filled-in borders)
