PCL China: Point Cloud Technology Industry-Academia-Research Community

Views: 7497 | Replies: 5

An upgraded version of KinFu

Posted on 2015-12-01 22:36:47
https://github.com/Nerei/kinfu_remake
KinFu remake
This is a lightweight, reworked, and optimized version of KinFu, which was originally shared in PCL in 2011.
Key changes/features:
  • Performance has been improved by a factor of 1.6× (tested on Fermi).
  • Code size is drastically reduced and readability is improved.
  • No hardcoded algorithm parameters: all of them (volume size, etc.) can be changed at runtime.
  • The code no longer depends on the OpenCV GPU module or the PCL library.
Dependencies:
  • A Fermi, Kepler, or newer GPU
  • CUDA 5.0 or higher
  • OpenCV 2.4.9 with the new Viz module (only the opencv_core, opencv_highgui, opencv_imgproc, and opencv_viz modules are required). Make sure the WITH_VTK flag is enabled in CMake when configuring OpenCV.
  • OpenNI v1.5.4 (for Windows, download and install from http://pointclouds.org/downloads/windows.html)
Implicit dependency (needed by opencv_viz):
  • VTK 5.8.0 or higher (on Linux, install via apt-get; on Windows, download and compile from www.vtk.org)



OP | Posted on 2015-12-01 22:43:56
https://github.com/victorprad/InfiniTAM
InfiniTAM is an open-source, multi-platform framework for real-time, large-scale depth fusion and tracking, released under an Oxford Isis Innovation academic license. The framework currently supports dense volumes (using an implementation based on the KinectFusion paper by Newcombe et al.) and sparse volumes (using an implementation based on our ISMAR 2015 paper).

We are part of the Oxford Active Vision Library.

News

18/09/2015 - InfiniTAM is now part of the Oxford Active Vision Library.

30/07/2015 - InfiniTAM v2 released: over 10 times faster than v1; iOS and Android versions; export to STL; many many other fixes and improvements.

Why use it?

InfiniTAM was designed with a focus on extensibility, reusability and portability:

Depending on the scene, processing runs at over 1000 fps on a single NVIDIA Titan X graphics card, and in real time on iOS (over 25 fps) and NVIDIA K1-based Android devices (over 40 fps).

The world is captured either densely (using a full 3D volume) or sparsely (using small voxel blocks indexed by a hash-table). The design of the framework also enables other representations, such as octrees, to be added easily.

InfiniTAM swaps memory between CPU and GPU in real-time, which allows for virtually infinite environments to be built.

We provide C++ code for both CPU and GPU implementations (NVIDIA CUDA and Apple Metal) and most of it is reused between the various implementations.

The framework allows for easy integration of components that either replace existing ones or extend the capabilities of the system (e.g. 3D object detection, new tracker, etc.).

The code compiles natively on Windows, Linux, Mac OS X, iOS and Android.

The core processing library has no dependencies for the CPU version and needs only CUDA for the GPU one. The user interface requires only OpenGL and GLUT. Depth can be sourced from image files or, optionally, from OpenNI 2.

What to cite?

Please cite our ISMAR paper in your publications:

@article{InfiniTAM_ISMAR_2015,
author = {Kahler, O. and Prisacariu, V.~A. and Ren, C.~Y. and
          Sun, X. and Torr, P.~H.~S and Murray, D.~W.},
title = "{Very High Frame Rate Volumetric Integration of Depth Images on Mobile Devices}",
journal = "{IEEE Transactions on Visualization and Computer Graphics
       (Proceedings of the International Symposium on Mixed and Augmented Reality 2015)}",
volume = {22},
number = {11},
year = 2015
}
If you use the technical report version of InfiniTAM, please also cite:

@article{2014arXiv1410.0925P,
    author = {Prisacariu, V.~A. and Kahler, O. and Cheng, M.~M. and Ren, C.~Y.
        and Valentin, J. and Torr, P.~H.~S. and Reid, I.~D. and Murray, D.~W.},
    title = "{A Framework for the Volumetric Integration of Depth Images}",
    journal = {ArXiv e-prints},
    archivePrefix = "arXiv",
    eprint = {1410.0925},
    year = 2014
}
and the respective dense / sparse volumetric approach that you use.

@Inproceedings {export:155378,
    author = {Richard A. Newcombe and Shahram Izadi and
        Otmar Hilliges and David Molyneaux and David Kim and        
        Andrew J. Davison and Pushmeet Kohli and Jamie Shotton and
        Steve Hodges and Andrew Fitzgibbon},
    booktitle = {IEEE ISMAR},
    publisher = {IEEE},
    title = {KinectFusion: Real-Time Dense Surface Mapping and Tracking},
    year = {2011}
}
@article{niessner2013hashing,
    author = {Nie{\ss}ner, M. and Zollh\"ofer, M. and Izadi, S. and Stamminger, M.},
    title = {Real-time 3D Reconstruction at Scale using Voxel Hashing},
    journal = {ACM Transactions on Graphics (TOG)},
    publisher = {ACM},
    year = {2013}
}

Posted on 2015-12-03 17:39:56
In what ways is the upgraded version an improvement?

OP | Posted on 2015-12-04 15:41:51
xiaoji2014 posted on 2015-12-03 17:39:
In what ways is the upgraded version an improvement?

Performance has been improved by a factor of 1.6× (tested on Fermi).
Code size is drastically reduced and readability is improved.
No hardcoded algorithm parameters: all of them (volume size, etc.) can be changed at runtime.
The code no longer depends on the OpenCV GPU module or the PCL library.

OP | Posted on 2015-12-04 20:27:48

Posted on 2016-12-06 15:11:37
Hello! How do I compile and run this kinfu_remake?
