https://github.com/victorprad/InfiniTAM
InfiniTAM is an Open Source, multi-platform framework for real-time, large-scale depth fusion and tracking, released under an Oxford Isis Innovation Academic License. The framework currently supports dense volumes (using an implementation based on the Newcombe et al. KinectFusion paper) and sparse volumes (using an implementation based on our ISMAR 2015 paper).
We are part of the Oxford Active Vision Library.
News
18/09/2015 - InfiniTAM is now part of the Oxford Active Vision Library.
30/07/2015 - InfiniTAM v2 released: over 10 times faster than v1; iOS and Android versions; export to STL; many many other fixes and improvements.
Why use it?
InfiniTAM was designed with a focus on extensibility, reusability and portability:
Depending on the scene, processing runs at over 1000fps on a single NVIDIA Titan X graphics card and real-time on iOS (over 25fps) and NVIDIA K1-based Android devices (over 40fps).
The world is captured either densely (using a full 3D volume) or sparsely (using small voxel blocks indexed by a hash-table). The design of the framework also enables other representations, such as octrees, to be added easily.
InfiniTAM swaps memory between CPU and GPU in real-time, which allows for virtually infinite environments to be built.
We provide C++ code for both CPU and GPU implementations (NVIDIA CUDA and Apple Metal) and most of it is reused between the various implementations.
The framework allows for easy integration of components that either replace existing ones or extend the capabilities of the system (e.g. 3D object detection, new tracker, etc.).
The code compiles natively on Windows, Linux, Mac OS X, iOS and Android.
The core processing library has no dependencies for the CPU version and requires only CUDA for the GPU one. The user interface requires only OpenGL and GLUT. Depth can be sourced from image files or, optionally, using OpenNI 2.
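To illustrate the sparse representation mentioned above, here is a simplified sketch of voxel blocks indexed by a hash of their integer block coordinates, in the spirit of the Niessner et al. voxel-hashing approach. All names (`Voxel`, `VoxelBlock`, `voxelAt`, the block size of 8) are illustrative assumptions for this sketch, not the actual InfiniTAM API, which uses its own hash table and GPU memory management rather than `std::unordered_map`.

```cpp
#include <array>
#include <cstdint>
#include <unordered_map>

// A single TSDF voxel: truncated signed distance plus a fusion weight.
struct Voxel {
    float sdf = 1.0f;      // truncated signed distance value
    uint8_t weight = 0;    // number of depth observations fused so far
};

constexpr int kBlockSize = 8;  // voxels per side of a block (8x8x8)

// A small, densely stored block of voxels, allocated only where surface
// data exists -- this is what makes the volume "sparse".
struct VoxelBlock {
    std::array<Voxel, kBlockSize * kBlockSize * kBlockSize> voxels;
};

struct BlockCoord {
    int x, y, z;
    bool operator==(const BlockCoord& o) const {
        return x == o.x && y == o.y && z == o.z;
    }
};

// Classic spatial hash over block coordinates (Teschner-style constants,
// as used in the voxel-hashing literature).
struct BlockHash {
    std::size_t operator()(const BlockCoord& c) const {
        return static_cast<std::size_t>(
            (c.x * 73856093) ^ (c.y * 19349663) ^ (c.z * 83492791));
    }
};

using SparseVolume = std::unordered_map<BlockCoord, VoxelBlock, BlockHash>;

// Fetch the voxel at global voxel coordinates, allocating its containing
// block on first touch (on-demand allocation).
Voxel& voxelAt(SparseVolume& vol, int vx, int vy, int vz) {
    // Floor division, correct for negative coordinates.
    auto fdiv = [](int a, int b) {
        return (a >= 0) ? a / b : -((-a + b - 1) / b);
    };
    BlockCoord bc{fdiv(vx, kBlockSize), fdiv(vy, kBlockSize), fdiv(vz, kBlockSize)};
    VoxelBlock& block = vol[bc];  // allocates the block if absent
    int lx = vx - bc.x * kBlockSize;
    int ly = vy - bc.y * kBlockSize;
    int lz = vz - bc.z * kBlockSize;
    return block.voxels[(lz * kBlockSize + ly) * kBlockSize + lx];
}
```

Because memory is only allocated where blocks are touched, the volume can cover a large environment while storing only a small fraction of it, which is also what makes CPU/GPU swapping of individual blocks practical.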
What to cite?
Please cite our ISMAR paper in your publications:
@article{InfiniTAM_ISMAR_2015,
author = {Kahler, O. and Prisacariu, V.~A. and Ren, C.~Y. and
Sun, X. and Torr, P.~H.~S. and Murray, D.~W.},
title = "{Very High Frame Rate Volumetric Integration of Depth Images on Mobile Devices}",
journal = "{IEEE Transactions on Visualization and Computer Graphics
(Proceedings International Symposium on Mixed and Augmented Reality 2015)}",
volume = {22},
number = {11},
year = 2015
}
If you use the technical report version of InfiniTAM, please also cite:
@article{2014arXiv1410.0925P,
author = {Prisacariu, V.~A. and Kahler, O. and Cheng, M.~M. and Ren, C.~Y.
and Valentin, J. and Torr, P.~H.~S. and Reid, I.~D. and Murray, D.~W.},
title = "{A Framework for the Volumetric Integration of Depth Images}",
journal = {ArXiv e-prints},
archivePrefix = "arXiv",
eprint = {1410.0925},
year = 2014
}
Please also cite the respective dense / sparse volumetric approach that you use:
@inproceedings{export:155378,
author = {Richard A. Newcombe and Shahram Izadi and
Otmar Hilliges and David Molyneaux and David Kim and
Andrew J. Davison and Pushmeet Kohli and Jamie Shotton and
Steve Hodges and Andrew Fitzgibbon},
booktitle = {IEEE ISMAR},
publisher = {IEEE},
title = {KinectFusion: Real-Time Dense Surface Mapping and Tracking},
year = {2011}
}
@article{niessner2013hashing,
author = {Nie{\ss}ner, M. and Zollh\"ofer, M. and Izadi, S. and Stamminger, M.},
title = {Real-time 3D Reconstruction at Scale using Voxel Hashing},
journal = {ACM Transactions on Graphics (TOG)},
publisher = {ACM},
year = {2013}
}