To reuse existing algorithms from the 2D vision world, it is sometimes necessary to generate synthetic 2D images from a 3D point cloud. For arbitrary viewpoints this of course requires graphics rendering techniques (setting up camera parameters and so on). Point cloud data saved by a depth camera, however, is a special case: it is an organized point cloud, and the point cloud library PCL provides some related interfaces and examples, e.g.
https://github.com/PointCloudLibrary/pcl/blob/master/tools/pcd2png.cpp
Note carefully that these interfaces require organized point cloud data; for RGBD point clouds from which the NaN points have been removed, they can no longer be used.
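The reason is that an organized cloud stores its points in a row-major width × height grid, so each point maps 1:1 to an image pixel; once NaN points are removed the cloud becomes unorganized (height = 1) and that mapping is lost. A minimal sketch of the index arithmetic (`pointIndex` is a hypothetical helper for illustration, not a PCL function):

```cpp
#include <cstddef>

// In an organized cloud, the point for image pixel (u, v) lives at
// row-major index v * width + u; this 1:1 mapping is what lets the
// extractors fill an image by walking the cloud linearly.
std::size_t pointIndex(std::size_t u, std::size_t v, std::size_t width) {
    return v * width + u;
}
```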
PointCloudImageExtractor
and its several subclasses; for details see
https://pointclouds.org/documentation/classpcl_1_1io_1_1_point_cloud_image_extractor.html
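A minimal usage sketch, assuming PCL is installed and `cloud` is an organized `pcl::PointCloud<pcl::PointNormal>`; `saveNormalImage` is a hypothetical wrapper, while `extract` and the `savePNGFile` overload come from PCL's io module:

```cpp
#include <string>

#include <pcl/io/point_cloud_image_extractors.h>
#include <pcl/io/png_io.h>
#include <pcl/point_types.h>

// Sketch: render the normal field of an organized cloud as an rgb8 image.
void saveNormalImage(const pcl::PointCloud<pcl::PointNormal>& cloud,
                     const std::string& filename) {
    pcl::io::PointCloudImageExtractorFromNormalField<pcl::PointNormal> extractor;
    pcl::PCLImage image;
    if (extractor.extract(cloud, image))  // returns false for unorganized clouds
        pcl::io::savePNGFile(filename, image);
}
```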
```cpp
template <typename PointT> bool
pcl::io::PointCloudImageExtractorFromNormalField<PointT>::extractImpl (const PointCloud& cloud, pcl::PCLImage& img) const
{
  std::vector<pcl::PCLPointField> fields;
  int field_x_idx = pcl::getFieldIndex<PointT> ("normal_x", fields);
  int field_y_idx = pcl::getFieldIndex<PointT> ("normal_y", fields);
  int field_z_idx = pcl::getFieldIndex<PointT> ("normal_z", fields);
  if (field_x_idx == -1 || field_y_idx == -1 || field_z_idx == -1)
    return (false);
  const std::size_t offset_x = fields[field_x_idx].offset;
  const std::size_t offset_y = fields[field_y_idx].offset;
  const std::size_t offset_z = fields[field_z_idx].offset;

  img.encoding = "rgb8";
  img.width = cloud.width;
  img.height = cloud.height;
  img.step = img.width * sizeof (unsigned char) * 3;
  img.data.resize (img.step * img.height);

  for (std::size_t i = 0; i < cloud.size (); ++i)
  {
    float x;
    float y;
    float z;
    pcl::getFieldValue<PointT, float> (cloud[i], offset_x, x);
    pcl::getFieldValue<PointT, float> (cloud[i], offset_y, y);
    pcl::getFieldValue<PointT, float> (cloud[i], offset_z, z);
    img.data[i * 3 + 0] = static_cast<unsigned char>((x + 1.0) * 127);
    img.data[i * 3 + 1] = static_cast<unsigned char>((y + 1.0) * 127);
    img.data[i * 3 + 2] = static_cast<unsigned char>((z + 1.0) * 127);
  }

  return (true);
}
```
Taking the normal-field class as an example, the implementation is straightforward: it basically assigns the normal at each index directly to the corresponding pixel value in the image, after scaling to the byte range.
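Concretely, each normal component n ∈ [-1, 1] is mapped to a byte via (n + 1) * 127, giving values in [0, 254]. A self-contained sketch of that packing loop, without the PCL scaffolding (`normalToByte` and `normalsToRgb8` are hypothetical helpers for illustration):

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Map one normal component from [-1, 1] to [0, 254], as in extractImpl.
unsigned char normalToByte(float n) {
    return static_cast<unsigned char>((n + 1.0f) * 127.0f);
}

// Pack per-point normals into a flat rgb8 buffer (3 bytes per point).
std::vector<unsigned char>
normalsToRgb8(const std::vector<std::array<float, 3>>& normals) {
    std::vector<unsigned char> data(normals.size() * 3);
    for (std::size_t i = 0; i < normals.size(); ++i)
        for (std::size_t c = 0; c < 3; ++c)
            data[i * 3 + c] = normalToByte(normals[i][c]);
    return data;
}
```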