Main MRPT website > C++ reference for MRPT 1.5.7
obs/CObservation3DRangeScan.h
1 /* +---------------------------------------------------------------------------+
2  | Mobile Robot Programming Toolkit (MRPT) |
3  | http://www.mrpt.org/ |
4  | |
5  | Copyright (c) 2005-2017, Individual contributors, see AUTHORS file |
6  | See: http://www.mrpt.org/Authors - All rights reserved. |
7  | Released under BSD License. See details in http://www.mrpt.org/License |
8  +---------------------------------------------------------------------------+ */
9 #ifndef CObservation3DRangeScan_H
10 #define CObservation3DRangeScan_H
11 
13 #include <mrpt/utils/CImage.h>
14 #include <mrpt/obs/CObservation.h>
15 #include <mrpt/obs/CObservation2DRangeScan.h>
16 #include <mrpt/obs/TRangeImageFilter.h>
17 #include <mrpt/poses/CPose3D.h>
18 #include <mrpt/poses/CPose2D.h>
19 #include <mrpt/math/CPolygon.h>
20 #include <mrpt/math/CMatrix.h>
21 #include <mrpt/utils/TEnumType.h>
22 #include <mrpt/utils/adapters.h>
23 #include <mrpt/obs/link_pragmas.h>
25 
26 namespace mrpt
27 {
28 namespace obs
29 {
30  /** Used in CObservation3DRangeScan::project3DPointsFromDepthImageInto() */
31  struct OBS_IMPEXP T3DPointsProjectionParams
32  {
33  bool takeIntoAccountSensorPoseOnRobot; //!< (Default: false) If false, local (sensor-centric) coordinates of points are generated. Otherwise, points are transformed with \a sensorPose. Furthermore, if provided, those coordinates are transformed with \a robotPoseInTheWorld
34  const mrpt::poses::CPose3D *robotPoseInTheWorld; //!< (Default: NULL) Read takeIntoAccountSensorPoseOnRobot
35  bool PROJ3D_USE_LUT; //!< (Default:true) [Only used when `range_is_depth`=true] Whether to use a Look-up-table (LUT) to speed up the conversion. It's thread safe in all situations <b>except</b> when you call this method from different threads <b>and</b> with different camera parameter matrices. In all other cases, it is a good idea to leave it enabled.
36  bool USE_SSE2; //!< (Default:true) If possible, use SSE2 optimized code.
37  bool MAKE_DENSE; //!< (Default:true) set to false if you want to preserve the organization of the point cloud
38  /** (Default:1) If !=1, split the range image in blocks of DxD
39  * (D=decimation), and only generates one point per block, with the minimum
40  * valid range. */
41  uint8_t decimation;
42  T3DPointsProjectionParams() : takeIntoAccountSensorPoseOnRobot(false), robotPoseInTheWorld(NULL), PROJ3D_USE_LUT(true),USE_SSE2(true), MAKE_DENSE(true), decimation(1)
43  {}
44  };
45  /** Used in CObservation3DRangeScan::convertTo2DScan() */
46  struct OBS_IMPEXP T3DPointsTo2DScanParams
47  {
48  std::string sensorLabel; //!< The sensor label that will have the newly created observation.
49  double angle_sup, angle_inf; //!< (Default=5 degrees) [Only if use_origin_sensor_pose=false] The upper & lower half-FOV angle (in radians).
50  double z_min,z_max; //!< (Default:-inf, +inf) [Only if use_origin_sensor_pose=true] Only obstacle points with Z coordinates within the range [z_min,z_max] will be taken into account.
51  double oversampling_ratio; //!< (Default=1.2=120%) How many more laser scans rays to create (read docs for CObservation3DRangeScan::convertTo2DScan()).
52 
53  /** (Default:false) If `false`, the conversion will be such that the 2D observation pose on the robot coincides with that in the original 3D range scan.
54  * If `true`, the sensed points will be "reprojected" as seen from a sensor pose at the robot/vehicle frame origin (and angle_sup, angle_inf will be ignored) */
55  bool use_origin_sensor_pose;
56 
57  T3DPointsTo2DScanParams();
58  };
59 
60  namespace detail {
61  // Implemented in CObservation3DRangeScan_project3D_impl.h
62  template <class POINTMAP>
63  void project3DPointsFromDepthImageInto(mrpt::obs::CObservation3DRangeScan & src_obs,POINTMAP & dest_pointcloud, const mrpt::obs::T3DPointsProjectionParams & projectParams, const mrpt::obs::TRangeImageFilterParams &filterParams);
64  }
65 
66  DEFINE_SERIALIZABLE_PRE_CUSTOM_BASE_LINKAGE( CObservation3DRangeScan, CObservation, OBS_IMPEXP )
67 
68  /** Declares a class derived from "CObservation" that encapsulates a 3D range scan measurement, as from a time-of-flight range camera or any other RGBD sensor.
69  *
70  * This kind of observations can carry one or more of these data fields:
71  * - 3D point cloud (as float's).
72  * - Each 3D point has its associated (u,v) pixel coordinates in \a points3D_idxs_x & \a points3D_idxs_y (New in MRPT 1.4.0)
73  * - 2D range image (as a matrix): Each entry in the matrix "rangeImage(ROW,COLUMN)" contains a distance or a depth (in meters), depending on \a range_is_depth.
74  * - 2D intensity (grayscale or RGB) image (as a mrpt::utils::CImage): For SwissRanger cameras, a logarithmic A-law compression is used to convert the original 16bit intensity to a more standard 8bit graylevel.
75  * - 2D confidence image (as a mrpt::utils::CImage): For each pixel, a 0x00 and a 0xFF mean the lowest and highest confidence levels, respectively.
76  * - Semantic labels: Stored as a matrix of bitfields, each bit having a user-defined meaning.
77  *
78  * The coordinates of the 3D point cloud are in meters with respect to the depth camera origin of coordinates
79  * (in SwissRanger, the front face of the camera: a small offset ~1cm in front of the physical focal point),
80  * with the +X axis pointing forward, +Y pointing left and +Z pointing up. By convention, a 3D point with its coordinates set to (0,0,0) will be considered invalid.
81  * The field CObservation3DRangeScan::relativePoseIntensityWRTDepth describes the change of coordinates from
82  * the depth camera to the intensity (RGB or grayscale) camera. In a SwissRanger camera both cameras coincide,
83  * so this pose is just a rotation (0,0,0,-90deg,0,-90deg). But in
84  * Microsoft Kinect there is also an offset, as shown in this figure:
85  *
86  * <div align=center>
87  * <img src="CObservation3DRangeScan_figRefSystem.png">
88  * </div>
89  *
90  * In any case, check the field \a relativePoseIntensityWRTDepth, or the method \a doDepthAndIntensityCamerasCoincide()
91  * to determine if both frames of reference coincide, since even for Kinect cameras both can coincide if the images
92  * have been rectified.
93  *
94  * The 2D images and matrices are stored as common images, with the usual up->down row order and left->right column order.
95  * Optionally, the intensity and confidence channels can be set to delayed-load images for off-rawlog storage so it saves
96  * memory by having loaded in memory just the needed images. See the methods load() and unload().
97  * Due to the intensive storage requirements of this kind of observations, this observation is the only one in MRPT
98  * for which it's recommended to always call "load()" and "unload()" before and after using the observation, *ONLY* when
99  * the observation was read from a rawlog dataset, in order to make sure that all the externally stored data fields are
100  * loaded and ready in memory.
101  *
102  * Classes that grab observations of this type are:
103  * - mrpt::hwdrivers::CSwissRanger3DCamera
104  * - mrpt::hwdrivers::CKinect
105  * - mrpt::hwdrivers::COpenNI2Sensor
106  *
107  * There are two sets of calibration parameters (see mrpt::vision::checkerBoardStereoCalibration() or the ready-to-use GUI program <a href="http://www.mrpt.org/Application:kinect-calibrate" >kinect-calibrate</a>):
108  * - cameraParams: Projection parameters of the depth camera.
109  * - cameraParamsIntensity: Projection parameters of the intensity (gray-level or RGB) camera.
110  *
111  * In some cameras, like SwissRanger, both are the same. It is possible in Kinect to rectify the range images such that both cameras
112  * seem to coincide and then both sets of camera parameters will be identical.
113  *
114  * Range data can be interpreted in two different ways depending on the 3D camera (this field is already set to the
115  * correct setting when grabbing observations from an mrpt::hwdrivers sensor):
116  * - range_is_depth=true -> Kinect-like ranges: entries of \a rangeImage are distances along the +X axis
117  * - range_is_depth=false -> Ranges in \a rangeImage are actual distances in 3D.
118  *
119  * The "intensity" channel may come from different channels in sensors such as the Kinect. Look at field \a intensityImageChannel to
120  * find out if the image was grabbed from the visible (RGB) or IR channels.
121  *
122  * 3D point clouds can be generated at any moment after grabbing with CObservation3DRangeScan::project3DPointsFromDepthImage() and CObservation3DRangeScan::project3DPointsFromDepthImageInto(), provided the correct
123  * calibration parameters. Note that project3DPointsFromDepthImage() will store the point cloud in sensor-centric local coordinates. Use project3DPointsFromDepthImageInto() to directly obtain vehicle or world coordinates.
124  *
125  * Example of how to assign labels to pixels (for object segmentation, semantic information, etc.):
126  *
127  * \code
128  * // Assume obs of type CObservation3DRangeScanPtr
129  * obs->pixelLabels = CObservation3DRangeScan::TPixelLabelInfoPtr( new CObservation3DRangeScan::TPixelLabelInfo<NUM_BYTES>() );
130  * obs->pixelLabels->setSize(ROWS,COLS);
131  * obs->pixelLabels->setLabel(col,row, label_idx); // label_idxs = [0,2^NUM_BYTES-1]
132  * //...
133  * \endcode
134  *
135  * \note Starting at serialization version 2 (MRPT 0.9.1+), the confidence channel is stored as an image instead of a matrix to optimize memory and disk space.
136  * \note Starting at serialization version 3 (MRPT 0.9.1+), the 3D point cloud and the rangeImage can both be stored externally to save rawlog space.
137  * \note Starting at serialization version 5 (MRPT 0.9.5+), the new field \a range_is_depth
138  * \note Starting at serialization version 6 (MRPT 0.9.5+), the new field \a intensityImageChannel
139  * \note Starting at serialization version 7 (MRPT 1.3.1+), new fields for semantic labeling
140  * \note Since MRPT 1.5.0, external files format can be selected at runtime with `CObservation3DRangeScan::EXTERNALS_AS_TEXT`
141  *
142  * \sa mrpt::hwdrivers::CSwissRanger3DCamera, mrpt::hwdrivers::CKinect, CObservation
143  * \ingroup mrpt_obs_grp
144  */
145  class OBS_IMPEXP CObservation3DRangeScan : public CObservation
146  {
147  // This must be added to any CSerializable derived class:
148  DEFINE_SERIALIZABLE( CObservation3DRangeScan )
149 
150  protected:
151  bool m_points3D_external_stored; //!< If set to true, m_points3D_external_file is valid.
152  std::string m_points3D_external_file; //!< 3D points are in CImage::IMAGES_PATH_BASE+<this_file_name>
153 
154  bool m_rangeImage_external_stored; //!< If set to true, m_rangeImage_external_file is valid.
155  std::string m_rangeImage_external_file; //!< rangeImage is in CImage::IMAGES_PATH_BASE+<this_file_name>
156 
157  public:
158  CObservation3DRangeScan( ); //!< Default constructor
159  virtual ~CObservation3DRangeScan( ); //!< Destructor
160 
161  /** @name Delayed-load manual control methods.
162  @{ */
163  /** Makes sure all images and other fields which may be externally stored are loaded in memory.
164  * Note that for all CImages, calling load() is not required, since the images will be automatically loaded upon first access; in normal use load() therefore does not need to be called by the user.
165  * If all the data was already loaded, or this object has no externally stored data fields, calling this method has no effect.
166  * \sa unload
167  */
168  virtual void load() const MRPT_OVERRIDE;
169  /** Unload all images, in case they are delayed-load images stored in external files (otherwise, this has no effect).
170  * \sa load
171  */
172  virtual void unload() MRPT_OVERRIDE;
173  /** @} */
174 
175  /** Project the RGB+D images into a 3D point cloud (with color if the target map supports it) and optionally at a given 3D pose.
176  * The 3D point coordinates are computed from the depth image (\a rangeImage) and the depth camera parameters (\a cameraParams).
177  * There exist two sets of formulas for projecting the i'th point, depending on the value of "range_is_depth".
178  * In all formulas below, "rangeImage" is the matrix of ranges and the pixel coordinates are (r,c).
179  *
180  * 1) [range_is_depth=true] With "range equals depth" or "Kinect-like depth mode": the range values
181  * are in fact distances along the "+X" axis, not real 3D ranges (this is the way Kinect reports ranges):
182  *
183  * \code
184  * x(i) = rangeImage(r,c)
185  * y(i) = (r_cx - c) * x(i) / r_fx
186  * z(i) = (r_cy - r) * x(i) / r_fy
187  * \endcode
188  *
189  *
190  * 2) [range_is_depth=false] With "normal ranges": range means distance in 3D. This must be set when
191  * processing data from the SwissRanger 3D camera, among others.
192  *
193  * \code
194  * Ky = (r_cx - c)/r_fx
195  * Kz = (r_cy - r)/r_fy
196  *
197  * x(i) = rangeImage(r,c) / sqrt( 1 + Ky^2 + Kz^2 )
198  * y(i) = Ky * x(i)
199  * z(i) = Kz * x(i)
200  * \endcode
201  *
202  * The color of each point is determined by projecting the 3D local point into the RGB image using \a cameraParamsIntensity.
203  *
204  * By default the local (sensor-centric) coordinates of points are directly stored into the local map, but if indicated so in \a takeIntoAccountSensorPoseOnRobot
205  * the points are transformed with \a sensorPose. Furthermore, if provided, those coordinates are transformed with \a robotPoseInTheWorld
206  *
207  * \tparam POINTMAP Supported maps are all those covered by mrpt::utils::PointCloudAdapter (mrpt::maps::CPointsMap and derived, mrpt::opengl::CPointCloudColoured, PCL point clouds,...)
208  *
209  * \note In MRPT < 0.9.5, this method always assumes that ranges were in Kinect-like format.
210  */
211  template <class POINTMAP>
212  inline void project3DPointsFromDepthImageInto(POINTMAP & dest_pointcloud, const T3DPointsProjectionParams & projectParams, const TRangeImageFilterParams &filterParams = TRangeImageFilterParams() ) {
213  detail::project3DPointsFromDepthImageInto<POINTMAP>(*this,dest_pointcloud,projectParams, filterParams);
214  }
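The two projection formulas documented above can be checked with a small standalone sketch. This is a hypothetical reimplementation for illustration only; `Point3D`, `projectDepth`, `projectRange` and the plain intrinsics `cx, cy, fx, fy` are stand-ins for the values stored in `cameraParams` and are not part of this API:

```cpp
#include <cassert>
#include <cmath>

struct Point3D { double x, y, z; };

// range_is_depth = true (Kinect-like): rangeImage(r,c) is the depth
// along +X, so y/z follow directly from the pinhole model.
inline Point3D projectDepth(double range, int r, int c,
                            double cx, double cy, double fx, double fy)
{
    Point3D p;
    p.x = range;
    p.y = (cx - c) * p.x / fx;
    p.z = (cy - r) * p.x / fy;
    return p;
}

// range_is_depth = false (e.g. SwissRanger): rangeImage(r,c) is the true
// Euclidean distance to the 3D point, so it must be normalized first.
inline Point3D projectRange(double range, int r, int c,
                            double cx, double cy, double fx, double fy)
{
    const double Ky = (cx - c) / fx;
    const double Kz = (cy - r) / fy;
    Point3D p;
    p.x = range / std::sqrt(1.0 + Ky * Ky + Kz * Kz);
    p.y = Ky * p.x;
    p.z = Kz * p.x;
    return p;
}
```

Both variants agree at the principal point, and projecting a depth value and then re-projecting the Euclidean norm of the result with the second formula recovers the same 3D point.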
215 
216  template <class POINTMAP>
217  MRPT_DEPRECATED("DEPRECATED: Use the other method signature with structured parameters instead.")
218  inline void project3DPointsFromDepthImageInto(
219  POINTMAP & dest_pointcloud,
220  const bool takeIntoAccountSensorPoseOnRobot,
221  const mrpt::poses::CPose3D *robotPoseInTheWorld=NULL,
222  const bool PROJ3D_USE_LUT=true,
223  const mrpt::math::CMatrix * rangeMask_min = NULL
224  )
225  {
226  T3DPointsProjectionParams pp;
227  pp.takeIntoAccountSensorPoseOnRobot = takeIntoAccountSensorPoseOnRobot;
228  pp.robotPoseInTheWorld = robotPoseInTheWorld;
229  pp.PROJ3D_USE_LUT = PROJ3D_USE_LUT;
230  TRangeImageFilterParams fp;
231  fp.rangeMask_min=rangeMask_min;
232  detail::project3DPointsFromDepthImageInto<POINTMAP>(*this,dest_pointcloud,pp,fp);
233  }
234 
235  /** This method is equivalent to \c project3DPointsFromDepthImageInto() storing the projected 3D points (without color, in local sensor-centric coordinates) in this same class.
236  * For new code it's recommended to use instead \c project3DPointsFromDepthImageInto() which is much more versatile. */
237  inline void project3DPointsFromDepthImage(const bool PROJ3D_USE_LUT=true) {
238  T3DPointsProjectionParams p;
239  p.takeIntoAccountSensorPoseOnRobot = false;
240  p.PROJ3D_USE_LUT = PROJ3D_USE_LUT;
241  this->project3DPointsFromDepthImageInto(*this,p);
242  }
243 
244  /** Convert this 3D observation into an "equivalent 2D fake laser scan", with a configurable vertical FOV.
245  *
246  * The result is a 2D laser scan with more "rays" (N) than the 3D observation has columns (W), exactly: N = W * oversampling_ratio.
247  * This oversampling is required since laser scans sample the space at evenly-separated angles, while
248  * a range camera follows a tangent-like distribution. By oversampling we make sure we don't leave "gaps" unseen by the virtual "2D laser".
249  *
250  * All obstacles within a frustum are considered and the minimum distance is kept in each direction.
251  * The horizontal FOV of the frustum is automatically computed from the intrinsic parameters, but the
252  * vertical FOV must be provided by the user, and can be set to be asymmetric, which may be useful
253  * depending on the zone of interest in which to look for obstacles.
254  *
255  * All spatial transformations are rigorously taken into account in this class, using the depth camera
256  * intrinsic calibration parameters.
257  *
258  * The timestamp of the new object is copied from the 3D object.
259  * Obviously, a prerequisite for calling this method is that the 3D observation has range data,
260  * i.e. hasRangeImage must be true. Neither RGB data nor the raw 3D point cloud is needed
261  * for this method to work.
262  *
263  * If `scanParams.use_origin_sensor_pose` is `true`, the points will be projected to 3D and then reprojected
264  * as seen from a different sensorPose at the vehicle frame origin. Otherwise (the default), the output 2D observation will share the sensorPose of the input 3D scan
265  * (using a more efficient algorithm that avoids trigonometric functions).
266  *
267  * \param[out] out_scan2d The resulting 2D equivalent scan.
268  *
269  * \sa The example in http://www.mrpt.org/tutorials/mrpt-examples/example-kinect-to-2d-laser-demo/
270  */
271  void convertTo2DScan(mrpt::obs::CObservation2DRangeScan & out_scan2d, const T3DPointsTo2DScanParams &scanParams, const TRangeImageFilterParams &filterParams = TRangeImageFilterParams() );
272 
273  MRPT_DEPRECATED("DEPRECATED: Use the other method signature with structured parameters instead.")
274  void convertTo2DScan(
275  mrpt::obs::CObservation2DRangeScan & out_scan2d,
276  const std::string & sensorLabel,
277  const double angle_sup = mrpt::utils::DEG2RAD(5),
278  const double angle_inf = mrpt::utils::DEG2RAD(5),
279  const double oversampling_ratio = 1.2,
280  const mrpt::math::CMatrix * rangeMask_min = NULL
281  );
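The oversampling and "minimum distance per direction" rules described above can be sketched independently of MRPT. This is a deliberately simplified, hypothetical illustration: `fakeScanFromPoints`, its uniform angular binning, and the flat XY input are assumptions for the sketch, not this class's actual implementation:

```cpp
#include <cassert>
#include <cmath>
#include <limits>
#include <utility>
#include <vector>

// Bin projected obstacle points into N = W * oversampling_ratio evenly
// spaced angular "rays", keeping the closest range seen in each direction.
std::vector<double> fakeScanFromPoints(
    const std::vector<std::pair<double, double> >& ptsXY,  // (x fwd, y left)
    int imageWidth, double oversampling_ratio, double fovRad)
{
    const int nRays = static_cast<int>(imageWidth * oversampling_ratio);
    std::vector<double> ranges(nRays, std::numeric_limits<double>::infinity());
    for (size_t i = 0; i < ptsXY.size(); i++) {
        const double ang = std::atan2(ptsXY[i].second, ptsXY[i].first);
        const double rng = std::hypot(ptsXY[i].first, ptsXY[i].second);
        const int bin = static_cast<int>((ang + fovRad / 2) / fovRad * nRays);
        if (bin >= 0 && bin < nRays && rng < ranges[bin])
            ranges[bin] = rng;  // closest obstacle wins in each direction
    }
    return ranges;
}
```

With oversampling_ratio = 1.2, a 100-column image yields 120 rays, so the tangent-spaced camera columns leave no angular gaps unseen by the virtual 2D laser.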
282 
283  /** Whether external files (3D points, range and confidence) are to be
284  * saved as `.txt` text files (MATLAB compatible) or `*.bin` binary (faster).
285  * Loading will always determine the type by inspecting the file extension.
286  * \note Default=false
287  **/
288  static bool EXTERNALS_AS_TEXT;
289 
290  /** \name Point cloud
291  * @{ */
292  bool hasPoints3D; //!< true means the field points3D contains valid data.
293  std::vector<float> points3D_x,points3D_y,points3D_z; //!< If hasPoints3D=true, the (X,Y,Z) coordinates of the 3D point cloud detected by the camera. \sa resizePoints3DVectors
294  std::vector<uint16_t> points3D_idxs_x, points3D_idxs_y; //!< If hasPoints3D=true, the (x,y) pixel coordinates for each (X,Y,Z) point in \a points3D_x, points3D_y, points3D_z
295 
296  /** Use this method instead of resizing all three \a points3D_x, \a points3D_y & \a points3D_z to allow the usage of the internal memory pool. */
297  void resizePoints3DVectors(const size_t nPoints);
298  /** @} */
299 
300  /** \name Point cloud external storage functions
301  * @{ */
302  inline bool points3D_isExternallyStored() const { return m_points3D_external_stored; }
303  inline std::string points3D_getExternalStorageFile() const { return m_points3D_external_file; }
304  void points3D_getExternalStorageFileAbsolutePath(std::string &out_path) const;
305  inline std::string points3D_getExternalStorageFileAbsolutePath() const {
306  std::string tmp;
307  points3D_getExternalStorageFileAbsolutePath(tmp);
308  return tmp;
309  }
310  void points3D_convertToExternalStorage( const std::string &fileName, const std::string &use_this_base_dir ); //!< Users won't normally want to call this, it's only used from internal MRPT programs. \sa EXTERNALS_AS_TEXT
311  /** @} */
312 
313  /** \name Range (depth) image
314  * @{ */
315  bool hasRangeImage; //!< true means the field rangeImage contains valid data
316  mrpt::math::CMatrix rangeImage; //!< If hasRangeImage=true, a matrix of floats with the range data as captured by the camera (in meters) \sa range_is_depth
317  bool range_is_depth; //!< true: Kinect-like ranges: entries of \a rangeImage are distances along the +X axis; false: Ranges in \a rangeImage are actual distances in 3D.
318 
319  void rangeImage_setSize(const int HEIGHT, const int WIDTH); //!< Similar to calling "rangeImage.setSize(H,W)" but this method provides memory pooling to speed-up the memory allocation.
320  /** @} */
321 
322  /** \name Range Matrix external storage functions
323  * @{ */
324  inline bool rangeImage_isExternallyStored() const { return m_rangeImage_external_stored; }
325  inline std::string rangeImage_getExternalStorageFile() const { return m_rangeImage_external_file; }
326  void rangeImage_getExternalStorageFileAbsolutePath(std::string &out_path) const;
327  inline std::string rangeImage_getExternalStorageFileAbsolutePath() const {
328  std::string tmp;
329  rangeImage_getExternalStorageFileAbsolutePath(tmp);
330  return tmp;
331  }
332  void rangeImage_convertToExternalStorage( const std::string &fileName, const std::string &use_this_base_dir ); //!< Users won't normally want to call this, it's only used from internal MRPT programs. \sa EXTERNALS_AS_TEXT
333  /** Forces marking this observation as non-externally stored - it does nothing else apart from resetting the corresponding flag (Users won't normally want to call this, it's only used from internal MRPT programs) */
334  void rangeImage_forceResetExternalStorage() { m_rangeImage_external_stored=false; }
335  /** @} */
336 
337 
338  /** \name Intensity (RGB) channels
339  * @{ */
340  /** Enum type for intensityImageChannel */
341  enum TIntensityChannelID
342  {
343  CH_VISIBLE = 0, //!< Grayscale or RGB visible channel of the camera sensor.
344  CH_IR = 1 //!< Infrared (IR) channel
345  };
346 
347  bool hasIntensityImage; //!< true means the field intensityImage contains valid data
348  mrpt::utils::CImage intensityImage; //!< If hasIntensityImage=true, a color or gray-level intensity image of the same size as "rangeImage"
349  TIntensityChannelID intensityImageChannel; //!< The source of the intensityImage; typically the visible channel \sa TIntensityChannelID
350  /** @} */
351 
352  /** \name Confidence "channel"
353  * @{ */
354  bool hasConfidenceImage; //!< true means the field confidenceImage contains valid data
355  mrpt::utils::CImage confidenceImage; //!< If hasConfidenceImage=true, an image with the "confidence" value [range 0-255] as estimated by the capture drivers.
356  /** @} */
357 
358  /** \name Pixel-wise classification labels (for semantic labeling, etc.)
359  * @{ */
360  /** Returns true if the field CObservation3DRangeScan::pixelLabels contains a non-NULL smart pointer.
361  * To enhance a 3D point cloud with labeling info, just assign an appropriate object to \a pixelLabels
362  */
363  bool hasPixelLabels() const { return !(!pixelLabels); }
364 
365  /** Virtual interface to all pixel-label information structs. See CObservation3DRangeScan::pixelLabels */
366  struct OBS_IMPEXP TPixelLabelInfoBase
367  {
368  typedef std::map<uint32_t,std::string> TMapLabelID2Name;
369 
370  /** The 'semantic' or human-friendly name of the i'th bit in pixelLabels(r,c) can be found in pixelLabelNames[i] as a std::string */
371  TMapLabelID2Name pixelLabelNames;
372 
373  const std::string & getLabelName(unsigned int label_idx) const {
374  std::map<uint32_t,std::string>::const_iterator it = pixelLabelNames.find(label_idx);
375  if (it==pixelLabelNames.end()) throw std::runtime_error("Error: label index has no defined name");
376  return it->second;
377  }
378  void setLabelName(unsigned int label_idx, const std::string &name) { pixelLabelNames[label_idx]=name; }
379  /** Check the existence of a label by returning its associated index.
380  * -1 if it does not exist. */
381  int checkLabelNameExistence(const std::string &name) const {
382  std::map<uint32_t,std::string>::const_iterator it;
383  for ( it = pixelLabelNames.begin() ; it != pixelLabelNames.end(); it++ )
384  if ( it->second == name )
385  return it->first;
386  return -1;
387  }
388 
389  /** Resizes the matrix pixelLabels to the given size, setting all bitfields to zero (that is, all pixels are assigned NONE category). */
390  virtual void setSize(const int NROWS, const int NCOLS) =0;
391  /** Mark the pixel(row,col) as classified in the category \a label_idx, which may be in the range 0 to MAX_NUM_LABELS-1
392  * Note that 0 is a valid label index, it does not mean "no label" \sa unsetLabel, unsetAll */
393  virtual void setLabel(const int row, const int col, uint8_t label_idx) =0;
394  virtual void getLabels( const int row, const int col, uint8_t &labels ) =0;
395  /** For the pixel(row,col), removes its classification into the category \a label_idx, which may be in the range 0 to 7
396  * Note that 0 is a valid label index, it does not mean "no label" \sa setLabel, unsetAll */
397  virtual void unsetLabel(const int row, const int col, uint8_t label_idx)=0;
398  /** Removes all categories for pixel(row,col) \sa setLabel, unsetLabel */
399  virtual void unsetAll(const int row, const int col, uint8_t label_idx) =0;
400  /** Checks whether pixel(row,col) has been classified into category \a label_idx, which may be in the range 0 to 7
401  * \sa unsetLabel, unsetAll */
402  virtual bool checkLabel(const int row, const int col, uint8_t label_idx) const =0;
403 
404  void writeToStream(mrpt::utils::CStream &out) const;
405  static TPixelLabelInfoBase* readAndBuildFromStream(mrpt::utils::CStream &in);
406 
407  /// std stream interface
408  friend std::ostream& operator<<( std::ostream& out, const TPixelLabelInfoBase& obj ){
409  obj.Print( out );
410  return out;
411  }
412 
413  TPixelLabelInfoBase(unsigned int BITFIELD_BYTES_) :
414  BITFIELD_BYTES (BITFIELD_BYTES_)
415  {
416  }
417 
418  virtual ~TPixelLabelInfoBase() {}
419 
420  const uint8_t BITFIELD_BYTES; //!< Minimum number of bytes required to hold MAX_NUM_DIFFERENT_LABELS bits.
421 
422  protected:
423  virtual void internal_readFromStream(mrpt::utils::CStream &in) = 0;
424  virtual void internal_writeToStream(mrpt::utils::CStream &out) const = 0;
425  virtual void Print( std::ostream& ) const =0;
426  };
427  typedef std::shared_ptr<TPixelLabelInfoBase> TPixelLabelInfoPtr; //!< Used in CObservation3DRangeScan::pixelLabels
428 
429  template <unsigned int BYTES_REQUIRED_>
430  struct TPixelLabelInfo : public TPixelLabelInfoBase
431  {
432  enum {
433  BYTES_REQUIRED = BYTES_REQUIRED_ // ((MAX_LABELS-1)/8)+1
434  };
435 
436  /** Automatically-determined integer type of the proper size such that all labels fit as one bit (max: 64) */
437  typedef typename mrpt::utils::uint_select_by_bytecount<BYTES_REQUIRED>::type bitmask_t;
438 
439  /** Each pixel may be assigned between 0 and MAX_NUM_LABELS-1 'labels' by
440  * setting to 1 the corresponding i'th bit [0,MAX_NUM_LABELS-1] in the byte in pixelLabels(r,c).
441  * That is, each pixel is assigned an 8*BITFIELD_BYTES bit-wide bitfield of possible categories.
442  * \sa hasPixelLabels
443  */
444  typedef Eigen::Matrix<bitmask_t,Eigen::Dynamic,Eigen::Dynamic> TPixelLabelMatrix;
445  TPixelLabelMatrix pixelLabels;
446 
447  void setSize(const int NROWS, const int NCOLS) MRPT_OVERRIDE {
448  pixelLabels = TPixelLabelMatrix::Zero(NROWS,NCOLS);
449  }
450  void setLabel(const int row, const int col, uint8_t label_idx) MRPT_OVERRIDE {
451  pixelLabels(row,col) |= static_cast<bitmask_t>(1) << label_idx;
452  }
453  void getLabels( const int row, const int col, uint8_t &labels ) MRPT_OVERRIDE
454  {
455  labels = pixelLabels(row,col);
456  }
457 
458  void unsetLabel(const int row, const int col, uint8_t label_idx) MRPT_OVERRIDE {
459  pixelLabels(row,col) &= ~(static_cast<bitmask_t>(1) << label_idx);
460  }
461  void unsetAll(const int row, const int col, uint8_t label_idx) MRPT_OVERRIDE {
462  pixelLabels(row,col) = 0;
463  }
464  bool checkLabel(const int row, const int col, uint8_t label_idx) const MRPT_OVERRIDE {
465  return (pixelLabels(row,col) & (static_cast<bitmask_t>(1) << label_idx)) != 0;
466  }
467 
468  // Ctor: pass identification to parent for deserialization
469  TPixelLabelInfo() : TPixelLabelInfoBase(BYTES_REQUIRED_)
470  {
471  }
472 
473  protected:
474  void internal_readFromStream(mrpt::utils::CStream &in) MRPT_OVERRIDE
475  {
476  {
477  uint32_t nR,nC;
478  in >> nR >> nC;
479  pixelLabels.resize(nR,nC);
480  for (uint32_t c=0;c<nC;c++)
481  for (uint32_t r=0;r<nR;r++)
482  in >> pixelLabels.coeffRef(r,c);
483  }
484  in >> pixelLabelNames;
485  }
486  void internal_writeToStream(mrpt::utils::CStream &out) const MRPT_OVERRIDE
487  {
488  {
489  const uint32_t nR=static_cast<uint32_t>(pixelLabels.rows());
490  const uint32_t nC=static_cast<uint32_t>(pixelLabels.cols());
491  out << nR << nC;
492  for (uint32_t c=0;c<nC;c++)
493  for (uint32_t r=0;r<nR;r++)
494  out << pixelLabels.coeff(r,c);
495  }
496  out << pixelLabelNames;
497  }
498  void Print( std::ostream& out ) const MRPT_OVERRIDE
499  {
500  {
501  const uint32_t nR=static_cast<uint32_t>(pixelLabels.rows());
502  const uint32_t nC=static_cast<uint32_t>(pixelLabels.cols());
503  out << "Number of rows: " << nR << std::endl;
504  out << "Number of cols: " << nC << std::endl;
505  out << "Matrix of labels: " << std::endl;
506  for (uint32_t c=0;c<nC;c++)
507  {
508  for (uint32_t r=0;r<nR;r++)
509  out << pixelLabels.coeff(r,c) << " ";
510 
511  out << std::endl;
512  }
513  }
514  out << std::endl;
515  out << "Label indices and names: " << std::endl;
516  std::map<uint32_t,std::string>::const_iterator it;
517  for ( it = pixelLabelNames.begin(); it != pixelLabelNames.end(); it++)
518  out << it->first << " " << it->second << std::endl;
519  }
520  }; // end TPixelLabelInfo
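The bitfield semantics of setLabel/unsetLabel/checkLabel above reduce to single-bit operations on one pixel's cell. A minimal standalone sketch, with hypothetical helper names and a fixed 64-bit mask instead of the BYTES_REQUIRED-dependent `bitmask_t`:

```cpp
#include <cassert>
#include <cstdint>

typedef uint64_t cell_t;  // one pixel's bitfield; label i lives in bit i

inline void setLabelBit(cell_t& cell, uint8_t i)   { cell |=  (cell_t(1) << i); }
inline void unsetLabelBit(cell_t& cell, uint8_t i) { cell &= ~(cell_t(1) << i); }
inline bool checkLabelBit(cell_t cell, uint8_t i)  { return ((cell >> i) & 1) != 0; }
```

Note that label index 0 is a valid category (bit 0), not "no label": only an all-zero cell means the pixel carries no categories at all.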
521 
522  /** All information about pixel labeling is stored in this (smart pointer to) structure; refer to TPixelLabelInfo for details on the contents
523  * User is responsible for creating a new object of the desired data type. It will be automatically (de)serialized no matter its specific type. */
524  TPixelLabelInfoPtr pixelLabels;
525 
526  /** @} */
527 
528  /** \name Sensor parameters
529  * @{ */
530  mrpt::utils::TCamera cameraParams; //!< Projection parameters of the depth camera.
531  mrpt::utils::TCamera cameraParamsIntensity; //!< Projection parameters of the intensity (graylevel or RGB) camera.
532 
533  /** Relative pose of the intensity camera wrt the depth camera (which is the coordinates origin for this observation).
534  * In a SwissRanger camera, this will be (0,0,0,-90deg,0,-90deg) since both cameras coincide.
535  * In a Kinect, this will include a small lateral displacement and a rotation, according to the drawing on the top of this page.
536  * \sa doDepthAndIntensityCamerasCoincide
537  */
538  mrpt::poses::CPose3D relativePoseIntensityWRTDepth;
539 
540  /** Return true if \a relativePoseIntensityWRTDepth equals the pure rotation (0,0,0,-90deg,0,-90deg) (with a small comparison epsilon)
541  * \sa relativePoseIntensityWRTDepth
542  */
543  bool doDepthAndIntensityCamerasCoincide() const;
544 
545  float maxRange; //!< The maximum range allowed by the device, in meters (e.g. 8.0m, 5.0m,...)
546  mrpt::poses::CPose3D sensorPose; //!< The 6D pose of the sensor on the robot.
547  float stdError; //!< The "sigma" error of the device in meters, used while inserting the scan in an occupancy grid.
548 
549  // See base class docs
550  void getSensorPose( mrpt::poses::CPose3D &out_sensorPose ) const MRPT_OVERRIDE { out_sensorPose = sensorPose; }
551  // See base class docs
552  void setSensorPose( const mrpt::poses::CPose3D &newSensorPose ) MRPT_OVERRIDE { sensorPose = newSensorPose; }
553 
554  /** @} */ // end sensor params
555 
556  // See base class docs
557  void getDescriptionAsText(std::ostream &o) const MRPT_OVERRIDE;
558 
559  void swap(CObservation3DRangeScan &o); //!< Very efficient method to swap the contents of two observations.
560  /** Extract a ROI of the 3D observation as a new one. \note PixelLabels are *not* copied to the output subimage. */
561  void getZoneAsObs( CObservation3DRangeScan &obs, const unsigned int &r1, const unsigned int &r2, const unsigned int &c1, const unsigned int &c2 );
562 
563  /** A Levenberg-Marquardt-based optimizer to recover the calibration parameters of a 3D camera given a range (depth) image and the corresponding 3D point cloud.
564  * \param camera_offset The offset (in meters) in the +X direction of the point cloud. It's 1cm for SwissRanger SR4000.
565  * \return The final average reprojection error per pixel (typ <0.05 px)
566  */
567  static double recoverCameraCalibrationParameters(
568  const CObservation3DRangeScan &in_obs,
569  mrpt::utils::TCamera &out_camParams,
570  const double camera_offset = 0.01 );
571 
572  /** Look-up-table struct for project3DPointsFromDepthImageInto() */
573  struct TCached3DProjTables
574  {
575  mrpt::utils::TCamera prev_camParams;
576  mrpt::math::CVectorFloat Kys,Kzs;
577  };
578  static TCached3DProjTables m_3dproj_lut; //!< 3D point cloud projection look-up-table \sa project3DPointsFromDepthImage
579 
580  }; // End of class def.
581  DEFINE_SERIALIZABLE_POST_CUSTOM_BASE_LINKAGE( CObservation3DRangeScan, CObservation, OBS_IMPEXP )
582 
583 
584  } // End of namespace
585 
586  namespace utils
587  {
588  // Specialization must occur in the same namespace
589  MRPT_DECLARE_TTYPENAME_PTR_NAMESPACE(CObservation3DRangeScan, mrpt::obs)
590 
591  // Enum <-> string converter:
592  template <>
593  struct TEnumTypeFiller<mrpt::obs::CObservation3DRangeScan::TIntensityChannelID>
594  {
595  typedef mrpt::obs::CObservation3DRangeScan::TIntensityChannelID enum_t;
596  static void fill(bimap<enum_t,std::string> &m_map)
597  {
598  m_map.insert(mrpt::obs::CObservation3DRangeScan::CH_VISIBLE, "CH_VISIBLE");
599  m_map.insert(mrpt::obs::CObservation3DRangeScan::CH_IR, "CH_IR");
600  }
601  };
602  }
603 
604  namespace utils
605  {
606  /** Specialization mrpt::utils::PointCloudAdapter<CObservation3DRangeScan> \ingroup mrpt_adapters_grp */
607  template <>
608  class PointCloudAdapter< mrpt::obs::CObservation3DRangeScan> : public detail::PointCloudAdapterHelperNoRGB<mrpt::obs::CObservation3DRangeScan,float>
609  {
610  private:
612  public:
613  typedef float coords_t; //!< The type of each point XYZ coordinates
614  static const int HAS_RGB = 0; //!< Has any color RGB info?
615  static const int HAS_RGBf = 0; //!< Has native RGB info (as floats)?
616  static const int HAS_RGBu8 = 0; //!< Has native RGB info (as uint8_t)?
617 
618  /** Constructor (accept a const ref for convenience) */
619  inline PointCloudAdapter(const mrpt::obs::CObservation3DRangeScan &obj) : m_obj(*const_cast<mrpt::obs::CObservation3DRangeScan*>(&obj)) { }
620  /** Get number of points */
621  inline size_t size() const { return m_obj.points3D_x.size(); }
622  /** Set number of points (to uninitialized values) */
623  inline void resize(const size_t N) {
624  if (N) m_obj.hasPoints3D = true;
625  m_obj.resizePoints3DVectors(N);
626  }
627 
628  /** Get XYZ coordinates of i'th point */
629  template <typename T>
630  inline void getPointXYZ(const size_t idx, T &x,T &y, T &z) const {
631  x=m_obj.points3D_x[idx];
632  y=m_obj.points3D_y[idx];
633  z=m_obj.points3D_z[idx];
634  }
635  /** Set XYZ coordinates of i'th point */
636  inline void setPointXYZ(const size_t idx, const coords_t x,const coords_t y, const coords_t z) {
637  m_obj.points3D_x[idx]=x;
638  m_obj.points3D_y[idx]=y;
639  m_obj.points3D_z[idx]=z;
640  }
641  /** Mark the i'th point as invalid: not supported, since this observation type must remain dense */
642  inline void setInvalidPoint(const size_t idx)
643  {
644  THROW_EXCEPTION("mrpt::obs::CObservation3DRangeScan requires a dense point cloud: invalid points are not supported");
645  }
646 
647  }; // end of PointCloudAdapter<CObservation3DRangeScan>
648  }
649 } // End of namespace
650 
652 
653 #endif
Page generated by Doxygen 1.8.14 for MRPT 1.5.7 Git: 5902e14cc Wed Apr 24 15:04:01 2019 +0200 at lun oct 28 01:39:17 CET 2019