class mrpt::hwdrivers::COpenNI2Sensor

Overview

A class for grabbing “range images”, intensity images (either RGB or IR) and other information from an OpenNI2 sensor.

This class permits one to access several sensors simultaneously. The same options (resolution, fps, etc.) are used for every sensor.

Configuration and usage:

Data is returned as observations of type mrpt::obs::CObservation3DRangeScan. See those classes for documentation on their fields.

As with any other CGenericSensor class, the normal sequence of methods to be called is:
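That sequence can be sketched as follows. This is a hedged illustration of a typical CGenericSensor workflow using the methods declared further below; the configuration step is shown commented out because the config source and section name depend on your setup:

```cpp
#include <mrpt/hwdrivers/COpenNI2Sensor.h>
#include <mrpt/obs/CObservation3DRangeScan.h>

int main()
{
    mrpt::hwdrivers::COpenNI2Sensor sensor;

    // 1) Load parameters from a ".ini"-like config (see the format below),
    //    or set them programmatically with the set*() methods:
    // sensor.loadConfig(configSource, "supplied_section_name");

    // 2) Open and initialize the device:
    sensor.initialize();

    // 3) Poll for observations:
    bool there_is_obs = false, hardware_error = false;
    mrpt::obs::CObservation3DRangeScan obs;
    sensor.getNextObservation(obs, there_is_obs, hardware_error);
    return 0;
}
```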

Calibration parameters

In this class we employ the OpenNI2 mechanism to return depth images referred to the RGB camera. If instead you need an accurate transformation of depth images to 3D points, you will have to calibrate your RGBD sensor and supply the following three pieces of information (default calibration data will be used otherwise, but they will not be optimal for all sensors!):

See https://www.mrpt.org/Kinect_calibration for a procedure to calibrate RGBD sensors with an interactive GUI program.

Coordinates convention

The origin of coordinates is the focal point of the depth camera, with the axes oriented as in the diagram shown in mrpt::obs::CObservation3DRangeScan. Notice in that picture that the RGB camera is assumed to have axes as usual in computer vision, which differ from those for the depth camera.

The X,Y,Z axes used to report the data from accelerometers coincide with those of the depth camera (e.g. the camera standing on a table would have an ACC_Z=-9.8m/s2).

Notice however that, for consistency with stereo cameras, when loading the calibration parameters from a configuration file, the left-to-right pose increment is expected as if both RGB & IR cameras had their +Z axes pointing forward, +X to the right, +Y downwards (just like it’s the standard in stereo cameras and in computer vision literature). In other words: the pose stored in this class uses a different axes convention for the depth camera than in a stereo camera, so when a pose L2R is loaded from a calibration file it’s actually converted like:

L2R(this class convention) = CPose3D(0,0,0,-90deg,0deg,-90deg) (+) L2R(in the config file)
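That axes-change rotation can be checked numerically. The following standalone snippet (plain C++, no MRPT required) builds the rotation matrix for yaw=-90deg, pitch=0deg, roll=-90deg using the ZYX (yaw-pitch-roll) convention that CPose3D uses, and lets you verify that it maps the computer-vision optical axes (+Z forward, +X right, +Y down) onto the depth-camera axes (+X forward, +Y left, +Z up):

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Rotation matrix from yaw (Z), pitch (Y), roll (X), composed in ZYX order:
// R = Rz(yaw) * Ry(pitch) * Rx(roll)
Mat3 ypr(double yaw, double pitch, double roll)
{
    const double cy = std::cos(yaw), sy = std::sin(yaw);
    const double cp = std::cos(pitch), sp = std::sin(pitch);
    const double cr = std::cos(roll), sr = std::sin(roll);
    return {{{cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr},
             {sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr},
             {-sp, cp * sr, cp * cr}}};
}

// Apply the rotation to a 3D vector.
std::array<double, 3> apply(const Mat3& R, const std::array<double, 3>& v)
{
    std::array<double, 3> out{};
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) out[i] += R[i][j] * v[j];
    return out;
}
```

For this rotation, the optical +Z axis lands on +X, optical +X on -Y, and optical +Y on -Z, which is exactly the depth-camera convention described above.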

Some general comments

  • Depth is grabbed in millimeters

  • This sensor can also be used from within rawlog-grabber to grab datasets from a robot equipped with additional sensors.

  • There is no built-in threading support, so if you use this class manually (not within rawlog-grabber), ideally you should create a thread and continuously request data from it (see mrpt::system::createThread).

  • The intensity channel defaults to the RGB images, but it can be changed with setVideoChannel() to read the IR camera images (useful for calibration).

  • There is built-in support for an optional preview of the data in a window, so you don't even need to worry about creating a window to show it.

  • This class relies on an embedded version of libfreenect (you do NOT need to install it in your system). Thanks guys for the great job!

Converting to 3D point cloud

You can convert the 3D observation into a 3D point cloud with this piece of code:

mrpt::obs::CObservation3DRangeScan obs3D;
mrpt::maps::CColouredPointsMap     pntsMap;
pntsMap.colorScheme.scheme = mrpt::maps::CColouredPointsMap::cmFromIntensityImage;
pntsMap.loadFromRangeScan(obs3D);

Then the point cloud mrpt::maps::CColouredPointsMap can be converted into an OpenGL object for rendering with mrpt::maps::CMetricMap::getAs3DObject() or alternatively with:

     mrpt::opengl::CPointCloudColoured::Ptr gl_points =
         mrpt::opengl::CPointCloudColoured::Create();
     gl_points->loadFromPointsMap(&pntsMap);
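To actually display the result, the point cloud object can be inserted into a 3D scene. A minimal sketch using MRPT's CDisplayWindow3D (assuming the `gl_points` object from the snippet above has already been filled):

```cpp
#include <mrpt/gui/CDisplayWindow3D.h>
#include <mrpt/opengl/CPointCloudColoured.h>

// Illustrative only: shows an already-populated point cloud in a window.
void showPointCloud(const mrpt::opengl::CPointCloudColoured::Ptr& gl_points)
{
    mrpt::gui::CDisplayWindow3D win("Point cloud preview", 640, 480);

    // The scene must be locked while being modified:
    auto& scene = win.get3DSceneAndLock();
    scene->insert(gl_points);
    win.unlockAccess3DScene();

    win.forceRepaint();
    win.waitForKey();  // keep the window open until a key is pressed
}
```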

Platform-specific comments

For more details, refer to libfreenect documentation:

  • Linux: You’ll need root privileges to access Kinect. Or, install MRPT/scripts/51-kinect.rules in /etc/udev/rules.d/ to allow access to all users.

  • Windows:

  • MacOS: (write me!)

Format of parameters for loading from a .ini file

   PARAMETERS IN THE ".INI"-LIKE CONFIGURATION STRINGS:
  -------------------------------------------------------
    [supplied_section_name]
     sensorLabel     = OPENNI2       // A text description
     preview_window  = false       // Show a window with a preview of the grabbed data in real-time

     device_number   = 0           // Device index to open (0:first Kinect, 1:second Kinect,...)

     grab_image      = true        // Grab the RGB image channel? (Default=true)
     grab_depth      = true        // Grab the depth channel? (Default=true)
     grab_3D_points  = true        // Grab the 3D point cloud? (Default=true) If disabled, points can be generated later on.

     video_channel   = VIDEO_CHANNEL_RGB // Optional. Can be: VIDEO_CHANNEL_RGB (default) or VIDEO_CHANNEL_IR

     pose_x=0   // Camera position in the robot (meters)
     pose_y=0
     pose_z=0
     pose_yaw=0 // Angles in degrees
     pose_pitch=0
     pose_roll=0


     // Kinect sensor calibration:
     // See https://www.mrpt.org/Kinect_and_MRPT

     // Left/Depth camera
     [supplied_section_name_LEFT]
     rawlog-grabber-ignore = true // Instructs rawlog-grabber to ignore this section (it is not a separate device!)

     resolution = [640 488]
     cx         = 314.649173
     cy         = 240.160459
     fx         = 572.882768
     fy         = 542.739980
     dist       = [-4.747169e-03 -4.357976e-03 0.000000e+00 0.000000e+00 0.000000e+00]    // The order is: [K1 K2 T1 T2 K3]

     // Right/RGB camera
     [supplied_section_name_RIGHT]
     rawlog-grabber-ignore = true // Instructs rawlog-grabber to ignore this section (it is not a separate device!)

     resolution = [640 480]
     cx         = 322.515987
     cy         = 259.055966
     fx         = 521.179233
     fy         = 493.033034
     dist       = [5.858325e-02 3.856792e-02 0.000000e+00 0.000000e+00 0.000000e+00]    // The order is: [K1 K2 T1 T2 K3]

     // Relative pose of the right camera wrt to the left camera:
     // This assumes that both camera frames are such that +Z points
     // forwards, and +X and +Y to the right and downwards.
     // For the actual coordinates employed in 3D observations, see the figure in mrpt::obs::CObservation3DRangeScan
     [supplied_section_name_LEFT2RIGHT_POSE]
     rawlog-grabber-ignore = true // Instructs rawlog-grabber to ignore this section (it is not a separate device!)

     pose_quaternion      = [0.025575 -0.000609 -0.001462 0.999987 0.002038 0.004335 -0.001693]


#include <mrpt/hwdrivers/COpenNI2Sensor.h>

class COpenNI2Sensor:
    public mrpt::hwdrivers::CGenericSensor,
    public mrpt::hwdrivers::COpenNI2Generic
{
public:
    // construction

    COpenNI2Sensor();

    // methods

    double getMaxRange() const;
    size_t rows() const;
    size_t cols() const;
    const mrpt::img::TCamera& getCameraParamsIntensity() const;
    void setCameraParamsIntensity(const mrpt::img::TCamera& p);
    const mrpt::img::TCamera& getCameraParamsDepth() const;
    void setCameraParamsDepth(const mrpt::img::TCamera& p);
    void setRelativePoseIntensityWrtDepth(const mrpt::poses::CPose3D& p);
    const mrpt::poses::CPose3D& getRelativePoseIntensityWrtDepth() const;
    void enableGrabRGB(bool enable = true);
    bool isGrabRGBEnabled() const;
    void enableGrabDepth(bool enable = true);
    bool isGrabDepthEnabled() const;
    void enableGrab3DPoints(bool enable = true);
    bool isGrab3DPointsEnabled() const;
    void setSerialToOpen(const unsigned serial);
    void setSensorIDToOpen(const unsigned sensor_id);
    virtual void initialize();
    virtual void doProcess();
    void getNextObservation(mrpt::obs::CObservation3DRangeScan& out_obs, bool& there_is_obs, bool& hardware_error);
    void setPathForExternalImages(const std::string& directory);
    void setVerbose(bool verbose);
    bool isVerbose() const;

    bool getColorSensorParam(
        mrpt::img::TCamera& param,
        unsigned sensor_id = 0
        ) const;

    bool getDepthSensorParam(
        mrpt::img::TCamera& param,
        unsigned sensor_id = 0
        ) const;
};

Inherited Members

public:
    // classes

    class CDevice;

    // methods

    CGenericSensor& operator = (const CGenericSensor&);
    virtual void doProcess() = 0;

Construction

COpenNI2Sensor()

Default ctor.

Methods

double getMaxRange() const

Get the maximum range (meters) that can be read in the observation field “rangeImage”.

size_t rows() const

Get the row count in the camera images, loaded automatically upon camera open().

size_t cols() const

Get the col count in the camera images, loaded automatically upon camera open().

const mrpt::img::TCamera& getCameraParamsIntensity() const

Get a const reference to the intensity (RGB) camera calibration parameters.

const mrpt::img::TCamera& getCameraParamsDepth() const

Get a const reference to the depth camera calibration parameters.

void setRelativePoseIntensityWrtDepth(const mrpt::poses::CPose3D& p)

Set the pose of the intensity camera wrt the depth camera.

See also:

mrpt::obs::CObservation3DRangeScan for a 3D diagram of this pose

void enableGrabRGB(bool enable = true)

Enable/disable the grabbing of the RGB channel.

void enableGrabDepth(bool enable = true)

Enable/disable the grabbing of the depth channel.

void enableGrab3DPoints(bool enable = true)

Enable/disable the grabbing of the 3D point clouds.

void setSerialToOpen(const unsigned serial)

Set the serial number of the device to open.

Exceptions:

This method throws an exception when the given serial number is not found among the connected devices.

void setSensorIDToOpen(const unsigned sensor_id)

Set the sensor_id of the device to open.

Exceptions:

This method throws an exception when the given sensor_id is not found among the connected devices.

virtual void initialize()

Initializes the 3D camera - should be invoked after calling loadConfig() or setting the different parameters with the set*() methods.

This method may or may not be implemented in the derived class, depending on whether it is needed.

Exceptions:

This method throws an exception with a descriptive message if some critical error is found.

virtual void doProcess()

To be called at a high rate (>XX Hz), this method populates the internal buffer of received observations.

This method will be invoked at a minimum rate of “process_rate” (Hz)

This method is mainly intended for usage within rawlog-grabber or similar programs. For an alternative, see getNextObservation()

Exceptions:

This method throws an exception with a descriptive message if some critical error is found.

See also:

getNextObservation

void getNextObservation(mrpt::obs::CObservation3DRangeScan& out_obs, bool& there_is_obs, bool& hardware_error)

The main data retrieving function, to be called after calling loadConfig() and initialize().

Parameters:

out_obs

The output retrieved observation (only if there_is_obs=true).

there_is_obs

If set to false, there was no new observation.

hardware_error

True on hardware/comms error.

See also:

doProcess
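A minimal polling loop built on this method might look like the following sketch (assuming the sensor has already been configured and initialize() succeeded; the error handling and frame processing are illustrative, not a definitive implementation):

```cpp
#include <mrpt/hwdrivers/COpenNI2Sensor.h>
#include <mrpt/obs/CObservation3DRangeScan.h>

void grabLoop(mrpt::hwdrivers::COpenNI2Sensor& sensor)
{
    for (;;)
    {
        bool there_is_obs = false, hardware_error = false;
        mrpt::obs::CObservation3DRangeScan obs;
        sensor.getNextObservation(obs, there_is_obs, hardware_error);

        if (hardware_error) break;    // hardware/comms error: stop grabbing
        if (!there_is_obs) continue;  // no new frame yet: poll again

        // ... process obs here (e.g. obs.rangeImage, obs.intensityImage) ...
    }
}
```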

void setPathForExternalImages(const std::string& directory)

Set the path where to save off-rawlog image files (this class DOES take into account this path).

An empty string (the default value at construction) means to save images embedded in the rawlog, instead of on separate files.

Exceptions:

std::exception

If the directory doesn't exist and cannot be created.