class mrpt::hwdrivers::CKinect

A class for grabbing “range images”, intensity images (either RGB or IR) and other information from an Xbox Kinect sensor.

To use Kinect for Windows or ASUS/Primesense RGBD cameras, use the class COpenNI2.

Configuration and usage:

Data is returned as observations of type mrpt::obs::CObservation3DRangeScan (and mrpt::obs::CObservationIMU for accelerometer data). See those classes for documentation on their fields.

As with any other CGenericSensor class, the normal sequence of methods to be called is:

  • loadConfig() (or the individual set*() methods) to configure the sensor parameters,

  • initialize() to start the communication with the sensor, and

  • getNextObservation() repeatedly to retrieve the observations.
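
A minimal grab-loop sketch following that sequence (the helper function, the endless loop and the lack of error handling are illustrative only, not part of the API):

#include <mrpt/hwdrivers/CKinect.h>
#include <mrpt/obs/CObservation3DRangeScan.h>
#include <mrpt/obs/CObservationIMU.h>

void kinect_grab_loop()  // illustrative helper
{
    mrpt::hwdrivers::CKinect kinect;
    // Configure via the individual set*()/enable*() methods (or via loadConfig(), see further below):
    kinect.enableGrabRGB(true);
    kinect.enableGrabDepth(true);
    kinect.initialize();  // opens the sensor; throws std::exception on error

    for (;;)
    {
        mrpt::obs::CObservation3DRangeScan obs;
        mrpt::obs::CObservationIMU obs_imu;
        bool there_is_obs = false, hardware_error = false;
        kinect.getNextObservation(obs, obs_imu, there_is_obs, hardware_error);
        if (hardware_error) break;    // hardware/comms error
        if (!there_is_obs) continue;  // no new frame yet
        // ... process obs (and obs_imu) here ...
    }
}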

Calibration parameters

For an accurate transformation of depth images to 3D points, you’ll have to calibrate your Kinect and supply the following three pieces of information (default calibration data will be used otherwise, but they will not be optimal for all sensors!):

  • the calibration parameters of the depth (IR) camera,

  • the calibration parameters of the intensity (RGB) camera, and

  • the relative pose of the intensity camera with respect to the depth camera.

See https://www.mrpt.org/Kinect_calibration for a procedure to calibrate Kinect sensors with an interactive GUI program.
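
Programmatically, these three items map onto the set*() methods of this class; a minimal sketch (the helper function and all numeric values are placeholders, not a real calibration):

#include <mrpt/hwdrivers/CKinect.h>
#include <mrpt/img/TCamera.h>
#include <mrpt/poses/CPose3D.h>

void set_kinect_calibration(mrpt::hwdrivers::CKinect& kinect)  // illustrative helper
{
    mrpt::img::TCamera depth_cam, rgb_cam;
    depth_cam.setIntrinsicParamsFromValues(572.9, 542.7, 314.6, 240.2);  // fx, fy, cx, cy (placeholders)
    rgb_cam.setIntrinsicParamsFromValues(521.2, 493.0, 322.5, 259.1);    // fx, fy, cx, cy (placeholders)

    kinect.setCameraParamsDepth(depth_cam);    // 1) depth (IR) camera intrinsics
    kinect.setCameraParamsIntensity(rgb_cam);  // 2) intensity (RGB) camera intrinsics
    kinect.setRelativePoseIntensityWrtDepth(   // 3) pose of the RGB camera wrt the depth camera
        mrpt::poses::CPose3D(0.0255, 0, 0, 0, 0, 0));  // placeholder: ~2.5 cm baseline, no rotation
}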

Coordinates convention

The origin of coordinates is the focal point of the depth camera, with the axes oriented as in the diagram shown in mrpt::obs::CObservation3DRangeScan. Notice in that picture that the RGB camera is assumed to have axes as usual in computer vision, which differ from those for the depth camera.

The X,Y,Z axes used to report the data from accelerometers coincide with those of the depth camera (e.g. the camera standing on a table would have ACC_Z = -9.8 m/s^2).

Notice, however, that for consistency with stereo cameras, when loading the calibration parameters from a configuration file the left-to-right pose increment is expected as if both RGB & IR cameras had their +Z axes pointing forward, +X to the right and +Y downwards (just as is standard for stereo cameras in the computer vision literature). In other words: the pose stored in this class uses a different axes convention for the depth camera than a stereo camera would, so when a pose L2R is loaded from a calibration file it is actually converted as:

L2R(this class convention) = CPose3D(0,0,0,-90deg,0deg,-90deg) (+) L2R(in the config file)
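
In code, that composition can be written with mrpt::poses::CPose3D (a sketch; the helper function is hypothetical, and the constructor angles are yaw, pitch, roll in radians):

#include <mrpt/poses/CPose3D.h>
#include <cmath>

// L2R_file: the left-to-right pose as read from the calibration file (stereo-camera axes convention)
mrpt::poses::CPose3D to_ckinect_convention(const mrpt::poses::CPose3D& L2R_file)  // illustrative helper
{
    // Change of axes convention (yaw = -90 deg, pitch = 0, roll = -90 deg), composed with the loaded pose:
    const mrpt::poses::CPose3D axes_change(0, 0, 0, -0.5 * M_PI, 0.0, -0.5 * M_PI);
    return axes_change + L2R_file;  // convention stored internally by this class
}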

Some general comments

  • Depth is grabbed with 10-bit resolution, and a raw depth value N is converted to meters as: range(m) = 0.1236 * tan(N/2842.5 + 1.1863)

  • This sensor can also be used from within rawlog-grabber to grab datasets from a robot with several sensors.

  • There is no built-in threading support, so if you use this class manually (not within rawlog-grabber), the ideal approach is to create a dedicated thread and continuously request data from it (see mrpt::system::createThread).

  • The intensity channel defaults to the RGB image, but it can be changed with setVideoChannel() to read the IR camera images (useful for calibration; see the sketch after this list).

  • There is built-in support for an optional preview of the data in a window, so you do not even need to worry about creating a window to show them.

  • This class relies on an embedded version of libfreenect (you do NOT need to install it in your system). Thanks guys for the great job!
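
A short sketch combining the video-channel switch and the built-in preview (the helper function and the decimation factor of 5 are arbitrary):

#include <mrpt/hwdrivers/CKinect.h>

void preview_ir(mrpt::hwdrivers::CKinect& kinect)  // illustrative helper
{
    // Read the IR camera instead of the RGB one (can also be switched while streaming):
    kinect.setVideoChannel(mrpt::hwdrivers::CKinect::VIDEO_CHANNEL_IR);
    // Let the class open its own preview window, showing 1 out of every 5 grabbed frames:
    kinect.enablePreviewRGB(true);
    kinect.setPreviewDecimation(5);
}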

Converting to 3D point cloud

You can convert the 3D observation into a 3D point cloud with this piece of code:

mrpt::obs::CObservation3DRangeScan obs3D;
mrpt::maps::CColouredPointsMap     pntsMap;
pntsMap.colorScheme.scheme = mrpt::maps::CColouredPointsMap::cmFromIntensityImage;
pntsMap.loadFromRangeScan(obs3D);

Then the point cloud mrpt::maps::CColouredPointsMap can be converted into an OpenGL object for rendering with mrpt::maps::CMetricMap::getAs3DObject() or alternatively with:

mrpt::opengl::CPointCloudColoured::Ptr gl_points =
    mrpt::opengl::CPointCloudColoured::Create();
gl_points->loadFromPointsMap(&pntsMap);
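
For instance, the resulting object can be shown in a 3D viewer (a sketch; the helper function, window caption and size are arbitrary):

#include <mrpt/gui/CDisplayWindow3D.h>
#include <mrpt/opengl/CPointCloudColoured.h>

void show_cloud(const mrpt::opengl::CPointCloudColoured::Ptr& gl_points)  // illustrative helper
{
    mrpt::gui::CDisplayWindow3D win("Kinect point cloud", 640, 480);
    {
        auto& scene = win.get3DSceneAndLock();
        scene->insert(gl_points);  // gl_points from the snippet above
        win.unlockAccess3DScene();
    }
    win.forceRepaint();
    win.waitForKey();  // keep the window open until a key is pressed
}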

Raw depth to range conversion

At construction, this class builds an internal array for converting raw 10- or 11-bit depths into ranges in meters. Users can read or modify that array (e.g. if a better calibration is available) by calling CKinect::getRawDepth2RangeConversion(). If you replace it, remember to set the first and last entries (index 0 and KINECT_RANGES_TABLE_LEN-1) to zero, to indicate that those are invalid ranges.
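
As an illustration, the table could be rebuilt from the raw-to-meters formula quoted earlier (a sketch; the helper function and the 10 m bound used to mark invalid entries are assumptions):

#include <mrpt/hwdrivers/CKinect.h>
#include <cmath>
#include <cstdint>

void rebuild_depth_table(mrpt::hwdrivers::CKinect& kinect)  // illustrative helper
{
    mrpt::hwdrivers::CKinect::TDepth2RangeArray& d2r = kinect.getRawDepth2RangeConversion();
    for (size_t i = 0; i < KINECT_RANGES_TABLE_LEN; i++)
    {
        const double r_m = 0.1236 * std::tan(i / 2842.5 + 1.1863);  // raw value -> meters
        // The table stores millimeters; out-of-range values are marked invalid (0):
        d2r[i] = (r_m > 0.0 && r_m < 10.0) ? static_cast<uint16_t>(1000.0 * r_m) : 0;
    }
    d2r[0] = d2r[KINECT_RANGES_TABLE_LEN - 1] = 0;  // first and last entries must be invalid
}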

Platform-specific comments

For more details, refer to the libfreenect documentation:

  • Linux: You’ll need root privileges to access the Kinect, or install MRPT/scripts/51-kinect.rules into /etc/udev/rules.d/ to allow access to all users.

  • Windows:

  • MacOS: (write me!)

Format of parameters for loading from a .ini file

   PARAMETERS IN THE ".INI"-LIKE CONFIGURATION STRINGS:
  -------------------------------------------------------
    [supplied_section_name]
     sensorLabel     = KINECT       // A text description
     preview_window  = false        // Show a window with a preview of the grabbed data in real-time

     device_number   = 0            // Device index to open (0:first Kinect, 1:second Kinect,...)

     grab_image      = true         // Grab the RGB image channel? (Default=true)
     grab_depth      = true         // Grab the depth channel? (Default=true)
     grab_3D_points  = true         // Grab the 3D point cloud? (Default=true) If disabled, points can be generated later on.
     grab_IMU        = true         // Grab the accelerometers? (Default=true)

     video_channel   = VIDEO_CHANNEL_RGB // Optional. Can be: VIDEO_CHANNEL_RGB (default) or VIDEO_CHANNEL_IR

     pose_x=0   // Camera position in the robot (meters)
     pose_y=0
     pose_z=0
     pose_yaw=0 // Angles in degrees
     pose_pitch=0
     pose_roll=0

     // Optional: Set the initial tilt angle of Kinect: upon initialization, the motor is sent a command to
     //           rotate to this angle (in degrees). Note: You must be aware of the tilt when interpreting the sensor readings.
     //           If not present or set to "360", no motor command will be sent at start up.
     initial_tilt_angle = 0

     // Kinect sensor calibration:
     // See https://www.mrpt.org/Kinect_and_MRPT

     // Left/Depth camera
     [supplied_section_name_LEFT]
     rawlog-grabber-ignore = true // Instructs rawlog-grabber to ignore this section (it is not a separate device!)

     resolution = [640 488]
     cx         = 314.649173
     cy         = 240.160459
     fx         = 572.882768
     fy         = 542.739980
     dist       = [-4.747169e-03 -4.357976e-03 0.000000e+00 0.000000e+00 0.000000e+00]    // The order is: [K1 K2 T1 T2 K3]

     // Right/RGB camera
     [supplied_section_name_RIGHT]
     rawlog-grabber-ignore = true // Instructs rawlog-grabber to ignore this section (it is not a separate device!)

     resolution = [640 480]
     cx         = 322.515987
     cy         = 259.055966
     fx         = 521.179233
     fy         = 493.033034
     dist       = [5.858325e-02 3.856792e-02 0.000000e+00 0.000000e+00 0.000000e+00]    // The order is: [K1 K2 T1 T2 K3]

     // Relative pose of the right camera wrt the left camera:
     // This assumes that both camera frames are such that +Z points
     // forwards, and +X and +Y to the right and downwards.
     // For the actual coordinates employed in 3D observations, see the figure in mrpt::obs::CObservation3DRangeScan
     [supplied_section_name_LEFT2RIGHT_POSE]
     rawlog-grabber-ignore = true // Instructs rawlog-grabber to ignore this section (it is not a separate device!)

     pose_quaternion      = [0.025575 -0.000609 -0.001462 0.999987 0.002038 0.004335 -0.001693]
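
The whole block above can be parsed with the inherited CGenericSensor::loadConfig(); a minimal sketch, assuming it is saved to a file named kinect.ini (hypothetical name):

#include <mrpt/config/CConfigFile.h>
#include <mrpt/hwdrivers/CKinect.h>

void load_from_ini()  // illustrative helper
{
    mrpt::hwdrivers::CKinect kinect;
    mrpt::config::CConfigFile cfg("kinect.ini");      // hypothetical file holding the block above
    kinect.loadConfig(cfg, "supplied_section_name");  // reads the [supplied_section_name*] sections
    kinect.initialize();                              // opens the sensor with those parameters
}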

#include <mrpt/hwdrivers/CKinect.h>

class CKinect: public mrpt::hwdrivers::CGenericSensor
{
public:
    // typedefs

    typedef uint16_t TDepth2RangeArray[KINECT_RANGES_TABLE_LEN];

    // enums

    enum TVideoChannel;

    // construction

    CKinect();

    // methods

    void open();
    bool isOpen() const;
    void close();
    void setVideoChannel(const TVideoChannel vch);
    TVideoChannel getVideoChannel() const;
    void setDeviceIndexToOpen(int index);
    int getDeviceIndexToOpen() const;
    void setTiltAngleDegrees(double angle);
    double getTiltAngleDegrees();
    void enablePreviewRGB(bool enable = true);
    void disablePreviewRGB();
    bool isPreviewRGBEnabled() const;
    void setPreviewDecimation(size_t decimation_factor);
    size_t getPreviewDecimation() const;
    double getMaxRange() const;
    size_t rows() const;
    size_t cols() const;
    const mrpt::img::TCamera& getCameraParamsIntensity() const;
    void setCameraParamsIntensity(const mrpt::img::TCamera& p);
    const mrpt::img::TCamera& getCameraParamsDepth() const;
    void setCameraParamsDepth(const mrpt::img::TCamera& p);
    void setRelativePoseIntensityWrtDepth(const mrpt::poses::CPose3D& p);
    const mrpt::poses::CPose3D& getRelativePoseIntensityWrtDepth() const;
    TDepth2RangeArray& getRawDepth2RangeConversion();
    const TDepth2RangeArray& getRawDepth2RangeConversion() const;
    void enableGrabRGB(bool enable = true);
    bool isGrabRGBEnabled() const;
    void enableGrabDepth(bool enable = true);
    bool isGrabDepthEnabled() const;
    void enableGrabAccelerometers(bool enable = true);
    bool isGrabAccelerometersEnabled() const;
    void enableGrab3DPoints(bool enable = true);
    bool isGrab3DPointsEnabled() const;
    virtual void initialize();
    virtual void doProcess();
    void getNextObservation(mrpt::obs::CObservation3DRangeScan& out_obs, bool& there_is_obs, bool& hardware_error);
    void getNextObservation(mrpt::obs::CObservation3DRangeScan& out_obs, mrpt::obs::CObservationIMU& out_obs_imu, bool& there_is_obs, bool& hardware_error);
    void setPathForExternalImages(const std::string& directory);
};

Inherited Members

public:
    // methods

    CGenericSensor& operator = (const CGenericSensor&);
    virtual void doProcess() = 0;

Typedefs

typedef uint16_t TDepth2RangeArray[KINECT_RANGES_TABLE_LEN]

A type for an array that converts raw depth to ranges in millimeters.

Construction

CKinect()

Default ctor.

Methods

void open()

Try to open the camera (set all the parameters before calling this) - users may also call initialize(), which in turn calls this method.

Raises an exception upon error.

Exceptions:

std::exception

A textual description of the error.

bool isOpen() const

Whether there is a working connection to the sensor.

void close()

Close the connection to the sensor (there is no need to call it manually unless desired for some reason, since it is called automatically by the destructor).

void setVideoChannel(const TVideoChannel vch)

Changes the video channel to open (RGB or IR) - you can call this method before starting to grab or in the middle of streaming, and the video source will change on the fly.

Default is RGB channel.

TVideoChannel getVideoChannel() const

Return the current video channel (RGB or IR)

See also:

setVideoChannel

void setDeviceIndexToOpen(int index)

Set the sensor index to open (if there are several sensors attached to the computer); default=0 -> the first one.

void setTiltAngleDegrees(double angle)

Change tilt angle.

Sensor must be open first.

void enablePreviewRGB(bool enable = true)

Default: disabled.

void setPreviewDecimation(size_t decimation_factor)

If preview is enabled, show only one image out of N (default: 1=show all)

double getMaxRange() const

Get the maximum range (meters) that can be read in the observation field “rangeImage”.

size_t rows() const

Get the row count in the camera images, loaded automatically upon camera open().

size_t cols() const

Get the col count in the camera images, loaded automatically upon camera open().

const mrpt::img::TCamera& getCameraParamsIntensity() const

Get a const reference to the intensity (RGB) camera calibration parameters.

const mrpt::img::TCamera& getCameraParamsDepth() const

Get a const reference to the depth camera calibration parameters.

void setRelativePoseIntensityWrtDepth(const mrpt::poses::CPose3D& p)

Set the pose of the intensity camera wrt the depth camera.

See also:

mrpt::obs::CObservation3DRangeScan for a 3D diagram of this pose

TDepth2RangeArray& getRawDepth2RangeConversion()

Get a reference to the array that converts raw depth values (10 or 11 bit) into ranges in mm, so it can be read or replaced by the user.

If you replace it, remember to set the first and last entries (index 0 and KINECT_RANGES_TABLE_LEN-1) to zero, to indicate that those are invalid ranges.

void enableGrabRGB(bool enable = true)

Enable/disable the grabbing of the RGB channel.

void enableGrabDepth(bool enable = true)

Enable/disable the grabbing of the depth channel.

void enableGrabAccelerometers(bool enable = true)

Enable/disable the grabbing of the inertial data.

void enableGrab3DPoints(bool enable = true)

Enable/disable the grabbing of the 3D point clouds.

virtual void initialize()

Initializes the 3D camera - should be invoked after calling loadConfig() or setting the different parameters with the set*() methods.

This method may or may not be implemented in the derived class, depending on whether it is needed.

This method must throw an exception with a descriptive message if some critical error is found.

virtual void doProcess()

To be called at a high rate (>XX Hz), this method populates the internal buffer of received observations.

This method will be invoked at a minimum rate of “process_rate” (Hz)

This method is mainly intended for usage within rawlog-grabber or similar programs. For an alternative, see getNextObservation()

This method must throw an exception with a descriptive message if some critical error is found.

See also:

getNextObservation

void getNextObservation(mrpt::obs::CObservation3DRangeScan& out_obs, bool& there_is_obs, bool& hardware_error)

The main data retrieving function, to be called after calling loadConfig() and initialize().

Parameters:

out_obs

The output retrieved observation (only if there_is_obs=true).

there_is_obs

If set to false, there was no new observation.

hardware_error

True on hardware/comms error.

See also:

doProcess

void getNextObservation(
    mrpt::obs::CObservation3DRangeScan& out_obs,
    mrpt::obs::CObservationIMU& out_obs_imu,
    bool& there_is_obs,
    bool& hardware_error
    )

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

This method also grabs data from the accelerometers, returning them in out_obs_imu

void setPathForExternalImages(const std::string& directory)

Set the path where to save off-rawlog image files (this class DOES take into account this path).

An empty string (the default value at construction) means to save images embedded in the rawlog, instead of on separate files.

Exceptions:

std::exception

If the directory doesn’t exist and cannot be created.