class mrpt::hwdrivers::COpenNI2_RGBD360
Overview
A class for grabbing RGBD images from several OpenNI2 sensors.
This is used to obtain larger fields of view using a radial configuration of the sensors. The same options (resolution, fps, etc.) are used for every sensor.
Configuration and usage:
Data is returned as observations of type mrpt::obs::CObservationRGBD360. See those classes for documentation on their fields.
As with any other CGenericSensor class, the normal sequence of methods to be called is:
CGenericSensor::loadConfig() - Or calls to the individual setXXX() to configure the sensor parameters.
COpenNI2_RGBD360::initialize() - to start the communication with the sensor.
Call COpenNI2_RGBD360::getNextObservation() to retrieve the data.
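The calling sequence above can be sketched as a minimal grab loop. This is only a sketch: the file name "sensor.ini" and the section name "OPENNI2_RGBD360" are illustrative placeholders, and the config-file class namespace varies across MRPT versions.

```cpp
#include <mrpt/config/CConfigFile.h>
#include <mrpt/hwdrivers/COpenNI2_RGBD360.h>
#include <mrpt/obs/CObservationRGBD360.h>

int main()
{
    mrpt::hwdrivers::COpenNI2_RGBD360 sensor;

    // 1) Configure: load parameters from an .ini-like file
    //    (file and section names are placeholders, not fixed by the class)
    mrpt::config::CConfigFile cfg("sensor.ini");
    sensor.loadConfig(cfg, "OPENNI2_RGBD360");

    // 2) Start the communication with the sensors
    sensor.initialize();

    // 3) Grab data until a hardware error occurs
    bool there_is_obs = false, hardware_error = false;
    mrpt::obs::CObservationRGBD360 obs;
    while (!hardware_error)
    {
        sensor.getNextObservation(obs, there_is_obs, hardware_error);
        if (there_is_obs)
        {
            // ... process the observation here ...
        }
    }
    return 0;
}
```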
Calibration parameters
The reference system for both the depth and RGB images provided by each individual OpenNI2 sensor is that of its RGB camera. The extrinsic parameters of each RGBD sensor are provided from a configuration file. This calibration was obtained using the method reported in [].
Coordinates convention
The origin of coordinates is the focal point of the RGB camera of the first indexed sensor, with the axes oriented as in the diagram shown in mrpt::obs::CObservation3DRangeScan. Notice in that picture that the RGB camera is assumed to have axes as usual in computer vision, which differ from those for the depth camera.
The X,Y,Z axes used to report the data from accelerometers coincide with those of the depth camera (e.g. the camera standing on a table would have an ACC_Z=-9.8m/s2).
Notice however that, for consistency with stereo cameras, when loading the calibration parameters from a configuration file the left-to-right pose increment is expected as if both RGB & IR cameras had their +Z axes pointing forward, +X to the right and +Y downwards (as is standard for stereo cameras in the computer vision literature). In other words: the pose stored in this class uses a different axes convention for the depth camera than a stereo camera would, so when a pose L2R is loaded from a calibration file it's actually converted like:
L2R(this class convention) = CPose3D(0,0,0,-90deg,0deg,-90deg) (+) L2R(in the config file)
Some general comments
Depth is grabbed in 10-bit resolution, and a raw range reading N is converted to meters as: range(m) = 0.1236 * tan(N/2842.5 + 1.1863)
This sensor can be also used from within rawlog-grabber to grab datasets within a robot with more sensors.
There is no built-in threading support, so if you use this class manually (not within rawlog-grabber), the ideal approach is to create a thread and continuously request data from it (see mrpt::system::createThread).
The intensity channel defaults to the RGB images, but it can be changed with setVideoChannel() to read the IR camera images (useful for calibration).
There is built-in support for an optional preview of the data in a window, so you don't even need to worry about creating a window to show it.
This class relies on an embedded version of libfreenect (you do NOT need to install it in your system). Thanks guys for the great job!
Converting to 3D point cloud
You can convert the 3D observation into a 3D point cloud with this piece of code:
mrpt::obs::CObservationRGBD360 obs3D;
mrpt::maps::CColouredPointsMap pntsMap;
pntsMap.colorScheme.scheme = CColouredPointsMap::cmFromIntensityImage;
pntsMap.loadFromRangeScan(obs3D);
Then the point cloud mrpt::maps::CColouredPointsMap can be converted into an OpenGL object for rendering with mrpt::maps::CMetricMap::getAs3DObject() or alternatively with:
mrpt::opengl::CPointCloudColoured::Ptr gl_points = mrpt::opengl::CPointCloudColoured::Create();
gl_points->loadFromPointsMap(&pntsMap);
Platform-specific comments
For more details, refer to libfreenect documentation:
Linux: You'll need root privileges to access the Kinect. Alternatively, install MRPT/scripts/51-kinect.rules in /etc/udev/rules.d/ to allow access to all users.
Windows: Since MRPT 0.9.4 you'll only need to install libusb-win32: download and extract the latest libusb-win32-bin-x.x.x.x.zip. To install the drivers, read this: http://openkinect.org/wiki/Getting_Started#Windows
MacOS: (write me!)
Format of parameters for loading from a .ini file
PARAMETERS IN THE ".INI"-LIKE CONFIGURATION STRINGS:
-------------------------------------------------------
[supplied_section_name]
sensorLabel     = RGBD360            // A text description
preview_window  = false              // Show a window with a preview of the grabbed data in real-time
device_number   = 0                  // Device index to open (0:first Kinect, 1:second Kinect,...)
grab_image      = true               // Grab the RGB image channel? (Default=true)
grab_depth      = true               // Grab the depth channel? (Default=true)
grab_3D_points  = true               // Grab the 3D point cloud? (Default=true) If disabled, points can be generated later on.
video_channel   = VIDEO_CHANNEL_RGB  // Optional. Can be: VIDEO_CHANNEL_RGB (default) or VIDEO_CHANNEL_IR
pose_x=0       // Camera position in the robot (meters)
pose_y=0
pose_z=0
pose_yaw=0     // Angles in degrees
pose_pitch=0
pose_roll=0

// Left/Depth camera
[supplied_section_name_LEFT]
rawlog-grabber-ignore = true // Instructs rawlog-grabber to ignore this section (it is not a separate device!)
resolution = [640 488]
cx = 314.649173
cy = 240.160459
fx = 572.882768
fy = 542.739980
dist = [-4.747169e-03 -4.357976e-03 0.000000e+00 0.000000e+00 0.000000e+00] // The order is: [K1 K2 T1 T2 K3]

// Right/RGB camera
[supplied_section_name_RIGHT]
rawlog-grabber-ignore = true // Instructs rawlog-grabber to ignore this section (it is not a separate device!)
resolution = [640 480]
cx = 322.515987
cy = 259.055966
fx = 521.179233
fy = 493.033034
dist = [5.858325e-02 3.856792e-02 0.000000e+00 0.000000e+00 0.000000e+00] // The order is: [K1 K2 T1 T2 K3]

// Relative pose of the right camera wrt to the left camera:
// This assumes that both camera frames are such that +Z points
// forwards, and +X and +Y to the right and downwards.
// For the actual coordinates employed in 3D observations, see figure in mrpt::obs::CObservation3DRangeScan
[supplied_section_name_LEFT2RIGHT_POSE]
rawlog-grabber-ignore = true // Instructs rawlog-grabber to ignore this section (it is not a separate device!)
pose_quaternion = [0.025575 -0.000609 -0.001462 0.999987 0.002038 0.004335 -0.001693]
#include <mrpt/hwdrivers/COpenNI2_RGBD360.h>

class COpenNI2_RGBD360:
    public mrpt::hwdrivers::CGenericSensor,
    public mrpt::hwdrivers::COpenNI2Generic
{
public:
    // construction
    COpenNI2_RGBD360();

    // methods
    double getMaxRange() const;
    void enableGrabRGB(bool enable = true);
    bool isGrabRGBEnabled() const;
    void enableGrabDepth(bool enable = true);
    bool isGrabDepthEnabled() const;
    void enableGrab3DPoints(bool enable = true);
    bool isGrab3DPointsEnabled() const;
    void open(unsigned sensor_id = 0);
    unsigned int openDevicesBySerialNum(const std::set<unsigned>& vSerialRequired);
    unsigned int openDeviceBySerial(const unsigned int SerialRequired);
    bool getDeviceIDFromSerialNum(const unsigned int SerialRequired, int& sensor_id) const;
    bool start();
    void kill();
    bool isOpen(const unsigned sensor_id) const;
    void close(unsigned sensor_id = 0);
    int getNumDevices() const;
    int getConnectedDevices();
    virtual void initialize();
    virtual void doProcess();
    void getNextObservation(mrpt::obs::CObservationRGBD360& out_obs, bool& there_is_obs, bool& hardware_error);
    void setPathForExternalImages(const std::string& directory);
    void getNextFrameRGB(mrpt::img::CImage& rgb_img, mrpt::system::TTimeStamp& timestamp, bool& there_is_obs, bool& hardware_error, unsigned sensor_id = 0);
    void getNextFrameD(mrpt::math::CMatrix_u16& depth_img_mm, mrpt::system::TTimeStamp& timestamp, bool& there_is_obs, bool& hardware_error, unsigned sensor_id = 0);
    void getNextFrameRGBD(mrpt::obs::CObservation3DRangeScan& out_obs, bool& there_is_obs, bool& hardware_error, unsigned sensor_id = 0);
    static int getNumInstances();
};
Inherited Members
public:
    // classes
    class CDevice;

    // methods
    CGenericSensor& operator = (const CGenericSensor&);
    virtual void doProcess() = 0;
Construction
COpenNI2_RGBD360()
Default ctor.
Methods
double getMaxRange() const
Get the maximum range (meters) that can be read in the observation field “rangeImage”.
void enableGrabRGB(bool enable = true)
Enable/disable the grabbing of the RGB channel.
void enableGrabDepth(bool enable = true)
Enable/disable the grabbing of the depth channel.
void enableGrab3DPoints(bool enable = true)
Enable/disable the grabbing of the 3D point clouds.
void open(unsigned sensor_id = 0)
Try to open the camera (all the parameters [resolution,fps,…] must be set before calling this) - users may also call initialize(), which in turn calls this method.
Raises an exception upon error.
Throws:
std::exception, with a textual description of the error.
unsigned int openDevicesBySerialNum(const std::set<unsigned>& vSerialRequired)
Open a set of RGBD devices specified by their serial number.
Raises an exception when the requested serial numbers are not among the connected devices. This function also fills a vector with the serial numbers of the connected OpenNI2 sensors (this requires opening any sensors that are still closed in order to read their serials).
unsigned int openDeviceBySerial(const unsigned int SerialRequired)
Open a RGBD device specified by its serial number.
This method is a wrapper for openDevicesBySerialNum(const std::set<unsigned>& vSerialRequired). It requires opening any sensors that are still closed in order to read their serials.
bool getDeviceIDFromSerialNum(const unsigned int SerialRequired, int& sensor_id) const
Get the ID of the device corresponding to ‘SerialRequired’.
bool start()
Open all sensor streams (normally called automatically at the constructor; no need to call it manually).
Returns:
false on error
void kill()
Kill the OpenNI2 driver.
bool isOpen(const unsigned sensor_id) const
Whether there is a working connection to the sensor.
void close(unsigned sensor_id = 0)
Close the connection to the sensor (no need to call it manually unless desired for some reason, since it's called at the destructor).
int getNumDevices() const
The number of available devices at initialization.
int getConnectedDevices()
Get a list of the connected OpenNI2 sensors.
This method may or may not be implemented in the derived class, depending on whether it is needed. It must throw an exception with a descriptive message if some critical error is found.
virtual void initialize()
Initializes the 3D camera - should be invoked after calling loadConfig() or setting the different parameters with the set*() methods.
This method may or may not be implemented in the derived class, depending on whether it is needed. It must throw an exception with a descriptive message if some critical error is found.
virtual void doProcess()
To be called at a high rate (>XX Hz), this method populates the internal buffer of received observations.
This method will be invoked at a minimum rate of “process_rate” (Hz)
This method is mainly intended for usage within rawlog-grabber or similar programs. For an alternative, see getNextObservation()
This method must throw an exception with a descriptive message if some critical error is found.
void getNextObservation(mrpt::obs::CObservationRGBD360& out_obs, bool& there_is_obs, bool& hardware_error)
The main data retrieving function, to be called after calling loadConfig() and initialize().
Parameters:
out_obs: The output retrieved observation (only if there_is_obs=true).
there_is_obs: If set to false, there was no new observation.
hardware_error: True on hardware/comms error.
void setPathForExternalImages(const std::string& directory)
Set the path where to save off-rawlog image files (this class DOES take into account this path).
An empty string (the default value at construction) means to save images embedded in the rawlog, instead of on separate files.
Throws:
std::exception, if the directory doesn't exist and cannot be created.
void getNextFrameRGB( mrpt::img::CImage& rgb_img, mrpt::system::TTimeStamp& timestamp, bool& there_is_obs, bool& hardware_error, unsigned sensor_id = 0 )
The main data retrieving function, to be called after calling loadConfig() and initialize().
Parameters:
rgb_img: The output retrieved RGB image (only if there_is_obs=true).
timestamp: The timestamp of the capture (only if there_is_obs=true).
there_is_obs: If set to false, there was no new observation.
hardware_error: True on hardware/comms error.
sensor_id: The index of the sensor accessed.
void getNextFrameD( mrpt::math::CMatrix_u16& depth_img_mm, mrpt::system::TTimeStamp& timestamp, bool& there_is_obs, bool& hardware_error, unsigned sensor_id = 0 )
The main data retrieving function, to be called after calling loadConfig() and initialize().
Parameters:
depth_img_mm: The output retrieved depth image, in millimeters (only if there_is_obs=true).
timestamp: The timestamp of the capture (only if there_is_obs=true).
there_is_obs: If set to false, there was no new observation.
hardware_error: True on hardware/comms error.
sensor_id: The index of the sensor accessed.
void getNextFrameRGBD( mrpt::obs::CObservation3DRangeScan& out_obs, bool& there_is_obs, bool& hardware_error, unsigned sensor_id = 0 )
The main data retrieving function, to be called after calling loadConfig() and initialize().
Parameters:
out_obs: The output retrieved observation (only if there_is_obs=true).
there_is_obs: If set to false, there was no new observation.
hardware_error: True on hardware/comms error.
sensor_id: The index of the sensor accessed.
See also:
doProcess
static int getNumInstances()
Get the number of OpenNI2 cameras currently open via COpenNI2Generic.