Configuration

A setup with multiple Kinects/Asus Xtions

Overview

Here we will describe how to create a setup with multiple Kinects/Asus Xtion Pro Lives on several computers. For simplicity, we will just call them Kinects from now on for either type of camera. We will explicitly state if a step is specific to one of the cameras.

We will distinguish between the “control computer”, which is not part of the fixed setup but is used to control the system, and the “system computers”, which are part of the fixed setup and run an essential part of ROS and/or have a Kinect connected.

It is important to note that only 1 Kinect can be connected per USB controller. So if you want to connect multiple Kinects to 1 computer, that computer needs multiple USB controllers. In this setup, however, we work with only 1 Kinect per computer.
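
If you are unsure how many USB controllers a machine has, you can list the USB topology; each root hub in the output corresponds to one USB controller:

lsusb -t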

In this overview, we are working on Ubuntu 12.10.

Cluster SSH

Installation

Because we are planning to use multiple computers in our setup, it will be useful to be able to send a command to all the computers at the same time. This can be done with cluster ssh. With it we start 1 ssh session on each computer, controlled by 1 terminal that receives our commands. That terminal sends each command to all the computers, so they all execute exactly the same commands.
For this, we install cluster ssh on the control computer.

sudo apt-get install clusterssh

Next we setup our bash to easily start the connection to all system computers.
Create ~/.bash_functions and put the following code in it (via "nano -w ~/.bash_functions" or something similar).
This defines a new command "sshros" that will connect to all system computers at once.

#!/bin/bash

function sshros {
  cssh <ip address 1> <ip address 2> <...>
}

Note that cssh's -l option expects a login name, not an address; add "-l <username>" before the addresses only if your username on the system computers differs from your local one. E.g.:

#!/bin/bash

function sshros {
 cssh 192.168.10.101 192.168.10.102 192.168.10.103 192.168.10.104
}

Now we need to load .bash_functions from .bashrc, so we add the following code to .bashrc:

if [[ -f ~/.bash_functions ]]; then
  . ~/.bash_functions
fi

After this, you should restart your terminal (or execute “source ~/.bashrc”)

Public key

As you are probably going to log in a lot, skipping the login prompt can save you some time. To do that, we create a public/private key pair for authentication. The idea is that the control computer holds a private key with which it can sign its connection attempts. The system computers use the matching public key to verify that the signature is indeed the control computer's.
The next steps are based on http://www.linuxproblem.org/art_9.html

First we create the key pair. In a console on the control computer (A), execute the following (don't use a passphrase):

ssh-keygen -t rsa

Next we will start a cluster of consoles to copy the public key of the control computer to all system computers.

sshros # you will be asked for your password on the system computers

In the cluster console (the one to control all consoles):

mkdir -p .ssh
ssh <controluser>@<controlcomputer> 'cat .ssh/id_rsa.pub' >> .ssh/authorized_keys

From now on, there should be no password if you log in from computer A.
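
Alternatively, if the ssh-copy-id helper is available on the control computer, the same key can be copied to a single host in one command (the cluster console approach above does this for all system computers at once):

ssh-copy-id <controluser>@<systemcomputer>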

Chrony

ROS will need synchronised clocks for all computers in the system. Therefore it is recommended to install Chrony.
The installation procedure we will briefly describe next is detailed in https://wiki.archlinux.org/index.php/Chrony

First install chrony

sudo apt-get install chrony

Next, put your NTP servers in your chrony configuration file (/etc/chrony/chrony.conf on Ubuntu; plain /etc/chrony.conf on some other distributions):

server 0.pool.ntp.org
server 1.pool.ntp.org
server 2.pool.ntp.org
server 3.pool.ntp.org

Make sure chrony is started at boot.
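
To check that the clocks are actually being synchronised, you can query chrony on each system computer (for instance from the sshros cluster console):

chronyc sources   # lists the NTP servers in use and their reachability
chronyc tracking  # shows the current offset of the system clock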

Install ROS

This will be done on your system computers. (You probably also want to do it on your control computer)
To do it on your system computers, start the cluster console with “sshros”, and execute the next commands in it.

According to: http://www.ros.org/wiki/groovy/Installation/Ubuntu

For Ubuntu 12.10

First the ROS repository is added, and then ROS is installed.

sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu quantal main" > /etc/apt/sources.list.d/ros-latest.list'
wget http://packages.ros.org/ros.key -O - | sudo apt-key add -
sudo apt-get update
sudo apt-get install ros-groovy-desktop-full

If the install has some problem relating to yaml, you should try the next 4 lines (as proposed in http://answers.ros.org/question/51806/install-of-groovy-on-ubuntu-1210-problem):

sudo apt-get remove '^ros*'
sudo apt-get remove yaml-cpp
sudo apt-get autoremove
sudo apt-get install ros-groovy-desktop-full

If you didn't have that problem, skip the previous 4 lines. Either way, continue with the following lines:

echo "source /opt/ros/groovy/setup.bash" >> ~/.bashrc
. ~/.bashrc

This will make sure the setup.bash of ros is run for every console you open.
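
To verify the environment is set up, open a new terminal and check that the ROS variables are present:

export | grep ROS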

Extra packages

Some extra packages that will come in handy:

rosinstall

sudo apt-get install python-rosinstall

rosdep

sudo apt-get install python-rosdep
sudo rosdep init
rosdep update

OpenNI

sudo apt-get install 'ros-groovy-openni-*'
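
To quickly verify that a Kinect is detected before continuing, you can start the stock OpenNI driver on a machine with a camera attached (stop it with ctrl+c afterwards; the rest of this guide starts the cameras through its own launch files):

roslaunch openni_launch openni.launch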

Custom packages

Get the packages created by us here, and put them in ~/ros/multirgbd/.

Add the packages to your package path:

echo "export ROS_PACKAGE_PATH=~/ros/camera_pose:~/ros/multirgbd:${ROS_PACKAGE_PATH}" >> ~/.bashrc
source ~/.bashrc
rosmake contrast
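
To confirm ROS can find the new packages:

rospack find contrast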

Configure

There will be 1 master computer that runs roscore. It can be a computer with or without a camera. We will refer to its IP-address as <rosmasterip>, so replace it in the next command with the real IP-address.

echo 'export ROS_MASTER_URI=http://<rosmasterip>:11311' >> ~/.bashrc

For every machine we need to configure a name. We will do this in an environment variable, set in .bashrc. Here we give every machine a number, but you can use whatever you want, as long as it is different for every machine. This name will be appended to other names (i.e.: camera_<machinename>/rgb/image_rect).

echo "export MACHINENAME=<machinename>" >> ~/.bashrc

Replace “<machinename>” with a unique name for every machine.
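
A quick check that both variables are set correctly on every machine (e.g. in the cluster console):

source ~/.bashrc
echo "$MACHINENAME @ $ROS_MASTER_URI"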

Intrinsic calibration

Before going to the next step make sure all changes to bashrc are loaded by executing:

source ~/.bashrc

The next part is based on : http://www.ros.org/wiki/openni_launch/Tutorials/IntrinsicCalibration
This has to be done on each computer with a Kinect attached. It needs to be done for both the RGB camera and the IR camera of each Kinect.
RGB:

rosrun camera_calibration cameracalibrator.py image:=/camera_$MACHINENAME/rgb/image_mono camera:=/camera_$MACHINENAME/rgb --size 6x7 --square 0.108

and IR:

rosrun camera_calibration cameracalibrator.py image:=/camera_$MACHINENAME/ir/image_raw camera:=/camera_$MACHINENAME/ir --size 6x7 --square 0.108

Because $MACHINENAME is an environment variable configured in .bashrc, this command works unchanged on every system computer.
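
With the default openni_launch configuration, the committed calibration usually ends up as YAML files under ~/.ros/camera_info/ (treat this location as an assumption if you changed the camera_info_url parameter). A quick check that the commit worked:

ls ~/.ros/camera_info/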

Extrinsic calibration

At the time of writing, the camera_pose_calibration package doesn't work for us. There is a patched version that can be obtained from:
https://github.com/jbohren-forks/camera_pose.git
We will put it in a separate path:

mkdir -p ~/ros
cd ~/ros
git clone https://github.com/jbohren-forks/camera_pose.git

Next we need to add the package to the ROS package path in our .bashrc (skip the echo line if you already added ~/ros/camera_pose in the Custom packages step) and build the new camera_pose package.

echo "export ROS_PACKAGE_PATH=~/ros/camera_pose:${ROS_PACKAGE_PATH}" >> ~/.bashrc
source ~/.bashrc
rosmake camera_pose

At the moment, there also seems to be a bug that causes the checkerboard not to be detected in the IR-images. A workaround to increase the contrast is included in the provided packages. More information about the workaround can be found under Contrast Augmenter.

Now we have a working camera_pose package. Next up, we start the calibration (based on http://www.ros.org/wiki/openni_launch/Tutorials/ExtrinsicCalibration).
For ease of use, we use 4 terminals (called A, B, C and D here).
Terminal A : Start the camera

cd ~/ros/multirgbd
rosparam set /use_sim_time false
sh restartcams.sh
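
The restartcams.sh script is part of the custom packages above. If you are missing it, the following is a minimal sketch of what such a script could look like, assuming the cameras are brought up by the camera.launch file mentioned later in this guide (the launch file name and package are assumptions; this is not the original script):

#!/bin/sh
# restartcams.sh - minimal sketch, not the original script.
# Assumes the camera runs under camera_$MACHINENAME via a
# camera.launch in the multirgbd package (hypothetical).
pkill -f camera.launch        # stop a previously started camera, if any
sleep 2                       # give the driver time to release the device
roslaunch multirgbd camera.launch &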

Terminal C : Start recording of the RGB image

rosbag record --limit=900 -O rgb /camera_$MACHINENAME/rgb/image_rect /camera_$MACHINENAME/rgb/camera_info

Terminal A : It seems we need to restart the camera to use the sim_time. So we first stop the running camera by pressing ctrl+c, and then execute:

rosparam set /use_sim_time true
sh restartcams.sh

Terminal B

rosrun contrast contrast_augmenter image:=/camera_$MACHINENAME/ir/image_raw

Terminal C

rosbag play rgb.bag --clock /camera_$MACHINENAME/rgb/camera_info:=/camera_$MACHINENAME/rgb_bag/camera_info /camera_$MACHINENAME/rgb/image_rect:=/camera_$MACHINENAME/rgb_bag/image_rect

Terminal D

roslaunch camera_pose_calibration calibrate_2_camera.launch camera1_ns:=/camera_$MACHINENAME/rgb_bag camera2_ns:=/camera_$MACHINENAME/ir_augmented checker_rows:=6 checker_cols:=7 checker_size:=0.108

For the moment a static publisher is used because the camera_pose_calibration results get overwritten by the calibrations of the other cameras.

To get the calibration results to add to the publisher, execute the following command:

rostopic echo /camera_calibration

This returns something like:

camera_pose:
  -
    position:
      x: 2.45231791679e-08
      y: 2.56790520354e-08
      z: -9.33343368125e-08
    orientation:
      x: 5.26699865725e-09
      y: -5.49261936085e-09
      z: -1.19152267898e-08
      w: 1.0
  -
    position:
      x: -0.0234595813715
      y: 0.00172405521122
      z: 0.024678377029
    orientation:
      x: 0.00971381883673
      y: 0.00148809839517
      z: -0.000950191515884
      w: 0.999951261024
camera_id: ['/camera_c1_rgb_optical_frame', '/camera_c1_depth_optical_frame']
time_stamp:
  -
    secs: 1363620079
    nsecs: 994782214
m_count: 1
---

This has to be put in the transform publisher that is provided. The publisher already contains some transforms as an example; to be usable, all pre-existing transforms should be removed and replaced with your own results. You can find the publisher under ~/ros/multirgbd/transform/src/tfpublish.cpp.

Which will translate to the publisher as:

transform.setOrigin( tf::Vector3(2.45231791679e-08, 2.56790520354e-08, -9.33343368125e-08) );
transform.setRotation( tf::Quaternion(5.26699865725e-09, -5.49261936085e-09, -1.19152267898e-08, 1.0) );
br.sendTransform(tf::StampedTransform(transform, stamp, "/camera_c1_link", "/camera_c1_rgb_optical_frame"));
transform.setOrigin( tf::Vector3(-0.0234595813715, 0.00172405521122, 0.024678377029) );
transform.setRotation( tf::Quaternion(0.00971381883673, 0.00148809839517, -0.000950191515884, 0.999951261024) );
br.sendTransform(tf::StampedTransform(transform, stamp, "/camera_c1_link", "/camera_c1_depth_optical_frame"));

Replace all "c1"s in the code with the machine name you configured for the machine you are calibrating (in /camera_c1_link, /camera_c1_rgb_optical_frame and /camera_c1_depth_optical_frame).

After all calibration is done, you can compile the publisher and test it by executing:

rosmake transform
rosparam set /use_sim_time false # make sure you are not running in sim_time any more.
sh restartcams.sh

rosrun transform tfpublisher

Now you can visualise the point clouds in rviz to see how they match.
Of course, you can do this after every calibration.

Problems

Some problems we encountered:

Problem: camera_pose_calibration gives "timeout waiting for interval".
Solution: get the package from https://github.com/jbohren-forks/camera_pose.git, make sure it is in your ROS_PACKAGE_PATH and compiled ("roscd camera_pose" should bring you to that directory).

Problem: camera_pose_calibration detects the board, but doesn't do any optimisation (in our case, there were no filtered intervals).
Solution: "rosparam set /use_sim_time true" probably was not done correctly (restart the camera, rosparam set /use_sim_time true, start the contrast_augmenter, play the bag, run the calibration).

Problem: "Couldn't get measurement in interval".
Solution: the lighting is not good, or the board is not clearly visible (put the board closer, straighter, ...). See also http://www.ros.org/wiki/kinect_calibration/technical
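
A generally useful check when transforms seem wrong is to dump the current tf tree and verify that all camera frames are connected:

rosrun tf view_frames && evince frames.pdf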

Extrinsic Kinect/Kinect Calibration

To calibrate one Kinect to another, the ‘camera_pose_calibration’ package is available: http://www.ros.org/wiki/camera_pose_calibration/

To be able to record the calibration data, you first have to enable two settings:

  • First, in the camera.launch file, put the ‘update_calibration’ value to true. This will make the system register the calibration.
  • Secondly, in the .bashrc file, set the ‘PUBLISHTF’ value to 1. This will make the system publish the calibration results.

Now you can start the calibration procedure. Note that the two namespaces must point to two different cameras, so replace <machinename1> and <machinename2> with the machine names of the two machines you are calibrating ($MACHINENAME cannot be used for both, as it would expand to the same name twice):

roslaunch camera_pose_calibration calibrate_2_camera.launch camera1_ns:=/camera_<machinename1>/rgb camera2_ns:=/camera_<machinename2>/rgb checker_rows:=6 checker_cols:=7 checker_size:=0.108

To get the calibration results to add to the publisher, the following command is used:

rostopic echo /camera_calibration

This returns something like:

camera_pose:
  -
    position:
      x: -3.63658955385e-08
      y: -4.8603075272e-09
      z: -1.58685121596e-07
    orientation:
      x: -7.94426628579e-09
      y: 1.55851977768e-08
      z: -1.8963585773e-08
      w: 1.0
  -
    position:
      x: 3.2910219237
      y: 1.32708138614
      z: 1.42112952331
    orientation:
      x: 0.0347380543169
      y: -0.473437925891
      z: 0.383153612484
      w: 0.792365513605
camera_id: ['/camera_c1_rgb_optical_frame', '/camera_c2_rgb_optical_frame']
time_stamp:
  -
    secs: 1364897123
    nsecs: 226453318
m_count: 1
---

Which will translate to the publisher as:

transform.setOrigin( tf::Vector3(3.2910219237, 1.32708138614, 1.42112952331) );
transform.setRotation( tf::Quaternion(0.0347380543169, -0.473437925891, 0.383153612484, 0.792365513605) );
br.sendTransform(tf::StampedTransform(transform, stamp, "/camera_c1_link", "/camera_c2_link"));

Replace c1 and c2 with the machine names of the two machines you just calibrated.

Bring the system out of calibration mode again by reverting the following settings:

  • In the camera.launch file, put the ‘update_calibration’ value to false. We don’t want any calibration to be registered anymore.
  • In the .bashrc file, set the ‘PUBLISHTF’ value to 0. The transform publisher will publish the transforms from now on, so the calibration program shouldn’t do this anymore (otherwise we would get conflicting transforms).
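
After adding the new transform to tfpublish.cpp, rebuild and restart the publisher as before:

rosmake transform
rosrun transform tfpublisher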

AR_Pose

To filter out the unwanted information from outside the wanted capturing volume, a cropbox filter is needed. To use this filter properly, we need some real-world coordinates.
We are going to mark one corner of our cropbox as the zero coordinate of the world. To do this, we will use AR_Pose to mark the base of the world coordinates.

The installation is based on http://www.ros.org/wiki/ccny_vision#Installing

Put the launch file under ar_pose/launch/. This should at least be done on the computer with the camera that will register the reference point, but can be done on all computers. There is no strict rule about which camera the reference point should be taken from; we took the first camera, but you can choose.
An AR marker is needed. The launch file is configured for the Hiro marker with sides of 202.5 mm, but you can change it to whatever you like by changing the appropriate parameters in the launch file (e.g. if you use another size of marker, change “marker_width”).

Put this marker where you want the origin of your world coordinates to be. With our standard configuration this is the centre of the first module of 4 cameras.

Make sure the transform publisher is not running (otherwise the calibration will not work), and run:

roslaunch ar_pose worldref.launch

While the marker is being detected, you can get the transform with:

rostopic echo /tf

The frame_id will be “/world” and the child_frame_id will be “/camera”. As before, the translation vector and the quaternion should be added to tfpublish.cpp like:

transform.setOrigin( tf::Vector3(-1.07098312974, 2.8015400268, 3.1980637327) );
transform.setRotation( tf::Quaternion(0.820193214334, 0.249649552025, 0.108721317642, 0.503128082525) );
br.sendTransform(tf::StampedTransform(transform, stamp, "/world", "/camera_c1_link"));

Here, too, replace “c1” with the machine name of the computer your reference camera is attached to.

The cropbox is configured for two modules of 1.8 m x 1.8 m x 1.5 m (w x d x h). If you want to change this, you can (for the moment) only do so in the code of multirgbd/filters/src/cropbox.cpp (size_x, size_y, size_z).

Result

Now you should have a working multi RGB-D camera setup that you can visualise in rviz.

rosrun rviz rviz

You can add the point clouds you want as PointCloud2 displays. Set the fixed frame to /world so all clouds are shown in the common coordinate system.

  • /camera_<machinename>/depth_registered/points shows the raw coloured point cloud.
  • /camera_<machinename>/depth_filtered/points shows the coloured point cloud filtered by the cropbox.

If any problems arise, please let us know via a comment.

Comments
  1. How to visualize point clouds from 2 kinects simultaneously in RViz? I have connected 2 kinects to 2 different USB controllers.

    • Are both cameras running? I.e.: can you visualise them separately in rviz?

      • Yes. Both kinects are running with namespaces /camera and /camera2. I can view the clouds from the kinects individually by switching the fixed frame between /camera_depth_optical_frame and /camera2_depth_optical_frame.

        • Do you get an error message in rviz?
          My first guess is that there probably is no transformation between the frames of both cameras. Did you finish the calibration procedure without problems?
          You might want to check your tf tree by using the command “rosrun tf view_frames && evince frames.pdf”. Do you see a link between both cameras in the tree?

  2. Hi,

    I’m proceeding with kinect extrinsic calibration and…
    I’ve installed ROS groovy on ubuntu 12.10, and I’ve followed the guide, but I cannot find the script:

    restartcams.sh

    Please can you give me some advice, or more details about the whole process?

    Thanks
    Alexandro
