
Setting up a 3D pedestrian tracking system

The following is a guest post by Daniel Low.

This is the first post of a multi-part series about prototyping a 3D pedestrian tracking system with ZED cameras.

In this post, we will walk through how to set up a 3D pedestrian tracking system using ZED cameras. The project makes use of the open-source OpenPTrack project (Munaro, Basso and Menegatti, 2016; Munaro, Horn, Illum, Burke and Rusu, 2014; Munaro and Menegatti, 2014); more information about it can be found here. OpenPTrack provides a quick way to set up a multi-camera pedestrian tracking system.

Fortunately, if you are using hardware and software similar to those used by the OpenPTrack team, you can install a working system by following their instructions; the instructions and the list of supported hardware and software can be found on their wiki here. In our case, however, we will be using ZED cameras instead. The rest of our software and hardware matches the setup tested by OpenPTrack (i.e. CUDA 7.5 on an Ubuntu 14.04 (Trusty) system with Robot Operating System (ROS) Indigo).
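If you want to double-check that your base system matches this setup before going further, the following standard commands will do:

$ lsb_release -a   # should report Ubuntu 14.04 (Trusty)
$ uname -m         # should report x86_64, matching the 64-bit packages used below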

Some Dependencies

First, ensure that your machine is up to date and has the basic packages required by OpenPTrack. You can run the following commands to update it:

$ sudo apt-add-repository universe
$ sudo apt-add-repository multiverse
$ sudo apt-get update && sudo apt-get upgrade
$ sudo apt-get install bash-completion command-not-found locate git gitg vim
$ sudo apt-get install ntp

CUDA

Next, install the appropriate CUDA version for your hardware from this link. The appropriate CUDA version for each GPU is also listed on the NVIDIA website. Depending on which file you download, the installation commands differ.

#With a run file
$ sudo sh cuda_7.5.18_linux.run 

#With a package
$ sudo dpkg -i cuda-repo-ubuntu1404-7-5-local_7.5-18_amd64.deb
$ sudo apt-get update
$ sudo apt-get install cuda

Follow the on-screen instructions and the installation should complete successfully. After installation, set the environment paths:

$ echo "export PATH=/usr/local/cuda/bin:$PATH" >> ~/.bashrc
$ echo "export LD_LIBRARY_PATH=/usr/local /cuda/lib:$LD_LIBRARY_PATH" >> ~/.bashrc
$ source ~/.bashrc

You can verify the installation by installing the CUDA samples, then building and running the deviceQuery sample.

$ cuda-install-samples-7.5.sh  ~ 
$ cd ~/NVIDIA_CUDA-7.5_Samples 
$ cd 1_Utilities/deviceQuery 
$ make 
$ ./deviceQuery
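Alternatively, once the paths added to ~/.bashrc are active, a quicker sanity check is to query the CUDA compiler version:

$ nvcc --version   # should report release 7.5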

ROS Indigo

Next, install ROS Indigo and OpenCV 2.4 (bundled with ROS). You can execute the scripts provided in the OpenPTrack folder for this.

$ cd open_ptrack/scripts
$ chmod +x *.sh
$ ./ros_install.sh
$ echo "source /opt/ros/indigo/setup.bash" >> ~/.bashrc
$ source ~/.bashrc
$ ./ros_configure.sh

The scripts install the basic ROS Indigo packages needed by OpenPTrack. If the scripts fail to complete, the commands they run are broken down below. Note that a few additional packages are installed, along with PCL, to prevent errors that may be encountered when running OpenPTrack.

Some commands from the OpenPTrack scripts have also been skipped, as they install dependencies or configure settings for the Kinect camera, which we are not using. Do not execute the commands below if the OpenPTrack scripts completed successfully for you.

#ROS Indigo
$ sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
$ sudo apt-key adv --keyserver hkp://pool.sks-keyservers.net --recv-key 0xB01FA116  
$ sudo apt-get update 
$ sudo apt-get install ros-indigo-desktop
$ sudo apt-get install python-rosinstall
$ sudo rosdep init
$ rosdep update
$ echo "source /opt/ros/indigo/setup.bash" >> ~/.bashrc
$ source ~/.bashrc

#PCL
$ sudo add-apt-repository ppa:v-launchpad-jochen-sprickerhof-de/pcl
$ sudo apt-get update
$ sudo apt-get install libpcl-all

#Necessary ROS packages
$ sudo apt-get install ros-indigo-compressed-depth-image-transport ros-indigo-compressed-image-transport ros-indigo-pcl-ros ros-indigo-camera-info-manager ros-indigo-driver-base ros-indigo-calibration ros-indigo-image-view 
$ sudo apt-get install ros-indigo-robot-state-publisher ros-indigo-cmake-modules ros-indigo-freenect-stack ros-indigo-openni-launch ros-indigo-camera-info-manager-py

#Make Catkin workspace (OPT uses ~/workspace/ros/catkin)
$ mkdir -p ~/workspace/ros/catkin/src
$ cd ~/workspace/ros/catkin
$ catkin_make --force-cmake
$ mkdir -p ~/workspace/ros/rosbuild
$ rosws init ~/workspace/ros/rosbuild ~/workspace/ros/catkin/devel
$ echo "source ~/workspace/ros/rosbuild/setup.bash" >> ~/.bashrc
$ echo "export LC_ALL=C" >> ~/.bashrc
$ source ~/.bashrc
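Whichever route you took, you can verify the ROS installation and workspace with a couple of standard commands:

$ rosversion -d              # should print "indigo"
$ echo $ROS_PACKAGE_PATH     # should include your catkin src folder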

OpenCV3

Next, the ZED SDK requires OpenCV 3 or later. However, it must not conflict with the OpenCV 2.4 already installed with ROS, so you have to install it under a separate prefix and make sure the libraries are linked correctly.

$ cd
$ sudo apt-get update

#install dependencies
$ sudo apt-get install libopencv-dev build-essential checkinstall cmake pkg-config yasm libtiff4-dev libjpeg-dev libjasper-dev libavcodec-dev libavformat-dev libswscale-dev libdc1394-22-dev libxine-dev libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev libv4l-dev python-dev python-numpy libtbb-dev libqt4-dev libgtk2.0-dev libfaac-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev libvorbis-dev libxvidcore-dev x264 v4l-utils

$ sudo add-apt-repository ppa:mc3man/trusty-media
$ sudo apt update && sudo apt dist-upgrade
$ sudo apt-get install ffmpeg  
$ sudo apt-get install frei0r-plugins  

#the new directory to install OpenCV3 in
$ sudo mkdir /usr/local/opencv3
$ mkdir OpenCV  
$ cd OpenCV  
$ git clone https://github.com/opencv/opencv.git
$ cd opencv 
$ git checkout tags/3.1.0
$ mkdir release  
$ cd release  
$ cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local/opencv3 -D WITH_TBB=ON -D BUILD_NEW_PYTHON_SUPPORT=ON -D WITH_V4L=ON -D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON -D BUILD_EXAMPLES=ON -D WITH_QT=ON -D WITH_OPENGL=ON -D ENABLE_FAST_MATH=1 -D CUDA_FAST_MATH=1 -D WITH_CUBLAS=1 ..

$ make -j$(nproc)
$ sudo make install

$ echo "#OpenCV3" >> ~/.bashrc
$ echo "PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/opencv3/lib/pkgconfig" >> ~/.bashrc
$ echo "export PKG_CONFIG_PATH" >> ~/.bashrc
$ source ~/.bashrc
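To confirm the OpenCV 3.1 build landed where expected, you can check the installed libraries and the pkg-config entry (a quick check, assuming the /usr/local/opencv3 prefix used above):

$ ls /usr/local/opencv3/lib/libopencv_core.so*
$ pkg-config --modversion opencv   # should report 3.1.0 once the new PKG_CONFIG_PATH is picked up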

ZED SDK

Now that you have the various dependencies and libraries for OpenPTrack installed, you need to install the ZED SDK and its ROS wrapper before installing OpenPTrack. You can download the ZED SDK from their website; make sure to download version 1.1.0, as later versions are only compatible with ROS Kinetic or later (and not Indigo).

The downloaded file is a run file, so you can install it in the same way as the CUDA run file above. Ensure that the ZED camera is plugged into a USB 3.0 port before beginning the installation. After installation, run the ZED Depth Viewer, ZED Explorer, and ZED Diagnostics to check for errors; these programs can be found in the /usr/local/zed/tools folder.
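As a rough sketch, the installation looks like the following (the installer filename here is hypothetical and depends on the exact build you download):

# make the run file executable and launch the installer
$ chmod +x ZED_SDK_Linux_v1.1.0.run
$ ./ZED_SDK_Linux_v1.1.0.run
# after installation, the tools mentioned above live here
$ ls /usr/local/zed/tools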

If you encounter an error saying that libopencv_core.so.3.1 cannot be found, link the library using the following steps:

# Find where the libopencv_core.so.3.1 file is on your system.
# Create a file called /etc/ld.so.conf.d/opencv.conf and write into it the paths of the folders where the
# libraries are stored, one per line (e.g. /usr/local/opencv3/lib, which is where they are stored for us).
# Once done, execute the two commands below.
$ sudo ldconfig -v
$ echo "export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/opencv3/lib" >> ~/.bashrc

Run the SDK installer again and check that ZED Explorer and the Depth Viewer run without issues. These two programs let you view the video feed from the ZED camera and its depth map. You can now use the ZED camera with your computer!

ZED-ROS-WRAPPER

For OpenPTrack to access our ZED camera, we need to install the ROS wrapper for the ZED SDK, which you can find at this link. The wrapper consists of a node and launch files that publish various data from the ZED camera onto the ROS network.

The wrapper version to install is v1.0.0, which is the latest version compatible with ZED SDK v1.1.0 (the next release, v1.2.0, also requires ROS Kinetic). To install the wrapper, download it and place it inside the catkin src folder (~/workspace/ros/catkin/src in our case), then execute the following commands to compile it.

$ cd ~/workspace/ros/catkin
$ catkin_make
$ source devel/setup.bash

This should complete without any errors. To test if the wrapper is installed properly, you may run it with the following commands.

$ roslaunch zed_wrapper zed.launch
#in a new terminal window
$ rosrun image_view image_view image:=/camera/rgb/image_rect_color

If you see a new window appear which displays the video feed from the ZED camera, your ROS wrapper has been installed successfully.
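You can also list the topics the wrapper publishes and confirm that data is flowing, using standard ROS tools (topic names as used later in this post):

$ rostopic list | grep camera
$ rostopic hz /camera/point_cloud/cloud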

However, for our purposes, certain files in the wrapper need to be edited to make it compatible with the OpenPTrack software. A branch of the ZED ROS wrapper with these changes already made is available for download here. The main changes are as follows:

  1. Change the orientation of the pointcloud in zed_wrapper_node.cpp by replacing lines 240-242 with the following code.
    point_cloud.points[i].x =  cloud[index4++];
    point_cloud.points[i].y = -cloud[index4++];
    point_cloud.points[i].z = -cloud[index4++];
    

    This allows the point cloud to display properly when viewed in RViz or when using OpenPTrack's manual ground plane estimation program.

  2. In the zed.launch file, there are some parameters used to configure the ZED camera. Change quality to 3 and resolution to 0 or 1. This improves the resolution of the point cloud and video feed captured by the camera, which in turn improves the overall tracking quality of the system.

After you have made these changes, or downloaded the edited branch of the wrapper, recompile the wrapper and test that it still works.
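A minimal recompile-and-retest sketch, reusing the commands from earlier (assuming the workspace layout above):

$ cd ~/workspace/ros/catkin
$ catkin_make
$ source devel/setup.bash
$ roslaunch zed_wrapper zed.launch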

OpenPTrack

Finally, we can install OpenPTrack. Download OpenPTrack from its GitHub page and place it inside the catkin src folder (~/workspace/ros/catkin/src for us). Before compiling and installing it, we need to add some files to let our ZED camera interface with the OpenPTrack software.

Similar to the wrapper, there is a branch of OpenPTrack with the files already configured for a ZED camera, which you may download here. You can either install that version of OpenPTrack, or make the following changes to your files:

  1. Change any instance of find_package(OpenCV REQUIRED) to find_package(OpenCV 2.4 REQUIRED) in the CMakeLists.txt files. This ensures OpenPTrack picks up the OpenCV 2.4 library rather than the OpenCV 3 build we installed alongside it.

  2. Create a ground_based_people_detector_zed.yaml file in ~/workspace/ros/catkin/src/open_ptrack/detection/conf. This is just a config file, which can be created by copying the kinect2 config file in the same folder.

  3. Create a detector_depth_zed.launch file in ~/workspace/ros/catkin/src/open_ptrack/detection/launch. This file launches the ZED camera and subscribes to the various topics it publishes.
    It also loads OpenPTrack's detection package so that pedestrian detection can begin. To create this file, you can take reference from the other detector launch files in the same folder. The completed file is as follows:
    <launch>
       <include file="$(find zed_wrapper)/launch/zed.launch"/>
       <!-- Launch ground based people detection node -->
       <node pkg="detection" type="ground_based_people_detector" name="ground_based_people_detector" output="screen" required="true">
         <rosparam command="load" file="$(find detection)/conf/ground_based_people_detector_zed.yaml" /> 
         <param name="classifier_file" value="$(find detection)/data/HogSvmPCL.yaml"/>
         <!-- <param name="pointcloud_topic" value="/camera/depth_registered/points"/> -->
    
         <param name="camera_info_topic" value="/camera/rgb/camera_info"/>
         <param name="output_topic" value="/detector/detections"/>
         <param name="pointcloud_topic" value="/camera/point_cloud/cloud"/>
         <param name="rate" value="60.0"/>  
       </node>
    
    </launch>
    
  4. Create a detection_and_tracking_zed.launch file in ~/workspace/ros/catkin/src/open_ptrack/tracking/launch. This is the file we will run with roslaunch to begin pedestrian tracking with the ZED camera, so it needs to launch the ZED camera, the detection node, the tracking node, and the various messaging and visualization nodes used by OpenPTrack. As with the previous file, its contents can be derived from the other detection_and_tracking launch files in the same folder. The completed file is as follows:
    <launch>
      <!-- People detection -->
      <include file="$(find detection)/launch/detector_depth_zed.launch"/>
      <!-- People tracking -->
      <include file="$(find tracking)/launch/tracker.launch"/>
       
      <!-- UDP messaging -->
      <include file="$(find opt_utils)/launch/ros2udp_converter.launch"/>
    
      <!-- Visualization -->
      <include file="$(find opt_utils)/launch/visualization.launch"/>
    
    </launch>
    

    Now we can install OpenPTrack by executing the openptrack_install.sh script inside the ~/workspace/ros/catkin/src/open_ptrack/scripts folder. The script may need to be run twice to install properly. If the script does not execute successfully, you can install OpenPTrack by executing the following commands:

    $ cd ~/workspace/ros/catkin/src/open_ptrack/scripts
    $ chmod +x *.sh
    $ ./calibration_toolkit_install.sh
    $ cd ~/workspace/ros/catkin
    $ catkin_make --pkg calibration_msgs
    $ catkin_make --pkg opt_msgs
    $ catkin_make --force-cmake
    

    If it compiles successfully, you may move on to trying out the pedestrian tracking system!
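Before launching anything, you can optionally confirm that the OpenPTrack packages ended up on your ROS package path. This is just a quick sanity check; the package names are taken from the launch files above.

$ rospack find detection
$ rospack find tracking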

Testing and Troubleshooting

After successfully installing OpenPTrack, execute the following command, with the ZED camera plugged in, to begin the program.

$ roslaunch tracking detection_and_tracking_zed.launch

An RViz window should appear, showing coloured markers for the pedestrians detected and the paths they have moved along. Another window (the detection window) should appear showing the video feed from the ZED camera, marking out detections with rectangles in real time.

You may run the following command in another terminal to tweak your tracking results

$ rosrun rqt_reconfigure rqt_reconfigure

This command brings up a real-time reconfiguration window, which reads the configuration files you have loaded for the ZED camera and displays all the settings they contain. You can change the settings in this window and observe in real time whether the tracking improves or deteriorates.

This lets you fine-tune your tracking results without having to restart the node each time you change a setting. Note, however, that the settings are not saved when you kill the detection_and_tracking_zed node.

You may want to take a screenshot of the settings you have changed before killing the node, then enter the new values into the ground_based_people_detector_zed.yaml file you created previously (in ~/workspace/ros/catkin/src/open_ptrack/detection/conf). This file is read each time the node is launched, so your new settings will be applied the next time you start the tracking and detection process.

A common issue is a blank detection window. If the detection window appears and remains blank, try the following steps:

  1. Open roi_viewer.cpp in ~/workspace/ros/catkin/src/open_ptrack/opt_utils/apps.
  2. Add waitKey(1); after the line cv::imshow("Detections", cv_ptr->image); (around line 226).
  3. Save the file and recompile the program using catkin_make.
  4. Try the launch file again.

This should allow the detection window to display all the detections registered by the OpenPTrack software.
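For step 3 above, recompiling only the affected package is usually enough; a minimal sketch, assuming roi_viewer belongs to the opt_utils package (as its path suggests):

$ cd ~/workspace/ros/catkin
$ catkin_make --pkg opt_utils
$ source devel/setup.bash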

Another common mistake is subscribing or publishing to mismatched topics, so that the data captured by the ZED camera never reaches the detection and tracking software. This is likely the case if no detections are shown at all, even when a single person is standing unobstructed and in full view of the ZED camera.

Apart from manually checking your launch files for the published and subscribed topics (there are currently only three, so this is quick to do), you can use the rqt_graph plugin to see how your nodes are connected and which topics they publish or subscribe to.
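For example, the following standard ROS commands visualize the node graph and inspect the topics used in our launch files:

# show how nodes and topics are connected
$ rosrun rqt_graph rqt_graph
# check which nodes publish and subscribe to the detection and point cloud topics
$ rostopic info /detector/detections
$ rostopic info /camera/point_cloud/cloud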

All in all, this write-up has covered how to set up the OpenPTrack system with a ZED camera on Ubuntu 14.04.

References

M. Munaro, F. Basso and E. Menegatti. (2016). OpenPTrack: Open Source Multi-Camera Calibration and People Tracking for RGB-D Camera Networks. Journal on Robotics and Autonomous Systems, vol. 75, part B, pp. 525-538, Elsevier, 2016.

M. Munaro, A. Horn, R. Illum, J. Burke and R. B. Rusu. (2014). OpenPTrack: People Tracking for Heterogeneous Networks of Color-Depth Cameras. In IAS-13 Workshop Proceedings: 1st Intl. Workshop on 3D Robot Perception with Point Cloud Library, pp. 235-247, Padova, Italy, 2014.

M. Munaro and E. Menegatti. (2014). Fast RGB-D People Tracking for Service Robots. Journal on Autonomous Robots, vol. 37(3), pp. 227-242, Springer, 2014.

Read Part 2: Implementing OpenPTrack on Ubuntu 16.04
Read Part 3: Multi-camera setup with OpenPTrack