
Multi tracker

Multiple object tracking for single camera in ROS



Multi tracker is a basic ROS package for real-time tracking of multiple objects in 2D. Primary testing has been on walking fruit flies. Only basic object-object interaction is supported: objects larger than a specified size are split into two objects (so three objects coming together will still only be seen as two). Adjusting the thresholds and the image processing function can help improve robustness. The system works reliably on a high-end desktop from 2012 tracking 10+ objects. The code supports multiple tracking instances on the same computer through the "nodenum" option that is available on all the tracking nodes.

Below is a video showing raw footage on the left and the real-time output from the liveviewer.py node on the right.

The package was built and tested with Point Grey USB Firefly cameras on Ubuntu 12.04, and with Basler GigE cameras using the camera_aravis driver on Ubuntu 12.04 and 14.04. However, there is no reason it shouldn't work with other cameras.

Installing

Install ROS, if you have not already done so (tested with ROS Kinetic, full desktop install): http://wiki.ros.org/kinetic/Installation/Ubuntu

Setup your catkin workspace, if you have not already done so: http://wiki.ros.org/catkin/Tutorials/create_a_workspace

Then clone the git repository and run catkin_make:

$ cd ~/catkin_ws/src
$ git clone https://github.com/florisvb/multi_tracker.git
$ cd ~/catkin_ws
$ catkin_make

Cameras

Point grey usb camera

Install the appropriate camera driver, such as http://wiki.ros.org/pointgrey_camera_driver. To talk to the camera, you may need a udev rule. There is an example udev rule for a Point Grey Firefly camera in the rules folder. Move this file to the /etc/udev/rules.d directory.

Basler GigE camera / Camera Aravis

See aravis_install_notes

Analysis

To use the analysis tools, run python ./setup.py install from inside the multi_tracker directory. You may want to do this in a virtual environment: http://docs.python-guide.org/en/latest/dev/virtualenvs/

You may need to install some of my other open-source packages, e.g. DataFit and FlyPlotLib.

Analysis tools currently rely on pandas and hdf5 file formats.
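
To inspect a tracked-data hdf5 file directly (outside the provided analysis tools), a minimal sketch using h5py is shown below; the filename and the dataset names inside the file depend on your configuration, so treat them as placeholders.

import h5py

def show(name, obj):
    # Print every group and dataset found in the file.
    print(name, obj)

# 'trackedobjects.hdf5' is a placeholder; use the file in your data directory.
with h5py.File('trackedobjects.hdf5', 'r') as f:
    f.visititems(show)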

Overview

Parameters

(examples are found in the /demo folder)

camera_parameters.yaml: specifies key camera parameters, such as framerate, exposure time, etc. These parameter names may be camera-brand dependent. The camera parameters can also be specified by running rosrun rqt_reconfigure rqt_reconfigure.

tracker_parameters.yaml: specifies various tracking related parameters

data_association_parameters.yaml: specifies various data association related parameters

kalman_parameters.py: specifies the Kalman filter parameters
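
These files are loaded onto the ROS parameter server under the /multi_tracker/N/ namespace (N is the nodenum) when the tracker is launched. A minimal sketch of reading one of these values from Python, assuming nodenum 1:

import rospy

rospy.init_node('param_check', anonymous=True)
# Read the image topic the tracker is configured to use; the second
# argument is only a fallback default for this illustration.
image_topic = rospy.get_param('/multi_tracker/1/tracker/image_topic', '/camera/image_mono')
print('tracker image topic:', image_topic)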

Nodes

tracker_simplebuffer.py

Listens to /camera/image_mono (or whatever topic is specified in tracker_parameters.yaml) and uses the image processing routine specified in tracker_parameters.yaml to find the outermost contours of objects in the tracking space. Publishes a list of contours each frame, including x,y position, angle, and area.
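
A minimal sketch of a node that subscribes to the published contours and prints them; the topic name assumes nodenum 1, and the field names on each contour (x, y, angle, area) are assumed to match the description above.

import rospy
from multi_tracker.msg import Contourinfo, Contourlist

def contour_callback(msg):
    # msg.contours is assumed to hold one Contourinfo per detected object
    for c in msg.contours:
        rospy.loginfo('x=%0.1f y=%0.1f angle=%0.2f area=%0.1f', c.x, c.y, c.angle, c.area)

rospy.init_node('contour_listener', anonymous=True)
rospy.Subscriber('/multi_tracker/1/contours', Contourlist, contour_callback)
rospy.spin()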

data_association.py

Listens to the /multi_tracker/n/contours topic and runs a simple Kalman filter to do data association and filtering between subsequent frames. Publishes tracked objects to /multi_tracker/n/tracked_objects.

save_data_to_hdf5.py

Listens to /multi_tracker/n/tracked_objects and saves the data to an hdf5 file. Alternatively, one can use "rosbag record" to record the data in ROS bag format, which can be replayed at a later time. Very rarely, but not never, this node crashes. For critical data, I recommend using rosbag record as a backup.

liveviewer.py

Listens to the camera topic specified in liveviewer_parameters.yaml, and overlays the tracked objects from the /multi_tracker/n/tracked_objects topic. Use the topic /multi_tracker/n/processed_image to see the tracking overlaid on the processed image from which the contours are calculated. This can help with debugging and optimizing the tracker parameters.

delta_video_simplebuffer.py

Listens to the camera topic specified in delta_video_parameters.yaml, and saves the positions and values of any pixels that change by more than the specified threshold. This results in a dramatically compressed file size relative to a full-resolution video, while keeping artifacts to an absolute minimum. Using a threshold of 10 and tracking 10 flies, 20 hours of 30 fps video comes to 20-40 GB.
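
A rough numpy illustration of the idea (this is not the actual node): compare each frame to the background image and keep only the coordinates and values of pixels that differ by more than a threshold.

import numpy as np

def delta_encode(frame, background, threshold=10):
    # Return coordinates and values of pixels that differ from the
    # background by more than the threshold (illustrative only).
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    changed = diff > threshold
    ys, xs = np.nonzero(changed)
    return xs, ys, frame[changed]

# Toy frames standing in for camera images.
background = np.full((480, 640), 120, dtype=np.uint8)
frame = background.copy()
frame[100:110, 200:210] = 255  # simulate a bright object
xs, ys, values = delta_encode(frame, background)
print(len(values), 'changed pixels stored instead of', frame.size)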

delta_video_player.py

This node will replay a saved delta_video stream.

save_bag.py

Automatically starts and stops saving a rosbag file. Requires a config file. See raw_data_bag_config.py in the demo for a template.

republish_pref_obj_data.py

Republishes data from the oldest tracked object using a standard ROS message type (subject to change!). This is useful when doing a closed-loop experiment with a single target.

Running

Minimal steps to run:

  1. copy the /demo folder to your home directory
  2. get a camera running on the ROS network (see Cameras above), note the image topic it is publishing to
  3. if the camera image topic is not /camera/image_raw, edit the /demo/demo_1/src/tracker_parameters.yaml file so that /multi_tracker/1/tracker/image_topic matches that image topic. For color cameras, it is best to use the /camera/image_mono topic, if it exists.
  4. from inside the ~/demo/demo_1/src folder, run "roslaunch tracking_launcher.launch". This will load all the yaml (parameter) files, and launch the tracker, data_association, save_hdf5_data, and liveviewer nodes.
  5. Hit control-c to stop the nodes (and cease collecting data).

Now you can try editing some of the contents of the yaml files to change the file structure and tracking parameters.

The two demo folders (demo_1 and demo_2) are there to illustrate how you can run two instances on one computer in parallel. You can even run them on the same camera feed, using ROIs to split a single camera feed into two experiments.

Image Processing

The code supports externally defined image processing functions, so you can write your own Python image processing function and tell the tracker to use it rather than the default functions included in this package. To define your own image processing function, set the parameter /multi_tracker/1/tracker/image_processing_module to the exact path to your python file, and /multi_tracker/1/tracker/image_processor to the name of the function within that file.

To write your own image_processing function, look at the incredibly_basic function in image_processing.py and start with it as a template. Don't forget to import some of the ROS-specific stuff:

from multi_tracker.msg import Contourinfo, Contourlist
from multi_tracker.msg import Trackedobject, Trackedobjectlist
from multi_tracker.srv import resetBackgroundService

Alternatively, you can edit the image_processing.py file in multi_tracker/nodes, and add your function there. Then set /multi_tracker/1/tracker/image_processor to the name of your function.
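
As a rough illustration of what such a function does (threshold a difference image, find the outermost contours, and extract position, angle, and area for each), here is a standalone OpenCV sketch. It is not the incredibly_basic function itself, and its name, signature, and defaults are hypothetical; copy the actual signature from image_processing.py when writing a real replacement.

import cv2
import numpy as np

def find_contour_info(image, background, threshold=20, min_area=2):
    # Hypothetical helper: threshold the difference image and return
    # (x, y, angle, area) for each outermost contour found.
    diff = cv2.absdiff(image, background)
    _, binary = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    found = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = found[0] if len(found) == 2 else found[1]  # OpenCV 4 vs 3
    results = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < min_area:
            continue
        if len(c) >= 5:
            (x, y), _, angle = cv2.fitEllipse(c)
        else:
            m = cv2.moments(c)
            x, y = m['m10'] / m['m00'], m['m01'] / m['m00']
            angle = 0.0
        results.append((x, y, angle, area))
    return results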

Lighting

For our experiments, we typically use infrared lighting, which is invisible to human and most animal eyes; cameras (with the IR-blocking filter removed), however, are exceptionally sensitive to this wavelength. This allows us to image the animals (a) independent of ambient lighting, and (b) independent of any visible-light motion that may be used as an experimental cue. Infrared panels can be purchased, but they are excessively expensive (~$800-1200). Below are instructions on how to make one for <$50.

Lighting panel

Lighting parts list

Note: parts ship from China; express shipping takes ~1-2 weeks to Los Angeles.

Cut the reel at the designated cut points and attach the connectors to the appropriate +/- terminals to link multiple strips. In the example below, I have attached the strips to a sheet of laser cut acrylic, and use two additional pieces of acrylic to prevent the connector cables from moving.

You can find simple SolidWorks templates for the base plate to which the LEDs are stuck, as well as the arena walls, in the solidworks folder.

Lighting panel 2

Replaying Delta Video Movies

Inside multi_tracker/examples/sample_data you will find a delta_video bag file, along with its associated background image(s), and some launch files. The delta video bag file contains, for each frame, the positions and values of the pixels that differ from the background image by more than a threshold value (defined when the video was recorded). In cases where the background changes by a significant amount (e.g. 20%), a new background image is saved.

To manually replay the delta video bag file as a standard ROS image topic, run the following commands, each in their own terminal window:

roscore
rosparam set use_sim_time true
rosbag play [BAG FILENAME] --clock --pause
rosrun multi_tracker delta_video_player --in=[DELTA VIDEO TOPIC, e.g. /multi_tracker/1/delta_video] --out=[RECONSTRUCTED CAMERA TOPIC, e.g. /camera/image_raw] --directory=[DIRECTORY WHERE BACKGROUND IMAGES ARE FOUND - FULL PATH REQ'D]
rosrun image_view image_view image:=[RECONSTRUCTED CAMERA TOPIC, e.g. /camera/image_raw]

Quick start example

Inside multi_tracker/examples/sample_data, edit launch_delta_video.launch such that the line defining the default path matches the actual full path where the sample data is located.

Now launch the launch file using: roslaunch launch_delta_video.launch

This will play the example bag file in paused mode; with the terminal window selected, press the space bar to start the movie.

Tracking

Quick start example
  1. copy the /examples/demo folder to your home directory as demo
    (eg: cp -r ~/catkin_ws/src/multi_tracker/examples/demo ~/)
  2. play the sample delta video according to instructions above
  3. create the empty directory ~/demo/demo_1/data
  4. from inside ~/demo/demo_1/src folder, run roslaunch tracking_launcher.launch
    This will load all the yaml (parameter) files, and launch the tracker, data_association, save_hdf5_data, delta_video, delta_video saving, and liveviewer nodes.
  5. After the flies have all moved from their original positions, with the liveviewer window selected, press 'a'. This will create a new background image, effectively erasing the flies' starting positions from the background.
  6. Hit control-c to stop the nodes (and cease collecting data).
  7. Rename the data directory
    (eg: mv ~/demo/demo_1/data ~/demo/demo_1/test_data)

Use your own camera

Set up your own camera to publish image topics to ROS. See, for example:

Point grey cameras
Notes: you may need to add a udev rule. For the Firefly camera, copy the rule multi_tracker/rules/99-point-grey-firefly.rules to /etc/udev/rules.d, restart your computer, and replug your camera. For other USB cameras, plug in the camera, run dmesg and note the idProduct and idVendor values for the camera, make the appropriate changes to multi_tracker/rules/99-point-grey-firefly.rules, copy that new rule over to /etc/udev/rules.d, restart, and replug your camera.

GigE cameras, e.g. Basler Ace

Check the image topic that your camera publishes to using rostopic list. If an image_mono topic is available, use that one. Edit the tracker_parameters.yaml file in ~/demo/demo_1/src to match that image topic. Now, with the camera topic running and an empty data directory in ~/demo/demo_1, launch the tracking_launcher.launch file.

Data Analysis

Follow the instructions from the tracking quick start example (using the sample delta video, for example). Now you should have a data directory (~/demo/demo_1/test_data) with an hdf5 file (your tracked data), a delta_video bag file, and background images for the delta video and tracking. Copy the configuration file multi_tracker/examples/sample_data/config_20160412_134708_N1.py to your data directory. Rename the file such that the date-time stamp matches the prefix of the hdf5 file.

Open the jupyter notebook, update the data path, and run each segment for simple examples: multi_tracker/examples/data_processing/example_data_processing.ipynb

Data visualization and editing GUI

Follow the Data Analysis steps. Then, from multi_tracker/multi_tracker_analysis, run: python ./trajectory_viewer_gui_v2.py --path=[DATA PATH]

The bottom panel shows the time history of the number of trajectories. Drag / resize the light blue box to select a region of time. Trajectories in that time region are displayed in the top panel. Move / zoom / change aspect of the top panel using the left mouse / scroll wheel / right mouse.

  • Load / Play movie: loads delta video bag and plays back the movie. May produce erratic behavior in the case of the included example because of colliding delta video topics. In original data it works fine.
  • Join Trajectories: Click "select trajecs" and select 2 or more trajectories (IDs appear in the box, and the selected trajectory is bold). Click "join trajecs". Instructions are saved to an instruction file - original data is never modified.
  • Cut Trajectories: Click "cut trajecs" and click on a trajectory where you wish to clip it. The trajectory will be cut into two.
  • Delete Trajectories: Click "delete trajecs" and click on a trajectory to delete it. Or: click "select trajecs" and select a trajectory, then click "delete trajecs".
  • Erratic behavior: click "clear selection / data"
  • Annotations: select a trajectory, add a text annotation to the box, click the check box, and click "save annotation". This saves the annotation to a pickle file for use in analysis.
  • Select ALL: enter min and max trajectory lengths (units: frames) you wish to select (-1 means infinite). Click "ALL". All trajectories within the current time window that match these criteria will be selected. You can now delete or join them. [only available in full size viewer, screen > 900 px tall]
  • Helpful tips

    I already have a movie, how can I track it?
    Use the ROS package video_stream_opencv.
    My movie has one object, and it moves very fast, why can't I track it well?
    Try increasing the following parameter in the data_association_parameters.yaml file: /multi_tracker/1/data_association/n_covariances_to_reject_data: 3. This number is multiplied by the covariance of each tracked object to give an acceptance radius. Any objects in the next frame that lie within this radius are considered viable. The nearest of these objects will be chosen for the update step. Larger values mean more continuous tracking, but a higher likelihood of erroneous tracking if there are multiple objects or noise (a toy sketch of this rule is shown after these tips). To smooth the tracking, adjust the coefficients for R and Q in the Kalman parameters.
    My movie is playing, but the liveviewer does not work.
    Try changing the camera_encoding parameters in the liveviewer_parameters.yaml to match your camera (e.g. switch from mono8 to rgb).
    How can I batch-run multi-tracker on a bunch of videos?
    Use this script: https://github.com/eleanorlutz/multivideo_multitracker
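
A toy numpy illustration of the gating rule described in the tips above (this is not the actual data_association code): detections farther from a tracked object's predicted position than n_covariances times its covariance are rejected, and the nearest surviving detection is used for the update step.

import numpy as np

def gate_and_pick(predicted_xy, covariance, detections_xy, n_covariances=3):
    # Accept detections inside the acceptance radius and return the nearest.
    radius = n_covariances * covariance
    d = np.linalg.norm(detections_xy - predicted_xy, axis=1)
    inside = np.where(d <= radius)[0]
    if len(inside) == 0:
        return None  # no viable detection this frame
    return detections_xy[inside[np.argmin(d[inside])]]

predicted = np.array([10.0, 5.0])
detections = np.array([[11.0, 5.5], [40.0, 40.0]])
print(gate_and_pick(predicted, covariance=2.0, detections_xy=detections))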