Object tracking allows us to identify objects and locate them across the frames of an image sequence or video. A tracker can follow multiple objects at once: the algorithm detects each object, assigns it an ID, and follows it through the rest of the file.
Object tracking works by detecting the object, comparing it in the current frame against previous frames, and visualising its movement by drawing annotations on the frames such as bounding boxes, rectangles, or circles.
The demand for object tracking is high in deep learning because many real-time applications rely on it. Applications of object tracking include:
- Surveillance and security
- Traffic monitoring
- Automated vehicle systems
- Producing heat maps of the tracked object
- Real-time ball tracking in sports
- Activity recognition
- Crowd counting
To implement object tracking we use two algorithms:
- Kernelized Correlation Filters (KCF)
- Discriminative Correlation Filter with Channel and Spatial Reliability (CSRT)
Among the available tracking methods, KCF and CSRT are the most accurate once all the pros and cons are weighed. KCF is very fast at processing video, while CSRT is somewhat slower but tracks the object more precisely.
You can write the program in any IDE such as PyCharm or Anaconda, but note that these tracker functions are not built into OpenCV versions below 3.4; they ship with the contrib modules. To install them, run
python -m pip install opencv-contrib-python
in your command line.
Kernelized Correlation Filters (KCF) Object Tracking
KCF object tracking is fast and the algorithm is easy to implement, but when an object moves very quickly KCF cannot track it precisely.
The tracker works by comparing the object's position in the current frame with the previous frame; the overlap between the two produces the correlation response from which the new position is computed.
KCF is based on several layered filtering steps: it builds a filter from the region inside the bounding box drawn around the object and keeps matching that pattern in the upcoming frames.
KCF has a high tracking rate as long as there is no obstacle between the camera and the region of interest. However, if an obstacle covers the target area, as shown in figure 1, it loses the object and starts tracking an erroneous area. The cause is that KCF's response map is constructed from the tracking area alone and discards all other regions.
KCF places candidate positions for the object on this map and predicts where it will be; when an obstacle covers the map, the candidates change and the tracker starts following other objects.
To load the KCF tracker we can use

```python
import cv2

tracker = cv2.TrackerKCF_create()
```
To select the tracking object, draw a bounding box along its borders using cv2.selectROI:

```python
video = cv2.VideoCapture(r'C:\race.mp4')
ret, frame = video.read()

# Select the region of interest and print the co-ordinates of the box
bbox = cv2.selectROI(frame)
print("Co-ordinates of the object in frame_1:- ", bbox)
```
Determining the ROI
Co-ordinates of the object in frame_1:- (662, 245, 82, 201)
Once the tracking object is selected, you can update its box on each frame of the source file (the box is returned as (x, y, width, height)) using

```python
ok, bbox = tracker.update(frame)
```

As the frames in the source are analysed, each time the object is located we can draw a bounding box such as a rectangle around it:
```python
import cv2

tracker = cv2.TrackerKCF_create()
video = cv2.VideoCapture(r'C:\race.mp4')
ret, frame = video.read()

# Select the object to track in the first frame
bbox = cv2.selectROI(frame)
print(bbox)
tracker.init(frame, bbox)

while True:
    ret, frame = video.read()
    if not ret:
        break
    ok, bbox = tracker.update(frame)
    if ok:
        (x, y, w, h) = [int(v) for v in bbox]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2, 1)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video.release()
cv2.destroyAllWindows()
```
Output:- Here are a few shots of the object being tracked in the video file.
Discriminative Correlation Filter with Channel and Spatial Reliability (CSRT) Object Tracking
CSRT object tracking is a little slower and more complex than KCF in terms of ease of implementation and the processing involved. Comparing the tracking accuracy of the two, CSRT tracks objects much more precisely, even when the frame rate is very high.
CSRT tracks the object using the following steps:
- From left to right: the training patch with the bounding box of the object
- HOG features are extracted to describe the object
- A spatial reliability map is generated (using a Markov random field model) to estimate per-pixel probabilities
- The training patch is masked with the resulting confidence map
For implementing CSRT object tracking we load the tracker using
cv2.TrackerCSRT_create(), select the object and draw the bounding box around it; the tracking part remains the same as in the KCF implementation.
```python
import cv2

tracker = cv2.TrackerCSRT_create()
```
```python
import cv2

tracker = cv2.TrackerCSRT_create()
video = cv2.VideoCapture(r'C:\race.mp4')
ret, frame = video.read()

# Select the object to track in the first frame
bbox = cv2.selectROI(frame)
print(bbox)
tracker.init(frame, bbox)

while True:
    ret, frame = video.read()
    if not ret:
        break
    ok, bbox = tracker.update(frame)
    if ok:
        (x, y, w, h) = [int(v) for v in bbox]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2, 1)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video.release()
cv2.destroyAllWindows()
```
Selecting object of interest
Output:- Here, even though we selected a small area as our object, CSRT tracks it precisely without fail until the end of the video. If we select the same small object with KCF, the algorithm fails to track it because the frame rate is too high for it.