Tracking Motion without Neural Networks: Optical Flow

Vipul Vaibhaw
Apr 12, 2020


Object tracking or understanding motion is one of the key problems in computer vision.

There are numerous object tracking algorithms available these days. These algorithms perform really well; they are state-of-the-art.

Let’s list some interesting algorithms —

  • DeepSORT
  • ROLO
  • Re3

DeepSORT is one of the finest object tracking algorithms. However, DeepSORT makes some assumptions; for example, there should be no ego-motion. Ego-motion, in simple words, means motion of the camera itself, so the assumption is that the camera stays stationary.

Also, DeepSORT tracks only people. For any other object class we would need to retrain it.

Errgh! Do we have a simpler way to understand and even track simple motion? A method which can work with ego-motion?

Lucas–Kanade method for Optical Flow

Optical Flow — the actual or observed (relative) motion between objects and the observer (camera) is known as optical flow. Even if the camera is moving and the object is stationary, we still say that we have optical flow.

Coming back to the LK optical flow method: optical flow can be calculated by comparing two consecutive frames from the video.

The Lucas–Kanade method is based on the brightness constancy assumption. The key idea is that pixel-level brightness won’t change much in just one frame. Another assumption the LK method makes is that neighbouring pixels inside an object move in a similar way.
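
In symbols, brightness constancy says that a pixel keeps its intensity as it moves between frames:

I(x, y, t) ≈ I(x + dx, y + dy, t + dt)

A first-order Taylor expansion turns this into the optical flow equation Ix·u + Iy·v + It ≈ 0, where (u, v) is the pixel velocity and Ix, Iy, It are the image derivatives. That is one equation with two unknowns, so LK leans on the second assumption: it collects this equation from every pixel in a small window and solves the stack by least squares.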

Hence we need to give the LK method some key-points on which this computation can happen.

People often use Shi-Tomasi corner detection to get key points on the corners of objects and pass them to the LK method.

# Let's import essential modules
import numpy as np
import cv2
# Initialise the video we want to work on
cap = cv2.VideoCapture('test_vid.mp4')
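
If you don’t have a test video handy, you can pass 0 to cv2.VideoCapture to read from the default webcam instead.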

Now we will set some params for corner detection

feature_params = dict( maxCorners = 100,   # track at most 100 corners
                       qualityLevel = 0.1, # drop corners weaker than 10% of the best one
                       minDistance = 10,   # minimum pixel distance between corners
                       blockSize = 10 )    # neighbourhood size for the corner measure

Some params for LK optical flow as well

lk_params = dict( winSize  = (10,10), # search window size at each pyramid level
                  maxLevel = 1,       # number of extra pyramid levels (0 = no pyramid)
                  # stop after 10 iterations or when the window moves by less than 0.03 px
                  criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))
# Create some random colors
color = np.random.randint(0,255,(100,3))
# Take the first frame and find corners in it
ret, old_frame = cap.read()
old_gray = cv2.cvtColor(old_frame, cv2.COLOR_BGR2GRAY)
p0 = cv2.goodFeaturesToTrack(old_gray, mask = None, **feature_params)
# Create a mask image for drawing purposes
mask = np.zeros_like(old_frame)

Now we will start applying LK optical flow on the video

while cap.isOpened():
    ret, frame = cap.read()
    if not ret: # stop when the video ends
        break
    frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # calculate optical flow
    p1, st, err = cv2.calcOpticalFlowPyrLK(old_gray, frame_gray, p0, None, **lk_params)
    # Select good points (st == 1 means the point was tracked successfully)
    good_new = p1[st==1]
    good_old = p0[st==1]
    # draw the tracks
    for i,(new,old) in enumerate(zip(good_new,good_old)):
        a,b = new.ravel().astype(int)
        c,d = old.ravel().astype(int)
        mask = cv2.line(mask, (a,b),(c,d), color[i].tolist(), 2)
        frame = cv2.circle(frame,(a,b),5,color[i].tolist(),-1)
    img = cv2.add(frame,mask)
    cv2.imshow('frame',img)
    k = cv2.waitKey(30) & 0xff
    if k == 27: # press Esc to quit
        break
    # Now update the previous frame and previous points
    old_gray = frame_gray.copy()
    p0 = good_new.reshape(-1,1,2)
cv2.destroyAllWindows()
cap.release()
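
One practical note: tracked points get lost over time (their status flag st turns 0), so the list of tracks shrinks as the video plays. A common fix is to re-run Shi-Tomasi inside the loop whenever too few points survive. Here is a sketch; the threshold of 10 is an arbitrary choice, not part of the script above:

# Replace the plain "p0 = good_new.reshape(-1,1,2)" update with something like:
if len(good_new) < 10: # arbitrary threshold, tune for your video
    p0 = cv2.goodFeaturesToTrack(frame_gray, mask = None, **feature_params)
    mask = np.zeros_like(frame) # reset the drawn trails too
else:
    p0 = good_new.reshape(-1,1,2)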

Here is an example output —

As we can see, the person is not moving but we (the camera) are moving forward; hence the flow is backwards. This method can also track the jerks which happen because of the rough surface.

We need to note that Lucas–Kanade has a drawback: it doesn’t perform well with rapid motion. Hence the output above is not that clean.


How can we improve Lucas–Kanade? We can use pyramids (a topic for another blog). We can combine the LK method with Kalman filters as well.
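
As a quick taste of the pyramid idea: cv2.calcOpticalFlowPyrLK already supports pyramids through the maxLevel parameter, so a minimal sketch of handling larger motion is just raising it and widening the search window (the numbers below are illustrative, not tuned values):

# More pyramid levels let LK catch larger motion between frames
lk_params = dict( winSize  = (21,21), # wider search window
                  maxLevel = 3,       # coarse-to-fine over 4 pyramid levels
                  criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03))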
