Course notes: Drone/Python

Preamble

Notes from Drone Programming With Python Course | 3 Hours | Including x4 Projects (2021).

Github: Object tracking, Face tracking

Use of the Python module djitellopy with the Ryze Tello.

See also

Tello

Courses

Related courses – videos

Demo video

Path Visualizer – Drone Programming Course Demo

Coding

Courses

Links

Ryze Tello video reviews

Using djitellopy with other drones?

Unfortunately, djitellopy doesn’t seem portable; it doesn’t even appear to work with other DJI models. However, it is possible to use Python with the ArduPilot:


Notes from Tello video course

Types of drone

  • Quadcopter
  • Hexacopter
  • Octocopter

Components of a drone

  • Frame
    • Carbon Fiber
    • Wood
    • 3D printed
  • Motor
    • Brushed – simple, cheap
    • Brushless – power/weight ratio, expensive
  • Propellers
    • Generate lift
    • Clockwise/counter-clockwise – to stop the drone rotating.
    • Multiple blades. More blades give more thrust, but are less efficient.
  • ESC (Electronic Speed Controller)
    • Controls the speed of a motor
    • Converts the DC supply into the AC waveform a brushless motor needs
    • Single ESCs, or one board with multiple ESCs
  • PDB (Power Distribution Board)
    • Distributes power from the battery to all components, including the ESCs and FC
  • Flight controller (FC)
    • Determines the speed of the motors, from the sensors and the transmitter
  • Battery
    • Light and efficient for max flight time
  • Receiver
    • Receives the control signals sent between the remote (transmitter) and the drone
  • Camera
    • SD card or antennas
  • VTX (video transmitter)
    • Sends video to an FPV headset/remote or cellphone
  • Sensors
    • Pressure – altitude
    • GPS – position
    • IMU – acceleration and angles
  • Transmitter (TX) – the remote control

How a drone flies

Time: 6’0”

  • 4 degrees of freedom – 3 translations and 1 rotation (yaw)
  • 2 props spin clockwise, 2 counter-clockwise

Movement

  • up – increase all motor speeds
  • down – decrease all motor speeds
  • left – slow the left motors, increase the right motors (the drone tilts and slides left)
  • right – slow the right motors, increase the left motors
  • back – slow the rear motors, increase the front motors
  • forwards – slow the front motors, increase the rear motors
  • rotate – to rotate clockwise, slow the clockwise propellers and increase the counter-clockwise propellers, and vice versa
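The movement rules above can be sketched as a simple motor mixer. This is not from the course – the function name, layout, and ±1 stick ranges are illustrative assumptions, just to show how each command biases the four motor speeds:

```python
# A minimal sketch (not from the course) of how a flight controller mixes
# stick inputs into the four motor speeds of a quadcopter.
# Motor layout: FL (front-left, CW), FR (front-right, CCW),
#               RL (rear-left, CCW), RR (rear-right, CW).

def mix(throttle, roll, pitch, yaw):
    """Return {motor: speed} for stick inputs in the range -1..1.

    throttle: up/down (all motors together)
    roll:     +1 = move right   -> slow the right side, speed up the left
    pitch:    +1 = move forward -> slow the front, speed up the rear
    yaw:      +1 = rotate clockwise -> slow the CW props, speed up the CCW props
    """
    return {
        "FL": throttle + roll - pitch - yaw,  # front-left,  CW
        "FR": throttle - roll - pitch + yaw,  # front-right, CCW
        "RL": throttle + roll + pitch + yaw,  # rear-left,   CCW
        "RR": throttle - roll + pitch - yaw,  # rear-right,  CW
    }
```

For example, `mix(0.5, 1, 0, 0)` (move right) speeds up both left motors and slows both right motors, while a pure throttle input drives all four equally.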

Drone used in course

  • Tello drone $85-135 (now $99-149) for the Regular and Jumbo packages. DJI/Intel technology.
  • Brushed motors
  • No GPS

Python

3.7.1 (works with OpenCV)

brew install --build-from-source python@3.7

This installed v3.7, 3.8 and 3.9

See How do I “replay” the “Caveats” section from a homebrew recipe

IDE

PyCharm – Community version

Basic movements

Time: 19’30”

DJITelloPy – Python wrapper for the Tello SDK

There is also DJITelloPy2

New Project – Tello Course, delete main.py file

Right click the project, select New Python File – BasicMovements (no need to add .py, it is added automatically by the IDE).

Install djitellopy library (package), File>settings>project>Project interpreter> Add>DJITello>click djitellopy>Install

Note: As there was no File>Settings menu item (with either the project or a Python file selected), I had to use File>New Project Settings>Preferences for new projects>Project Interpreter, select TelloCourse, and then the Add became available.

Install opencv-python library (package)

Hit the OK button.

Code


Note: Ctrl-click any function name shows functions available in the documentation

# BasicMovements.py

from djitellopy import tello
from time import sleep

drone = tello.Tello()
drone.connect()

print(drone.get_battery())

# Too simple
#drone.move_forward(30)
# Control motors directly instead
drone.takeoff()
drone.send_rc_control(0, 50, 0, 0)  # forward at 50%
sleep(2)
drone.send_rc_control(0, 0, 50, 0)  # up at 50%
sleep(2)
drone.send_rc_control(0, 0, 0, 0)   # stop before landing
drone.land()

Connect PC to Tello Wi-Fi

Right click> run BasicMovements.py

The rc commands will be used to control the motors, rather than the simple move commands.
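Each of the four rc channels (left/right, forward/back, up/down, yaw) expects a velocity in the range -100 to 100. A small helper, not from the course, for keeping computed values inside that range before passing them to send_rc_control:

```python
# Small helper (not from the course): clamp a computed rc channel value to the
# -100..100 range the Tello's rc command expects.
def clamp_rc(value, limit=100):
    """Return value truncated to an int and clamped to [-limit, limit]."""
    return max(-limit, min(limit, int(value)))
```

For example, `clamp_rc(150)` returns 100, so an over-large controller output never produces an out-of-range command.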

Image capture

  • Time: 31’26”
  • Capture image and process image
  • Right Click Project> New Python file: ImageCapture (no need to add .py)
# ImageCapture.py

from djitellopy import tello
import cv2

drone = tello.Tello()
drone.connect()

print(drone.get_battery())

drone.streamon()
while True:
    image = drone.get_frame_read().frame
    image = cv2.resize(image, (360, 240))
    cv2.imshow("Image", image)
    cv2.waitKey(1)

Right click and Run ImageCapture

Keyboard control

  • Time: 36’28”
  • Creating a Python module.
  • Right click project and new Python file: KeyPressModule (no need to add .py)
  • Uses pygame package
# KeyPressModule.py

import pygame

def init():
    pygame.init()
    window = pygame.display.set_mode((400,400))

def getKey(keyName):
    answer = False
    for event in pygame.event.get():
        pass
    keyInput = pygame.key.get_pressed()
    myKey = getattr(pygame, 'K_{}'.format(keyName))
    if keyInput[myKey]:
        answer = True
    pygame.display.update()
    return answer

def main():
    print(getKey("a"))
    if getKey("LEFT"):
        print ("Left key pressed")
    if getKey("RIGHT"):
        print ("Right key pressed")

# If running module as main file
if __name__ == '__main__':
    init()
    while True:
        main()

Right click project> New python file: KeyboardControlTest (no need to add .py)

# KeyboardControlTest.py

import KeyPressModule as kp

kp.init()

while True:
    print(kp.getKey("s"))

Right click project> New python file: KeyboardControl (no need to add .py)

# KeyboardControl.py

from djitellopy import tello
import KeyPressModule as kp
from time import sleep

kp.init()
drone = tello.Tello()
drone.connect()

print(drone.get_battery())

drone.takeoff()

def getKeyboardInput():
    lr, fb, ud, yv = 0, 0, 0, 0
    speed = 50
    if kp.getKey("LEFT"):
        lr = -speed
    if kp.getKey("RIGHT"):
        lr = speed
    if kp.getKey("DOWN"):
        fb = -speed
    if kp.getKey("UP"):
        fb = speed
    if kp.getKey("s"):
        ud = -speed
    if kp.getKey("w"):
        ud = speed
    if kp.getKey("d"):
        yv = -speed
    if kp.getKey("a"):
        yv = speed
    if kp.getKey("q"):
        drone.land()
        sleep(3)
    if kp.getKey("e"):
        drone.takeoff()

    return lr, fb, ud, yv

while True:
    rc = getKeyboardInput()
    drone.send_rc_control(rc[0], rc[1], rc[2], rc[3])
    sleep(0.05)

Done!

Note that sleep(3) is added after land(), which isn’t shown in the video.

Project 1 – Surveillance Drone Project

  • Time: 59’17”
  • Right click the Tello Course project>New Python file: Project-KeyboardControlImageCapture (no need to add .py).
  • Combining the ImageCapture and the KeyControl scripts:
# KeyboardControlImageCapture.py

from djitellopy import tello
import cv2
import KeyPressModule as kp
import time

global image

kp.init()
drone = tello.Tello()
drone.connect()

print(drone.get_battery())

drone.takeoff()

def getKeyboardInput():
    lr, fb, ud, yv = 0, 0, 0, 0
    speed = 50
    if kp.getKey("LEFT"):
        lr = -speed
    if kp.getKey("RIGHT"):
        lr = speed
    if kp.getKey("DOWN"):
        fb = -speed
    if kp.getKey("UP"):
        fb = speed
    if kp.getKey("s"):
        ud = -speed
    if kp.getKey("w"):
        ud = speed
    if kp.getKey("d"):
        yv = speed
    if kp.getKey("a"):
        yv = -speed
    if kp.getKey("q"):
# Video has this line
#       yv = drone.land()
# No need to assign to yv
        drone.land()
        time.sleep(3)
    if kp.getKey("e"):
        drone.takeoff()
# Note single quotes now used for key
    if kp.getKey('z'):
# Note: no leading slash - the video's '/Resources/Images/' is an absolute path
        cv2.imwrite(f'Resources/Images/{time.time()}.jpg', image)
        time.sleep(0.3)  # To stop multiple images being saved

    return lr, fb, ud, yv

drone.streamon()

while True:
    rc = getKeyboardInput()
    drone.send_rc_control(rc[0], rc[1], rc[2], rc[3])

    image = drone.get_frame_read().frame
    image = cv2.resize(image, (360, 240))
    cv2.imshow("Image", image)
    cv2.waitKey(1)

Done!

Note: Not shown in video, the a key is now -speed and d is now (+)speed, i.e. the opposite of the KeyboardControl.py script.

Creating directories

Before running, manually create the Resources/Images/ directory.

Or, to achieve this programmatically, see this answer to Creating a file in a non-existing folder using OpenCV in Python:

import os
dirname1 = 'Resources'
dirname2 = 'Images'
os.mkdir(dirname1)
os.mkdir(os.path.join(dirname1, dirname2))

It would be good to check whether the folder exists or not (this answer to mkdir -p functionality in Python [duplicate]):

import pathlib
pathlib.Path("/tmp/path/to/desired/directory").mkdir(parents=True, exist_ok=True)

or, from this answer (for Python >= 3.2):

os.makedirs('path/to/directory', exist_ok=True)

So:

os.makedirs('Resources/Images', exist_ok=True)

See also this answer to How can I safely create a nested directory?

The code as given in the video may fail to write the image on non-Windows platforms. This answer to OpenCV – Saving images to a particular folder of choice allows writing to work on all platforms:

import cv2
import os
img = cv2.imread('1.jpg', 1)
path = 'D:/OpenCV/Scripts/Images'
cv2.imwrite(os.path.join(path , 'waka.jpg'),img)
cv2.waitKey(0)

Done!

Or https://thispointer.com/how-to-create-a-directory-in-python

if not os.path.exists(dirName):
    os.makedirs(dirName)
    print("Directory " , dirName ,  " Created ")
else:    
    print("Directory " , dirName ,  " already exists")


Project 2 – Mapping Project

Time: 1:10:36

Odometry: convert distance (from speed × time) and angle into X,Y co-ordinates.
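The odometry boils down to one dead-reckoning update per interval; a minimal sketch of the maths used in Mapping.py below:

```python
import math

def update_position(x, y, d, angle_deg):
    """One dead-reckoning step: advance (x, y) by distance d along heading angle_deg."""
    x += int(d * math.cos(math.radians(angle_deg)))
    y += int(d * math.sin(math.radians(angle_deg)))
    return x, y
```

Starting from the map center (500, 500), moving 10 units at heading 0° gives (510, 500), and at 90° gives (500, 510).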

Right click project and new python file: Mapping (no need to add .py).

Copy KeyboardControl.py

# Mapping.py

from djitellopy import tello
import KeyPressModule as kp
import numpy as np
from time import sleep
import cv2
import math

####### PARAMETERS ######

fSpeed = 117/10  # forward speed in cm/s (commanded speed is 15 cm/s)
aSpeed = 360/10  # angular speed in °/s
interval = 0.25

dInterval = fSpeed * interval  # Distance travelled in one interval
aInterval = aSpeed * interval  # Angle rotated in one interval

#########################

x, y = 500, 500
a = 0
yaw = 0

points = [(0, 0)]
#points = [(0,0), (0,0)] # no need to have two values
kp.init()
drone = tello.Tello()
drone.connect()

print(drone.get_battery())

drone.takeoff()

def drawPoints(image, points):
#    cv2.circle(image, (points[0], points[1]), 5, (0, 0, 255), cv2.FILLED) # BGR colour, not RGB
    for point in points:
#        cv2.circle(image, (point[0], point[1]), 5, (0, 0, 255), cv2.FILLED)
        cv2.circle(image, point, 5, (0, 0, 255), cv2.FILLED)
    cv2.circle(image, points[-1], 8, (0, 255, 0), cv2.FILLED)
    cv2.putText(image, f'({(points[-1][0] - 500)/100}, {(points[-1][1] - 500)/100})',
                (points[-1][0] + 10, points[-1][1] + 30),
                cv2.FONT_HERSHEY_PLAIN, 1, (255, 0, 255), 1)

def getKeyboardInput():
    lr, fb, ud, yv = 0, 0, 0, 0
    speed = 15
    aSpeed = 50
    d = 0
    global x, y, yaw, a

    if kp.getKey("LEFT"):
        lr = -speed
        d = dInterval
        a = -180
    if kp.getKey("RIGHT"):
        lr = speed
        d = -dInterval
        a = 180
    if kp.getKey("DOWN"):
        fb = -speed
        d = dInterval
        a = 270
    if kp.getKey("UP"):
        fb = speed
        d = -dInterval
        a = -90
    if kp.getKey("s"):
        ud = -speed
    if kp.getKey("w"):
        ud = speed
    if kp.getKey("d"):
        yv = aSpeed
        yaw += aInterval
    if kp.getKey("a"):
        yv = -aSpeed
        yaw -= aInterval
    if kp.getKey("q"):
        drone.land()
        sleep(3)
    if kp.getKey("e"):
        drone.takeoff()

    sleep(interval)
    a += yaw
    x += int(d * math.cos(math.radians(a)))
    y += int(d * math.sin(math.radians(a)))
    return lr, fb, ud, yv, x, y

while True:
    rc = getKeyboardInput()
    drone.send_rc_control(rc[0], rc[1], rc[2], rc[3])
#    points = (rc[4], rc[5])
#    points.append(rc[4], rc[5])
    if points[-1][0] != rc[4] or points[-1][1] != rc[5]:
        points.append((rc[4], rc[5]))
    image = np.zeros((1000, 1000, 3), np.uint8)
    drawPoints(image, points)
    cv2.imshow("Custom Path Visualizer", image)
    cv2.waitKey(1)

Done!

  • Note: why have -90, 270, -180 and 180?
  • Note: why not declare just one global yaw, instead of both yaw and global yaw?
  • Why have global x and y when they are passed as parameters?
  • Note: Not shown in video, the a key is now -speed and d is now (+)speed, i.e. the opposite of the KeyboardControl.py script. This change was first noted in the Surveillance script, KeyboardControlImageCapture.py.
  • Why change speed, in getKeyboardInput(),  to 15? speed was initially (in KeyboardControl.py) the speed of the motors at 50%, not distance travelled.

Project 3 – Face Tracking

Time: 1:52:17

For tracking, two parameters are used:

  • Distance: the area (in pixels) the face fills in the frame determines whether to move forward or back (two red zones and one green zone).
  • Yaw: proportionally reduce the angular speed as the face nears the center of the frame; otherwise the drone will overshoot and oscillate, because it cannot stop instantly (think of turning off an oscillating fan).
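The yaw idea above can be sketched as a small PD controller (using the pid = [0.4, 0.4, 0] gains from Tracking.py below): the further the face is from the frame center, the faster the drone yaws, shrinking to zero as the error does. The function name is illustrative:

```python
def yaw_speed(face_x, frame_width, p_error, kp=0.4, kd=0.4):
    """PD sketch: error is the face's horizontal offset from the frame center.

    Returns (clamped yaw speed, error) - the error is fed back in as p_error
    on the next frame to give the derivative term.
    """
    error = face_x - frame_width // 2
    speed = kp * error + kd * (error - p_error)
    speed = max(-100, min(100, int(speed)))  # clamp to the Tello's rc range
    return speed, error
```

A face at the exact center gives zero yaw; a face at the edge saturates at the ±100 rc limit.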

Right click project and new Python file: Tracking (no need to add .py).

# Tracking.py

import cv2
import numpy as np
from djitellopy import tello
import time

drone = tello.Tello()
drone.connect()

print(drone.get_battery())

drone.streamon()
drone.takeoff()
drone.send_rc_control(0, 0, 20, 0)
time.sleep(2.5)

w, h = 360, 240
fbRange = [6200, 6800]
pid = [0.4, 0.4, 0]
pError = 0

def findFace(image):
    faceCascade = cv2.CascadeClassifier("Resources/haarcascade_frontalface_default.xml")
    imageGrey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(imageGrey, 1.2, 8)

    myFaceListCenter = []
    myFaceListArea = []

    for (x, y, w, h) in faces:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)
        cx = x + w // 2
        cy = y + h // 2
        area = w * h
        cv2.circle(image, (cx, cy), 5, (0, 255, 0), cv2.FILLED)
        myFaceListCenter.append([cx, cy])
        myFaceListArea.append(area)
    if len(myFaceListArea) != 0:
        index = myFaceListArea.index(max(myFaceListArea))
        return image, [myFaceListCenter[index], myFaceListArea[index]]
    else:
        return image, [[0, 0], 0]

def trackFace(drone, info, w, pid, pError):
    area = info[1]
    x, y = info[0]
    error = x - w // 2
    fb = 0

    aSpeed = pid[0] * error + pid[1] * (error - pError)
    aSpeed = int(np.clip(aSpeed, -100, 100))

    if fbRange[0] < area < fbRange[1]:
        fb = 0
    elif area > fbRange[1]:
        fb = -20
    elif area < fbRange[0] and area != 0:
        fb = 20

    if x == 0:
        aSpeed = 0
        error = 0

    print(fb, aSpeed, error)

    drone.send_rc_control(0, fb, 0, aSpeed)
    return error

# For (non-drone) test only
#camera = cv2.VideoCapture(0)
while True:
# For (non-drone) test only
#    _, image = camera.read()
    image = drone.get_frame_read().frame

    image = cv2.resize(image, (w, h))
    image, info = findFace(image)
    pError = trackFace(drone, info, w, pid, pError)
    print("Area", info[1], "Center", info[0])
    cv2.imshow("output", image)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        drone.land()
        break

The haarcascade file: haarcascade_frontalface_default.xml

Project 4 – Line Follower

Time: 2:32:22

A line following robot car has three sensors: left, center and right. With three binary sensors there are 8 entries in the truth table:

  1. 0 0 0   stop
  2. 0 0 1   right
  3. 0 1 0   straight
  4. 0 1 1   slight right
  5. 1 0 0   left
  6. 1 0 1   stop
  7. 1 1 0   slight left
  8. 1 1 1   stop
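The truth table above maps directly onto a Python lookup; the action strings are just labels for illustration:

```python
# The robot-car truth table as a lookup: (left, center, right) -> action.
ACTIONS = {
    (0, 0, 0): "stop",
    (0, 0, 1): "right",
    (0, 1, 0): "straight",
    (0, 1, 1): "slight right",
    (1, 0, 0): "left",
    (1, 0, 1): "stop",
    (1, 1, 0): "slight left",
    (1, 1, 1): "stop",
}

def decide(sensors):
    """Map a (left, center, right) sensor reading to a steering action."""
    return ACTIONS[tuple(sensors)]
```

The drone's LineFollower below implements the same table with an if/elif chain over the sensor output, mapping each pattern to a curve weight instead of a label.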

Time: 2:38:45 – for the line following drone

Mirror clips – the Tello’s camera faces forward, so a mirror is clipped on to point it down at the line (hence the cv2.flip() in the code below).

Divide the image into 3 (or 5) vertical strips, one per “sensor”.

Unlike the robot, there is also the problem of translation: the drone may drift sideways off the line until it can no longer see it. So, as well as rotating, we will try to find the line and keep it in the center of the frame.

Right click project and new Python file: LineFollower (no need to add .py).

#LineFollower.py

import cv2
import numpy as np
from djitellopy import tello

drone = tello.Tello()
drone.connect()

print(drone.get_battery())

drone.streamon()

drone.takeoff()

# For testing with a webcam
cp = cv2.VideoCapture(0)

# After running ColourPicker.py we have some values
hsvValues = [0, 0, 117, 179, 22, 219]
# resizeWidth MUST be divisible by the number of sensors
resizeWidth, resizeHeight = 480, 360
numberOfSensors = 3
threshold = 0.2
# The higher, the less sensitive
sensitivity = 3
# Weights => sensors: [100, 110, 010, 011, 001]
weights = [-25, -15, 0, 15, 25]
# No need for a global
# curve = 0
# Start off slow (15)
fSpeed = 15

def thresholding(image):
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    lowerWhite = np.array([hsvValues[0], hsvValues[1], hsvValues[2]])
    upperWhite = np.array([hsvValues[3], hsvValues[4], hsvValues[5]])
    mask = cv2.inRange(hsv, lowerWhite, upperWhite)
    return mask

def getContours(imageThreshold, image):
    cx = 0
    contours, hierarchy = cv2.findContours(imageThreshold, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    cv2.drawContours(image, contours, -1, (255, 0, 255), 7)
# Add if to prevent div-by-zero crash
    if len(contours) != 0:
        biggestContour = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(biggestContour)
        cx = x + w // 2
        cy = y + h // 2
        cv2.circle(image, (cx, cy), 10, (0, 255, 0), cv2.FILLED)
# Can also draw a bounding rectangle, if you want
#       cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 3)
    return cx

def getSensorOutput(imageThreshold, numberOfSensors):
    images = np.hsplit(imageThreshold, numberOfSensors)
    # width, height == shape[1], shape[0] respectively
    totalPixels = (imageThreshold.shape[1] // numberOfSensors) * imageThreshold.shape[0]
    sensorOutput = []
    for x, imagePortion in enumerate(images):
        pixelCount = cv2.countNonZero(imagePortion)
        if pixelCount > threshold * totalPixels:
            sensorOutput.append(1)
        else:
            sensorOutput.append(0)
        cv2.imshow(str(x), imagePortion)
    print(sensorOutput)
    return sensorOutput

def sendCommands(sensorOutput, cx):
    curve = 0
# Translation
    lr = (cx - resizeWidth // 2) // sensitivity
    lr = int(np.clip(lr, -10, 10))

    if -2 < lr < 2:
        lr = 0
# Rotation
    if   sensorOutput == [1, 0, 0]:
#       curve = 30
        curve = weights[0]
    elif sensorOutput == [1, 1, 0]:
        curve = weights[1]
    elif sensorOutput == [0, 1, 0]:
        curve = weights[2]
    elif sensorOutput == [0, 1, 1]:
        curve = weights[3]
    elif sensorOutput == [0, 0, 1]:
        curve = weights[4]

    elif sensorOutput == [1, 1, 1]:
        curve = weights[2]
    elif sensorOutput == [0, 0, 0]:
        curve = weights[2]
    elif sensorOutput == [1, 0, 1]:
        curve = weights[2]

# Normal operation
    drone.send_rc_control(lr, fSpeed, 0, curve)
# Rotation operation - test only
#    drone.send_rc_control(0, fSpeed, 0, curve)
# Translation operation - test only
#    drone.send_rc_control(lr, fSpeed, 0, 0)

while True:
# For testing
#    _, image = cp.read()
    image = drone.get_frame_read().frame
    image = cv2.resize(image, (resizeWidth, resizeHeight))
# Comment out for testing (without mirror)
    image = cv2.flip(image, 0)

    imageThreshold = thresholding(image)
    cx = getContours(imageThreshold, image)  # For translation
    sensorOutput = getSensorOutput(imageThreshold, numberOfSensors)  # For rotation
    sendCommands(sensorOutput, cx)
    cv2.imshow("Output", image)
    cv2.imshow("Image Path", imageThreshold)
    cv2.waitKey(1)

ColourPicker

Time: 2:48:35

Tuning script for the HSV settings in LineFollower. Right click project and new Python file: ColourPicker (no need to add .py):

# ColourPicker.py

from djitellopy import tello
import cv2
import numpy as np

frameWidth = 480
frameHeight = 360

drone = tello.Tello()
drone.connect()
print(drone.get_battery())
drone.streamon()


def empty(a):
    pass

cv2.namedWindow("HSV")
cv2.resizeWindow("HSV", 640, 240)


cv2.createTrackbar("HUE Min", "HSV", 0, 179, empty)
cv2.createTrackbar("HUE Max", "HSV", 179, 179, empty)
cv2.createTrackbar("SAT Min", "HSV", 0, 255, empty)
cv2.createTrackbar("SAT Max", "HSV", 255, 255, empty)
cv2.createTrackbar("VALUE Min", "HSV", 0, 255, empty)
cv2.createTrackbar("VALUE Max", "HSV", 255, 255, empty)

# For testing
#cp = cv2.VideoCapture(0)

frameCounter = 0

while True:
    image = drone.get_frame_read().frame

# For testing
#    _, image = cp.read()

    image = cv2.resize(image, (frameWidth, frameHeight))
# Comment out for testing (without mirror)
    image = cv2.flip(image, 0)
    imageHsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

    h_min = cv2.getTrackbarPos("HUE Min", "HSV")
    h_max = cv2.getTrackbarPos("HUE Max", "HSV")
    s_min = cv2.getTrackbarPos("SAT Min", "HSV")
    s_max = cv2.getTrackbarPos("SAT Max", "HSV")
    v_min = cv2.getTrackbarPos("VALUE Min", "HSV")
    v_max = cv2.getTrackbarPos("VALUE Max", "HSV")

    lower = np.array([h_min, s_min, v_min])
    upper = np.array([h_max, s_max, v_max])
    mask = cv2.inRange(imageHsv, lower, upper)
    result = cv2.bitwise_and(image, image, mask=mask)
    print(f'[{h_min}, {s_min}, {v_min}, {h_max}, {s_max}, {v_max}]')

    mask = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
    hStack = np.hstack([image, mask, result])
    cv2.imshow("Horizontal Stacking", hStack)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# cp.release()  # only needed for the webcam test
cv2.destroyAllWindows()

Use this script to get appropriate HSV values. When the background changes, i.e. carpet, grass, etc., you have to “re-tune” the (drone) camera using ColourPicker.py again.

Process

    1. Convert image to HSV
    2. Find contours
    3. Convert image to a box and find the center.
    4. Divide image by number of sensors
    5. Determine the number of “on” pixels, i.e. white pixels
    6. If greater than the threshold (20%), then sensor is “on”, or 1.
    7. Then move accordingly, using:
      1. the weighted movement array for rotation, and;
      2. the deviation from center for translation (plus a sensitivity modifier)

Finished!

To Do

  1. Replace small green center dot with a big red dot
  2. Add translation deviation indicator (green line in video – code not shown)
  3. Add rotation deviation indicator (not shown in video)
# Add in sendCommands()
thick = 3
# Lines for translation (cx, cy from getContours, lr from sendCommands)
cv2.line(image, (cx, cy), (cx + (cx - resizeWidth//2)//sensitivity, cy), (255, 255, 128), thick)
cv2.line(image, (cx, cy + thick), (cx + (cx - resizeWidth//2), cy + thick), (255, 128, 255), thick)
cv2.line(image, (cx, cy - thick), (cx + lr, cy - thick), (255, 128, 128), thick)
# Arc for rotation
draw_rotation_arc(image)

Arc drawing

def draw_rotation_arc(image):
    exaggerate = 2  # Exaggeration factor
    BLUE = (255, 0, 0)  # BGR
    # Ellipse parameters
    radius = 30  # Slightly more than the circle forming the dot
    center = (cx, cy)
    axes = (radius, radius)
    angle = 0
    startAngle = 270
    thickness = 10
    endAngle = startAngle + (curve * exaggerate)

    # http://docs.opencv.org/modules/core/doc/drawing_functions.html#ellipse
    cv2.ellipse(image, center, axes, angle, startAngle, endAngle, BLUE, thickness)

Drawing options:

  • Modify the start and end of the arc according to the weight (i.e. start the arc at 180° to rotate right, or 360° to rotate left), or (better still);
  • Start at 90° and draw the arc clockwise or counter-clockwise, depending upon the sign of curve, with the length of the arc depending upon the weight (i.e. the value of curve).
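The second option reduces to computing the start/end angles before calling cv2.ellipse; a sketch (the function name is made up here, and the exaggeration factor is carried over from the To Do snippet above):

```python
def rotation_arc_angles(curve, exaggerate=2, start=90):
    """Return (startAngle, endAngle) for cv2.ellipse.

    A positive curve sweeps the arc clockwise from 90°, a negative one
    counter-clockwise, and the arc length is proportional to |curve|.
    """
    return start, start + curve * exaggerate
```

For curve = +25 this gives a clockwise sweep from 90° to 140°; for curve = -25, a counter-clockwise sweep from 90° to 40°.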

Notes


End of Course
