aXeleRate – Keras-Based Framework for AI on the Edge

Train your machine learning models in Google Colab and easily optimize them for hardware-accelerated inference!

Original article by Dmitry Maslov published on Hackster.io

These might be very difficult times for many of us, depending on the part of the world you live in. Due to the worsening coronavirus pandemic, many countries have implemented strict lockdown policies. I myself recently had to spend 14 days in quarantine, staying indoors 24 hours a day. I decided to make the most of it and continue working on the things I am excited about, namely robotics and machine learning. And this is how aXeleRate was born.

aXeleRate started as a personal project of mine for training YOLOv2-based object detection networks and exporting them to the .kmodel format to be run on the K210 chip. I also needed to train image classification networks. And sometimes I needed to run inference with Tensorflow Lite on a Raspberry Pi. As a result I had a whole bunch of disconnected scripts with somewhat overlapping functionality. So I decided to fix that by combining all the elements in an easy-to-use package and, as a bonus, making it fully compatible with Google Colab.

aXeleRate is meant for people who need to run computer vision applications (image classification, object detection, semantic segmentation) on edge devices with hardware acceleration. It has an easy configuration process through a config file or a config dictionary (for Google Colab) and automatic conversion of the best model of the training session into the required file format. You put properly formatted data in, start the training script and come back to find a converted model that is ready for deployment on your device!
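
For object detection, "properly formatted" means a PASCAL VOC style dataset: one folder of images and one folder of matching .xml annotations, plus validation counterparts. The folder names below are illustrative; the sample datasets bundled with aXeleRate follow this general layout.

person_detector/
    imgs/                 # training images, e.g. image_001.jpg
    anns/                 # matching PASCAL VOC annotations, e.g. image_001.xml
    imgs_validation/      # validation images
    anns_validation/      # matching validation annotations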

Here is a quick rundown of the features:

Key Features

  • Supports multiple computer vision models: object detection (YOLOv2), image classification, semantic segmentation (SegNet-basic)
  • Different feature extractors to be used with the above network types: Full Yolo, Tiny Yolo, MobileNet, SqueezeNet, VGG16, ResNet50, and Inception V3.
  • Automatic conversion of the best model of the training session. aXeleRate will download the suitable converter automatically.
  • Currently supports trained model conversion to the .kmodel (K210) and .tflite formats. Support is planned for .tflite (Edge TPU) and .pb (TF-TRT optimized).
  • Model version control made easier. Keras model files and converted models are saved in the project folder, grouped by the training date. Training history is saved as a .png graph in the model folder.
  • Two modes of operation: locally, with the train.py script and a .json config file, or remotely, tailored for Google Colab, with a module import and a dictionary config (sketched below).
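
To give a feel for the configuration, here is a minimal sketch of a config dictionary for a person detector. It follows the key layout of aXeleRate's example configs at the time of writing; treat the exact field names and values as illustrative and check the configs folder in the repository for the current format. Locally, the same settings would live in a .json file passed to train.py with the -c flag.

config = {
    "model": {
        "type":         "Detector",        # or "Classifier" / "Segnet"
        "architecture": "MobileNet7_5",    # feature extractor to use
        "input_size":   224,
        # Default YOLOv2 anchors; the same ten values appear
        # in the MicroPython code at the end of the article
        "anchors":      [0.57273, 0.677385, 1.87446, 2.06253, 3.33843,
                         5.47434, 7.88282, 3.52778, 9.77052, 9.16828],
        "labels":       ["person"]
    },
    "train": {
        "actual_epoch":       50,
        "train_image_folder": "person_detector/imgs",
        "train_annot_folder": "person_detector/anns",
        "valid_image_folder": "person_detector/imgs_validation",
        "valid_annot_folder": "person_detector/anns_validation",
        "batch_size":         32,
        "saved_folder":       "person_detector"
    },
    "converter": {
        "type": ["k210", "tflite"]         # formats to convert the best model to
    }
}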

In this article we’re going to train a person detection model for use with the K210 chip on the cyberEye board installed on the M.A.R.K. mobile platform. M.A.R.K. (I’ll call it MARK in the text) stands for Make a Robot Kit, and it is an educational robot platform in development by TinkerGen Education. I take part in the development of MARK and we’re currently preparing to launch a Kickstarter campaign. One of the main goals of MARK is to make machine learning concepts and workflows more transparent and easier to understand and use for teachers and students.

As mentioned before, aXeleRate can be run on a local computer or in Google Colab. We’ll opt for Google Colab, since it simplifies the preparation steps.

Let’s open the sample notebook

Go through the cells one by one to get an understanding of the workflow. This example trains a detection network on a tiny dataset that is included with aXeleRate. For our next step we need a bigger dataset to train an actually useful model.
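
At a high level, the Colab workflow boils down to a couple of cells along these lines. This is a hedged sketch: the install command and the setup_training entry point follow aXeleRate's README, but check the notebook itself for the current API.

# Install aXeleRate into the Colab runtime straight from GitHub
!pip install git+https://github.com/AIWintermuteAI/aXeleRate

# Import the training entry point and run a session
# using a config dictionary like the one sketched earlier
from axelerate import setup_training
model_path = setup_training(config_dict=config)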

Open the notebook I prepared. Follow the steps there, and in the end, after a few hours of training, you will get .h5, .tflite and .kmodel files saved in your Google Drive. Download the .kmodel file, copy it to an SD card and insert the SD card into the mainboard. In our case with M.A.R.K. that is a modified version of Maixduino called cyberEye.
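
One detail worth flagging here: on the K210 the model is loaded from the SD card by path, so the filename you copy over must match what your code expects. In the MicroPython example at the end of this article the model is loaded like this (model_people_end.kmodel is simply the name I used; any name works as long as the code agrees):

task = kpu.load('/sd/model_people_end.kmodel')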

MARK is an educational robot for introducing students to the concepts of AI and machine learning. So, there are two ways to run the custom model you just created: using MicroPython code or TinkerGen’s graphical programming environment, called Codecraft. While the first one is undoubtedly more flexible in the ways you can tweak the inference parameters, the second is more user-friendly.

If you opt for the graphical programming environment, go to the Codecraft website, https://ide.tinkergen.com, and choose MARK (cyberEye) as the target platform.

Click on Add Extension and choose Custom models, then click on Object Detection model.

There you will need to enter the filename of the model on the SD card, the name of the model as it will appear in the Codecraft interface (it can be anything; let’s enter Person detection), the category name (person) and the anchors. We didn’t change the anchor parameters during training, so we will just use the default ones.

After that you will see that three new blocks have appeared. Let’s choose the one that outputs the X coordinate of a detected object and connect it to the Display… at row 1 block. Put that inside the loop and upload the code to MARK. You should see the X coordinate of the center of the bounding box around the detected person on the first row of the screen. If nothing is detected, it will show -1.

This allowed us to run model inference in the graphical programming environment. Now we’ll move to MicroPython and implement a more advanced solution. Download and install the MaixPy IDE from here.

Open the example code I enclose with the article. The code logic is as follows:

1) We check for detected people in the find_center() function. If people are found, it returns the x-coordinate of the center of the biggest detected bounding box. If no people are detected, the function returns -1.

2) If the find_center() function returns an x-coordinate, we check whether it is close to the image center (in the code below that means between 100 and 124 on the 224-pixel-wide frame), to the left of it or to the right of it, and control the motors accordingly.

3) If the find_center() function returns -1, we use the tilt servo to do a 40-degree scan for people with the camera.

4) If the tilt scan is unable to find people, the robot does up to two 180-degree pan scans, stopping as soon as a person is found.

5) Finally, if the pan scans don’t detect any people, the robot starts rotating in place clockwise, while still performing person detection.

Here is the final result of the MicroPython code in action! It could still be improved to make it faster and more robust.

aXeleRate is still a work-in-progress project. I will be making changes from time to time, and if you find it useful and can contribute, PRs are very much welcome! In the near future I will be making another video about inference on a Raspberry Pi 4 with and without hardware acceleration. We will have another WIP hardware appearance for that one. Which one? Can’t tell you, hush-hush!

Stay tuned for more articles from me and updates on the MARK Kickstarter campaign.

For more information on the Grove Zero series, Codecraft and other hardware for makers and STEM educators, visit our website, https://tinkergen.com/.

Until next time, and stay safe from the coronavirus!

Code section

import sensor, image, lcd, time, utime
from maix_motor import Maix_motor
import KPU as kpu

def find_center():
    # Grab a frame, resize it to the network input size and rotate it
    # to match the camera mounting
    img = sensor.snapshot()
    img = img.resize(224, 224)
    img = img.rotation_corr(z_rotation=90.0)
    img.pix_to_ai()
    code = kpu.run_yolo2(task, img)
    if code:
        # Draw every detection, then return the x-coordinate of the center
        # of the biggest bounding box
        max_area = 0
        x_center = -1
        for i in code:
            img.draw_rectangle(i.rect(), color=(0, 255, 0))
            img.draw_string(i.x(), i.y(), classes[i.classid()], color=(255, 0, 0), scale=3)
            area = i.w() * i.h()
            if area > max_area:
                max_area = area
                x_center = i.x() + i.w() / 2
        lcd.display(img)
        return x_center
    else:
        lcd.display(img)
        return -1


def scan_pan():
    # Sweep the pan servo through 180 degrees, looking for a person.
    # If one is found near the image center, turn the chassis towards it,
    # start driving forward and report success.
    Maix_motor.motor_run(0, 0, 0)
    Maix_motor.servo_angle(2, 110)
    for i in range(0, 180, 5):
        pos = find_center()
        if 100 <= pos <= 124:
            Maix_motor.servo_angle(1, 90)
            if i >= 90:
                Maix_motor.motor_right(40, 0)
                Maix_motor.motor_left(-40, 0)
                utime.sleep_ms((i - 90) * 12)
            if i < 90:
                Maix_motor.motor_right(-40, 0)
                Maix_motor.motor_left(40, 0)
                utime.sleep_ms((90 - i) * 12)
            Maix_motor.motor_motion(1, 1, 0)
            time.sleep(0.3)
            return True
        else:
            Maix_motor.servo_angle(1, i)
    return False


def scan_tilt():
    # Sweep the tilt servo through 40 degrees (100 to 140),
    # looking for a person with the pan servo centered
    Maix_motor.servo_angle(1, 90)
    for i in range(100, 140, 2):
        pos = find_center()
        if pos > 0:
            return True
        else:
            Maix_motor.servo_angle(2, i)
    return False

def scan_rotate():
    # Center the camera and rotate the chassis in place clockwise
    # until a person shows up in the frame
    Maix_motor.servo_angle(1, 90)
    Maix_motor.servo_angle(2, 110)
    Maix_motor.motor_motion(1, 3, 0)
    while True:
        pos = find_center()
        if pos > 0:
            break

lcd.init()
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.set_vflip(1)
sensor.run(1)
Maix_motor.servo_angle(1, 90)   # center the pan servo
Maix_motor.servo_angle(2, 110)  # center the tilt servo
DEBUG = 0
classes = ["person"]
# Load the converted model from the SD card and initialize YOLOv2
# with the default anchors
task = kpu.load('/sd/model_people_end.kmodel')
anchor = (0.57273, 0.677385, 1.87446, 2.06253, 3.33843, 5.47434, 7.88282, 3.52778, 9.77052, 9.16828)
kpu.init_yolo2(task, 0.3, 0.3, 5, anchor)
while True:
    x_center = find_center()
    if not DEBUG:
        # Steer towards the person: drive forward when the box is centered,
        # turn towards it when it is off to the left or the right
        if 100 <= x_center <= 124: Maix_motor.motor_motion(1, 1, 0)
        if 0 < x_center < 100: Maix_motor.motor_motion(1, 4, 0)
        if x_center > 124: Maix_motor.motor_motion(1, 3, 0)
        if x_center < 0:
            # Nobody in frame: tilt scan first, then up to two pan scans,
            # then rotate in place until a person is found
            if not scan_tilt():
                if not scan_pan() and not scan_pan():
                    scan_rotate()
kpu.deinit(task)
