Deep Learning Sumo Robot
Hajime! Let the DL pushing madness begin!
Original article by Dmitry Maslov published on Hackster.io
Robot-sumo, or pepe-sumo, is a sport in which two robots attempt to push each other out of a circle (in a similar fashion to the sport of sumo). The robots used in this competition are called sumobots.
From Wikipedia, the free encyclopedia
Robot sumo is a classic and one of the most popular robot competitions. There is a huge variety of builds used to accomplish the task of pushing the opponent out of the ring.
Many simpler robots rely on ultrasonic or infrared sensors to find the opponent before taking offensive action. In this article we will use the M.A.R.K. mobile platform to make a variation of a sumo bot.

M.A.R.K. (I'll call it MARK in the text) stands for Make A Robot Kit, and it is an educational robot platform in development by TinkerGen Education. I take part in the development of MARK, and we're currently preparing to launch a Kickstarter campaign. In the accompanying courses, students will learn how to complete a variety of tasks with MARK, e.g. self-driving, patrolling, and delivery service. When I was writing the course materials for the high school M.A.R.K. course, I wanted to include a challenge that was both familiar to STEM teachers and students and at the same time had a unique twist. So I decided to design a robot sumo competition where two MARK robots have to use DL models to detect the opponent. This way, two factors decide whose robot is more likely to win:
- the accuracy of the trained model
- the algorithm for offensive action after a detection is confirmed
We will use aXeleRate, a Keras-based framework for AI on the edge; the model training pipeline is very similar to what we had before with the person detector. The only problem in this case is the lack of a suitable dataset: whether you want to run a sumo competition with your own custom robot or with MARK, there are no readily available datasets to download on the internet. Creating an object detection dataset from scratch is a tedious task; usually at least 1,000 pictures per class are needed to achieve acceptable results.

Fortunately, with MARK we can use a shortcut. The MARK chassis is mostly black, so we can write a simple OpenCV script that detects the biggest black blob in the image and draws a bounding box around it.
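Such a script might look roughly like the sketch below. This is my minimal reconstruction of the idea rather than the original course script: the threshold, minimum blob area, folder names, and the class name "mark" are all assumptions you would need to tune for your own pictures.

```python
# annotate_black_blob.py -- a minimal sketch of the idea described above,
# not the original course script. The threshold, minimum blob area, folder
# names and the class name "mark" are assumptions that will need tuning.
# Assumes OpenCV 4 (findContours returns two values).
import os
import cv2

VOC_TEMPLATE = """<annotation>
  <filename>{fname}</filename>
  <size><width>{w}</width><height>{h}</height><depth>3</depth></size>
  <object>
    <name>mark</name>
    <bndbox>
      <xmin>{xmin}</xmin><ymin>{ymin}</ymin>
      <xmax>{xmax}</xmax><ymax>{ymax}</ymax>
    </bndbox>
  </object>
</annotation>
"""

def largest_dark_blob(img, thresh=60, min_area=2000):
    """Return (x, y, w, h) of the biggest dark region, or None."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Inverted threshold: pixels darker than `thresh` become foreground
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)
    if cv2.contourArea(blob) < min_area:  # ignore small shadows and noise
        return None
    return cv2.boundingRect(blob)

for fname in os.listdir("images"):
    img = cv2.imread(os.path.join("images", fname))
    box = largest_dark_blob(img) if img is not None else None
    if box is None:
        continue
    x, y, w, h = box
    xml = VOC_TEMPLATE.format(fname=fname, w=img.shape[1], h=img.shape[0],
                              xmin=x, ymin=y, xmax=x + w, ymax=y + h)
    out = os.path.splitext(fname)[0] + ".xml"
    with open(os.path.join("annotations", out), "w") as f:
        f.write(xml)
```

Pascal VOC XML is a convenient output format here because labelImg edits it natively and aXeleRate's detector training reads it directly.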
Then we process all the images of the robot we have taken with a smartphone camera in at least 4-5 different environments. The annotation files we get from the OpenCV script contain some errors (especially if the environment is dark, with a lot of shadows), so we will need to verify and correct them in the labelImg annotation tool before using them for training.
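To speed up that review, a few lines of OpenCV can page through the auto-generated boxes so that obviously wrong ones are easy to spot before you fix them in labelImg. A small viewer sketch, assuming the images/ and annotations/ layout from the script above:

```python
# review_boxes.py -- quick visual spot-check of the auto-generated boxes.
# Press any key to advance to the next image, 'q' to quit.
import os
import cv2
import xml.etree.ElementTree as ET

for fname in sorted(os.listdir("annotations")):
    root = ET.parse(os.path.join("annotations", fname)).getroot()
    img = cv2.imread(os.path.join("images", root.findtext("filename")))
    for obj in root.iter("object"):
        bb = obj.find("bndbox")
        x1, y1, x2, y2 = (int(bb.findtext(tag))
                          for tag in ("xmin", "ymin", "xmax", "ymax"))
        cv2.rectangle(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow("annotation check", img)
    if cv2.waitKey(0) & 0xFF == ord("q"):
        break
cv2.destroyAllWindows()
```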
After the dataset is ready, let's use this Colab notebook to train the model. I also share a baseline trained model, which is included with the course so that students get an idea of what normal model performance should look like. The full MicroPython code for MARK is in the code section of this article. Have a look at the final result videos: one at the head of the article and one below. Since it is a student competition, both the model and the robot code can (and should) be improved to get an edge over opponents.
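For reference, here is an abridged sketch of what the detector configuration in such a notebook looks like. The field names follow aXeleRate's example detector configs at the time of writing, but the paths and values are my placeholders, so treat the actual Colab notebook as the source of truth:

```python
# train_sketch.py -- abridged aXeleRate detector setup with placeholder
# paths and values; the linked Colab notebook has the working configuration.
from axelerate import setup_training

config = {
    "model": {
        "type": "Detector",              # YOLOv2-style detector head
        "architecture": "MobileNet7_5",  # small enough for the robot's K210
        "input_size": 224,
        "anchors": [0.57273, 0.677385, 1.87446, 2.06253, 3.33843,
                    5.47434, 7.88282, 3.52778, 9.77052, 9.16828],
        "labels": ["mark"],
    },
    "train": {
        "actual_epoch": 50,
        "train_image_folder": "images",
        "train_annot_folder": "annotations",
        "batch_size": 8,
        "learning_rate": 1e-4,
        "saved_folder": "sumo_detector",
    },
    "converter": {"type": ["k210"]},     # export a .kmodel the KPU can run
}

model_path = setup_training(config_dict=config)
```

On the robot side, the detection-and-chase logic boils down to something like the sketch below. The sensor and KPU calls are standard MaixPy MicroPython, but the model's flash address, the anchors, and the set_motors() helper are placeholders rather than MARK's actual API; the code section has the full working version.

```python
# sumo_sketch.py -- condensed on-robot loop, not the full course code
import sensor
import KPU as kpu

def set_motors(left, right):
    # Placeholder for MARK's actual motor API; speeds in -100..100
    pass

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.run(1)

task = kpu.load(0x300000)  # flash address where the .kmodel was burned
anchors = (0.57, 0.68, 1.87, 2.06, 3.34, 5.47, 7.88, 3.53, 9.77, 9.17)
kpu.init_yolo2(task, 0.5, 0.3, 5, anchors)  # prob threshold, NMS, 5 anchors

while True:
    img = sensor.snapshot()
    objects = kpu.run_yolo2(task, img)  # returns None if nothing is found
    if objects:
        # Chase the highest-confidence detection
        obj = max(objects, key=lambda o: o.value())
        error = (obj.x() + obj.w() // 2) - img.width() // 2
        turn = error // 4                 # crude proportional steering
        set_motors(80 + turn, 80 - turn)  # drive forward, biased toward it
    else:
        set_motors(40, -40)               # opponent not seen: spin and search
```

Tuning the steering gain, the search behavior, and the detection thresholds is exactly the kind of improvement the competition is meant to reward.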
Stay tuned for more articles from me and updates on the MARK Kickstarter campaign.
For more information on the Grove Zero series, Codecraft, and other hardware for makers and STEM educators, visit our website: https://tinkergen.com/.