I've just entered the exciting world of robot competitions, having participated in regional events for robot fighting and sumo. I'm eager to take my skills to the next level in the upcoming league by incorporating computer vision into my robot design. I plan to use an external camera to monitor the arena and gather data about the robots in real-time.
My main question is whether it's feasible to identify the front and back sides of my opponent's robot during the competition. I'm also curious if I can implement a system to distinguish my robot (friend) from other bots (foes). I'm still a novice in computer vision, but I have a full year to learn as I go, and I want to code everything in C++. Any guidance would be awesome!
2 Answers
You can definitely use a camera for recognition in robot competitions! OpenCV is fantastic for this purpose, and while you can certainly work in C++, it's worth considering Python too: it's often faster for prototyping and has strong machine-learning tooling. As for telling the front from the back: if the rules allow markers on the robots, use them; simple colored or shaped markers make detection far easier. Beyond that, you can try feature-based approaches such as SIFT matching, or HOG descriptors fed into a trained classifier, for more robust recognition. If the robots have no distinctive features at all, things get much trickier, and you might need a machine-learning classifier, which in turn requires a powerful system.
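As a hedged sketch of the marker idea above: if your overhead camera can locate two patches on a robot (one at the front, one at the back), the heading falls out of simple geometry. This assumes you already have the two patch centroids from whatever detector you use; the struct and function names here are made up for illustration:

```cpp
#include <cmath>

// Hypothetical 2D image coordinates of a detected marker centroid.
struct Point2D { double x, y; };

// Heading (radians) from the back marker toward the front marker.
// Note: image y grows downward, so this is an angle in image coordinates,
// not in the arena's world frame.
double headingFromMarkers(Point2D front, Point2D back) {
    return std::atan2(front.y - back.y, front.x - back.x);
}
```

With one patch per end, this single `atan2` is the entire orientation estimate; the hard part is producing reliable centroids in the first place.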
Since your camera can see both your robot and others, you'd need a unique identifier for yourself. By ‘friend-or-foe,’ I assume you want to filter out non-targets, right? That sounds like a smart approach!
Absolutely, it's possible with an overhead camera! A good way to determine the front and back of a robot is to use unique markers, such as colored patches or specific shapes, one per end; comparing their positions gives you the orientation directly. For your friend-or-foe system, give your own robot a distinct color pattern or fiducial marker so your vision system can pick it out reliably. Start with basic color thresholding and contour detection in OpenCV—it's essential to learn those fundamentals first. Just be aware that arena lighting will affect your results, so expect to re-tune your thresholds on site.
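In OpenCV the thresholding step would be `cv::inRange` followed by `cv::findContours`/`cv::moments`; to keep the example self-contained, here is the same idea sketched in plain C++ over a raw HSV pixel buffer (the struct and parameter names are invented for illustration):

```cpp
#include <cstdint>
#include <vector>

// One HSV pixel; ranges follow OpenCV's 8-bit convention (H: 0-179).
struct HsvPixel { uint8_t h, s, v; };

// Threshold a raw HSV frame against a hue window plus saturation/value
// floors, and return the centroid of all in-range pixels.
// Returns false if nothing matched.
// (With OpenCV you'd get the same effect from cv::inRange + cv::moments.)
bool markerCentroid(const std::vector<HsvPixel>& frame, int width,
                    uint8_t hLo, uint8_t hHi, uint8_t sMin, uint8_t vMin,
                    double& cx, double& cy) {
    long sumX = 0, sumY = 0, count = 0;
    for (std::size_t i = 0; i < frame.size(); ++i) {
        const HsvPixel& p = frame[i];
        if (p.h >= hLo && p.h <= hHi && p.s >= sMin && p.v >= vMin) {
            sumX += static_cast<long>(i % width);
            sumY += static_cast<long>(i / width);
            ++count;
        }
    }
    if (count == 0) return false;
    cx = static_cast<double>(sumX) / count;
    cy = static_cast<double>(sumY) / count;
    return true;
}
```

Working in HSV rather than RGB is what makes the lighting problem manageable: brightness changes mostly move V, so a hue window with generous S/V floors survives arena lighting better than RGB thresholds.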

I've been thinking about the computing needs too. You could run the processing on a PC and send the results to your robot through Wi-Fi to lighten the load on the onboard computer.
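To make the PC-to-robot link concrete: one simple approach is to pack each tracked pose into a small text datagram on the PC and send it over UDP (e.g. with `sendto` on POSIX), then parse the same string on the robot. The `id,x,y,theta` message layout and function names here are an arbitrary choice for illustration, not a standard protocol:

```cpp
#include <cstdio>
#include <string>

// A tracked robot pose as the vision PC would report it.
struct Pose { int id; double x, y, theta; };

// Serialize a pose into a short text message suitable for a UDP datagram.
std::string encodePose(const Pose& p) {
    char buf[64];
    std::snprintf(buf, sizeof(buf), "%d,%.1f,%.1f,%.3f",
                  p.id, p.x, p.y, p.theta);
    return std::string(buf);
}

// Parse the same format on the robot's side; returns false on a bad message.
bool decodePose(const std::string& msg, Pose& out) {
    return std::sscanf(msg.c_str(), "%d,%lf,%lf,%lf",
                       &out.id, &out.x, &out.y, &out.theta) == 4;
}
```

Keeping the messages tiny and stateless like this also means a dropped Wi-Fi packet only costs you one frame of data, which matters in a fast-moving match.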