The Ratchet Rockers win awards year after year for our innovative vision solutions designed to take full advantage of autonomous mode. We publish our code as open source after every season, available on GitHub: https://github.com/rr1706.
During the 2014 robotics season, the team used an array of three modified IR cameras, mounted in a triangular layout to provide 360-degree situational awareness of the field and surroundings. The on-board computer was an ODROID-XU, a 32-bit ARM board built around the eight-core Samsung Exynos 5410 (a Cortex-A15/Cortex-A7 big.LITTLE design) with 2 GB of RAM. This computer ran a custom vision algorithm in real time, triangulating the robot's position from the retro-reflective targets at the corners of the field, which redirected IR light from an on-robot LED array back into the cameras. The team achieved usable results at 5 position solutions per second.
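The core of that triangulation can be sketched as a small linear solve: given field-frame bearings to two targets at known coordinates, the robot sits where the two bearing rays intersect. The sketch below is illustrative only; the function name, the NumPy formulation, and the assumption of a gyro-supplied heading are ours, not the team's published code:

```python
import numpy as np

def triangulate(t1, t2, bearing1, bearing2):
    """Estimate the robot's (x, y) from field-frame bearings to two targets.

    t1, t2     -- (x, y) field coordinates of two retro-reflective targets
    bearing1/2 -- absolute bearing (radians) from robot to each target,
                  i.e. camera angle plus gyro heading (an assumption here)
    """
    u1 = np.array([np.cos(bearing1), np.sin(bearing1)])
    u2 = np.array([np.cos(bearing2), np.sin(bearing2)])
    # The robot position P satisfies t_i = P + d_i * u_i for unknown
    # ranges d_i, so d2*u2 - d1*u1 = t2 - t1 is a 2x2 linear system.
    d1, d2 = np.linalg.solve(np.column_stack((-u1, u2)),
                             np.asarray(t2, float) - np.asarray(t1, float))
    return np.asarray(t1, float) - d1 * u1

# Example: targets at two field corners, robot actually at (4, -3).
print(triangulate((0, 0), (8, 0), np.arctan2(3, -4), np.arctan2(3, 4)))
```

In practice more than two targets would often be visible, and a least-squares solve over all of them would smooth the position estimate.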
During the 2015 robotics season, the team again used an ODROID-XU, but replaced the camera array with a single ASUS Xtion Pro Live mounted under the elevator. This camera provided RGB color, IR, and depth-map images from one device. The robot used the depth-map imagery (an image in which each pixel encodes the distance to that point rather than a color) to calculate the height of tote stacks and to find the position of the totes relative to the center of the robot for easy retrieval.
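Both measurements fall out of back-projecting depth pixels through the pinhole camera model. A minimal sketch follows; the intrinsics are nominal values for a 640x480 Xtion depth stream and the function names are ours, not the team's actual code, and a real setup would use calibrated parameters:

```python
import numpy as np

# Nominal intrinsics for a 640x480 Xtion Pro Live depth stream
# (illustrative values; real use calls for calibration).
FX, FY = 570.3, 570.3   # focal lengths in pixels
CX, CY = 320.0, 240.0   # principal point

def pixel_to_camera(u, v, depth_mm):
    """Back-project one depth pixel to a 3D point (metres), camera frame."""
    z = depth_mm / 1000.0
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])

def tote_offset_and_height(depth, top_row, bottom_row, col):
    """Lateral offset of a tote stack from the camera axis, and its height,
    from the top and bottom of the stack in one image column.
    Assumes the camera is mounted roughly level."""
    top = pixel_to_camera(col, top_row, depth[top_row, col])
    bottom = pixel_to_camera(col, bottom_row, depth[bottom_row, col])
    lateral_offset = top[0]            # metres left/right of the camera axis
    stack_height = bottom[1] - top[1]  # image y increases downward
    return lateral_offset, stack_height
```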
The team won Innovation in Control at the St. Louis regional in 2015, specifically for the vision algorithm and its integration with the robot's control system.
After the 2015 World Championship, NVIDIA gave the team two Jetson TK1 development boards in recognition of our achievement and potential. The team plans to run one of these boards on the 2016 robot: the TK1's CUDA cores offer a large increase in computing power, and vision algorithms take naturally to GPUs, which were designed for exactly this kind of parallel per-pixel processing.
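As an illustration of what that offload looks like, the per-pixel stages of a target-detection pipeline can be moved onto the GPU with OpenCV's CUDA module while contour extraction stays on the CPU. This is a hedged sketch, not the team's 2016 code: it assumes an OpenCV build with CUDA support (as in the Jetson's OpenCV4Tegra packages), and the function name and threshold value are placeholders:

```python
import cv2

def find_bright_targets(frame_bgr, thresh=200):
    """Threshold for bright (e.g. retro-reflective) regions on the GPU,
    then extract contours on the CPU."""
    gpu = cv2.cuda_GpuMat()
    gpu.upload(frame_bgr)                                   # host -> device
    gray = cv2.cuda.cvtColor(gpu, cv2.COLOR_BGR2GRAY)       # runs on CUDA cores
    _, mask = cv2.cuda.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    mask_cpu = mask.download()                              # device -> host
    # [-2] picks the contour list under both OpenCV 3 and 4 return signatures.
    return cv2.findContours(mask_cpu, cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[-2]
```

The upload/download copies are the usual bottleneck, so GPU offload pays off most when several filtering stages run back-to-back on the device before anything is copied back.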