Human vision is a deeply complex sense. The way our bodies see and interact with the world is layered, nuanced and difficult to mimic in machines. Xu Chen, an assistant professor at the University of Washington, believes that tighter integration of machine vision, intelligence and manipulation is the key to making robotic systems that are better at assisting humans with complex tasks.
“An ultimate goal in robotics is to build a research path to autonomy so that robots can help people do their jobs more effectively,” says Chen. “We want to develop systems that will work alongside humans and remove the parts of a job that are strenuous, unsafe or wasteful. Improving their vision and perception is an especially important part of that.”
Inspection Of Complex Metallic Parts
Over the past year, with support from the Advanced Robotics for Manufacturing (ARM) Institute, Chen was part of a team of engineers from the University of Washington, the University of Connecticut and GKN Aerospace who put their robots to the task of inspecting manufactured parts for imperfections. Specifically, they wanted a system that could spot some of the most difficult-to-discern defects – those in curved, complex metallic parts like the ones used in airplane turbines.
A curved metal part is hard to study closely. Its surface is shiny and bent, so lighting, viewing angle and distance all affect what can be seen and brought into focus.
“When working with airplane parts, the smallest defect can lead to a critical structural integrity issue, so the stakes are high,” says Alex Strzelecki, a software and automation engineer at GKN Aerospace who worked with Chen on the project. “Inspecting these parts is eye-straining and labor-intensive work for a person, but also a major challenge for a robotic system.”
The researchers needed their system to be fast and precise, but they also needed it to be flexible. It’s one thing to set up a robot inspector for a large-scale assembly process, like automobile production; it’s quite another for manufacturing processes with smaller runs of parts, as in aerospace or emerging 3D-printing technologies.
The team’s robot inspector consistently spotted defects 95% of the time, substantially better than most human inspectors. Thanks to the camera’s visual feedback, the robot can assess how it is doing on the fly, automatically adapting its motion and even reconfiguring its lighting to counteract differences in the geometry and orientation of the parts along their curved, reflective surfaces. And because the process generates quantitative data, it can quickly sort parts that may just need more machining or polishing from those destined for the scrap heap.
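The feedback loop described above — capture an image, judge its quality, adjust the rig if needed, then classify the part — can be sketched in a few lines. The following is a minimal, hypothetical illustration, not the team’s actual system: the sharpness proxy, thresholds, and `capture`/`adjust` callbacks are all invented for the example.

```python
def image_sharpness(pixels):
    """Toy focus metric: mean absolute difference between neighboring pixels.
    A flat (blurry) image scores low; a high-contrast (sharp) one scores high."""
    return sum(abs(a - b) for a, b in zip(pixels, pixels[1:])) / (len(pixels) - 1)

def classify(pixels, defect_threshold=200):
    """Toy defect rule: an unusually bright pixel flags a surface defect."""
    return "defect" if max(pixels) >= defect_threshold else "pass"

def inspect(capture, adjust, max_attempts=3, sharpness_threshold=10.0):
    """Visual-feedback loop: retake the image, adjusting lighting/pose,
    until it is sharp enough to classify or attempts run out."""
    for attempt in range(max_attempts):
        pixels = capture()
        if image_sharpness(pixels) >= sharpness_threshold:
            return classify(pixels)
        adjust(attempt)  # e.g., refocus, reposition, or change lighting
    return "reject: no usable image"

# Simulated rig: the first capture is blurry; after one adjustment
# the camera returns a sharp image containing a bright defect pixel.
state = {"focused": False}

def capture():
    return [50, 120, 50, 250] if state["focused"] else [100, 100, 100, 100]

def adjust(attempt):
    state["focused"] = True

print(inspect(capture, adjust))  # → defect
```

The design point is that image quality is checked before classification, so a poor view triggers a corrective action rather than a wrong verdict — the same idea, at toy scale, as the robot reconfiguring its lighting and motion along a reflective surface.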
“As humans we take for granted how intuitively our brain makes choices about prioritizing vision, perception and cognition, but those choices matter a lot,” notes Chen. “We already have so many types of robotic sensors and inputs to choose from and in five years we will have even more, so the options are almost infinite.”
For more information: https://www.macslab.xyz