People in various industries are increasingly interested in using robotic quality control solutions. They recognize that doing so could help them save money, reduce defects and enhance outcomes. Many of these automated options include 3D vision systems. They allow machines to “see” objects and other aspects of the environment and spot when things are amiss. Here’s a closer look at how this technology can take quality control to the next level.
Making Metalworking Inspections More Streamlined
One of the primary reasons decision-makers pursue robotic systems is to improve current processes, which can make a company more resilient and better able to respond to challenges. Saccade Vision and partner Euclid Labs recently announced the sale of their robotic 3D vision system to a multinational conglomerate associated with industrial engineering and steel production. Implementing the technology there will bring numerous improvements to the company's inspection processes.
Saccade Vision’s system, which can check all machined parts after processing, features a microelectromechanical system (MEMS) laser illumination module as a light source. It scans the metal in all directions, but people can optimize the scan direction to avoid turning the part during quality control checks.
Saccade Vision’s approach also uses 100 million three-dimensional points when gathering data about a component. However, it works without needing a large data set by continually focusing on the most meaningful aspects of the part being inspected.
Moreover, the technology can change its resolution by scanning some sections of a component at low resolution but switching to a high-resolution scan across specific areas. That could be useful if companies have had particular quality issues and must reduce them. This system also uses a single robot for loading, unloading and inspecting the parts, allowing manufacturing leaders to minimize their overall robotics investment without compromising results.
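The adaptive-resolution idea described above can be sketched in a few lines. The region names, resolutions and function below are hypothetical illustrations, not Saccade Vision's actual interface: the point is simply that regions with a history of quality issues get a finer scan while the rest of the part is covered coarsely.

```python
# Hypothetical sketch of adaptive-resolution scan planning: coarse scan by
# default, high resolution only over regions with a history of defects.

def plan_scan(regions, problem_regions, coarse_mm=0.5, fine_mm=0.05):
    """Assign a scan resolution (in mm) to each named region of a part."""
    plan = {}
    for region in regions:
        # Regions flagged from past quality issues get the fine resolution.
        plan[region] = fine_mm if region in problem_regions else coarse_mm
    return plan

plan = plan_scan(["flange", "bore", "face"], problem_regions={"bore"})
```

In practice, a planner like this would let the system spend its scanning budget where defects are most likely, rather than scanning everything at maximum resolution.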
When discussing the technology, Roberto Polesel, CEO of Euclid Labs, said there was an initial focus on using the system to inspect sheet metal flanges and angles after bending. Another goal was to verify the correct assembly processes for multiple items. Polesel explained that a major part of adopting software-defined manufacturing was eliminating previous processes that required inspections with various tools to collect single measurements. Robotic 3D vision systems go a long way in enabling such improvements.
Enabling Better Quality for Robotic Welding Systems
Welding processes frequently cause defects that negatively impact overall quality, strength and performance. However, some companies now use computer vision systems for improved welding outcomes. Some solutions even feature automatic welding path corrections, which greatly reduce the need for human intervention.
One machine-vision solution used by John Deere analyzes welds in progress and stops the process after discovering a defect. This approach ensures people catch flaws early and can take prompt corrective action.
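The stop-on-defect behavior can be illustrated with a minimal monitoring loop. This is not John Deere's actual system; the defect scores and threshold are invented for the example, standing in for whatever per-frame signal a real vision model would produce.

```python
# Illustrative sketch: inspect weld frames as they arrive and halt the
# process as soon as a defect score crosses the alert threshold.

def monitor_weld(frame_scores, threshold=0.8):
    """Return (completed, frames_inspected); stop early on first defect."""
    for i, score in enumerate(frame_scores, start=1):
        if score >= threshold:
            return False, i   # defect found -> stop the weld immediately
    return True, len(frame_scores)

ok, inspected = monitor_weld([0.1, 0.2, 0.9, 0.1])
```

Halting at the first flagged frame is what makes early corrective action possible, since the defect is caught mid-weld instead of at a final inspection.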
Welders can also use 3D scanners to automatically calibrate robotic welding systems. Setting a monthly or quarterly recalibration interval can help ensure that these systems remain accurate, and it’s a best practice to calibrate equipment before starting new projects. A 3D scanner could streamline this process and keep operations running like a well-oiled machine.
One automated welding option with a 3D scanner calibrates the system in approximately 90 seconds. Researchers from the University of Ljubljana and Yaskawa Slovenija demonstrated its effectiveness by welding thick pipes, cast-machined hollow components and thin sheet metal. They concluded that automated calibration was an important feature for enabling the robot to perform self-monitoring for better quality control.
Enhancing Automotive Assembly Lines
Car assembly is a massive undertaking, occurring in huge factories and involving numerous teams of trained professionals. 3D vision systems mounted onto robots can make it easier.
Consider the example of an automated assembly line deployed at Geely Automobile Holdings in Ningbo, China. The setup can create permanent magnet motor rotors, gearboxes and electronic drives, and also automate quality control and testing. This solution doubles the plant’s automation capabilities to 80%. Moreover, it includes a 3D vision system to put components in the correct position and guide them along the right path during assembly.
General Motors uses a 3D robotic scanning system as a significant part of quality control at one of its electric vehicle factories. It measures the car’s body and all frame dimensions to verify they meet specifications. Workers then receive a color-coded map showing which components fall outside those parameters. This part of the process took six hours before the company started using robotics to complete it. It now gets done in two.
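The color-coded map concept boils down to comparing measured dimensions against nominal specifications with a tolerance band. The dimension names, values and tolerance below are invented for illustration and do not reflect GM's actual specs:

```python
# Hedged illustration of a color-coded deviation map: each measured body
# dimension is labeled green (in spec) or red (out of spec).

def deviation_map(measured, nominal, tol=1.0):
    """Return {dimension: (color, deviation)} for each measurement."""
    report = {}
    for name, value in measured.items():
        dev = value - nominal[name]
        color = "green" if abs(dev) <= tol else "red"
        report[name] = (color, round(dev, 2))
    return report

report = deviation_map(
    {"wheelbase": 2850.4, "door_gap": 6.8},   # measured, in mm
    {"wheelbase": 2850.0, "door_gap": 5.0},   # nominal spec, in mm
)
```

A report like this lets workers scan for red entries instead of reading raw measurement tables, which is a large part of the time savings.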
In another case, Adient, a global manufacturer of interior car parts and vehicle seats, sought to improve the accuracy of foam trimming. It worked with a robotics specialist to develop a system with a pair of six-axis robots.
One robot took 3D images of a foam part and generated a highly accurate 360-degree model. The next one in the assembly line trimmed and reworked the piece based on what the imagery showed. It also operated a grinder to remove rough areas on the foam surface. Together, these robots can handle 120 foam pieces per hour.
Offering Progressively Better Performance
Some 3D vision systems improve as people use them for longer periods. That’s often because they have machine learning technology that automatically adjusts based on accumulated data.
For example, the Solomon Solmotion product, intended for industrial manufacturing, can automatically recognize a product’s position and correct the path of robotic machines accordingly. This feature means the associated robots can operate without fixtures and react to environmental changes in real time. Onboard machine learning technology also identifies an object’s attributes and potential defects.
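The core of vision-based path correction can be sketched simply: compare where the part was expected to be with where the camera actually found it, then shift the planned path by that offset. This is a simplified 2D illustration, not Solomon's actual API, and a real system would handle rotation and 3D pose as well.

```python
# Simplified sketch of vision-based path correction: translate every
# planned waypoint by the offset between expected and detected positions.

def correct_path(path, expected_pos, detected_pos):
    """Shift all (x, y) waypoints by the detected positional offset."""
    dx = detected_pos[0] - expected_pos[0]
    dy = detected_pos[1] - expected_pos[1]
    return [(x + dx, y + dy) for x, y in path]

corrected = correct_path(
    path=[(0.0, 0.0), (10.0, 0.0)],   # planned waypoints
    expected_pos=(0.0, 0.0),          # where the part should sit
    detected_pos=(1.5, -0.5),         # where vision actually found it
)
```

Because the correction is computed per part, no fixture is needed to force every piece into exactly the same position.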
A noncommercialized example of what’s possible came from researchers at the University of Žilina, whose team achieved its process improvements through calibration adjustments rather than machine learning. They developed a bolt-tightening robot with a built-in computer vision system capable of recognizing 3D objects.
When the researchers realized the robotic system initially failed to identify some objects, they altered the parameters of elasticity, area overlap, and contrast and score thresholds to fix the problem. They also recognized performance differences based on the distance between the screw head and camera lens.
The researchers ran the robot in a production environment for approximately two years. Through their lab experiments and real-world refinements, the team reduced the machine’s errors to only six attempts per day that did not pass quality control checks, representing less than 0.1% of all screwing operations.
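The reported figures imply a substantial daily workload, which a quick back-of-the-envelope check makes explicit: six failures per day at a failure rate below 0.1% means the robot must perform more than 6,000 screwing operations daily.

```python
# Sanity-checking the reported figures: if six failed attempts per day
# amount to less than 0.1% of operations, daily volume exceeds 6,000.

failed_per_day = 6
max_failure_rate = 0.001  # 0.1%, expressed as a fraction

# The minimum operation count at which 6 failures equals exactly 0.1%;
# the actual count must be higher for the rate to stay below that.
min_daily_operations = failed_per_day / max_failure_rate
```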
How Will You Use Robotic 3D Vision Systems?
These applications show plenty of potential for using 3D vision systems on industrial robots. The results are often impressive, whether teams test the technology in a laboratory or develop something scalable enough for commercialization.
However, decision-makers must take the time to understand how machine vision works and what advantages it could bring if used along with robots. Having clear objectives and goals before implementation usually makes the outcomes more meaningful.
Author: Emily Newton is the editor-in-chief of Revolutionized, an online magazine exploring the innovations disrupting the scientific and industrial sectors. She has over six years of experience covering industrial stories.