In many industries, advocates of artificial intelligence and autonomous technology are quick to promise sweeping transformation and fully autonomous solutions. However, the optimists usually promise more than they can deliver and soon find the engineering challenges are greater than they first realised. In this article, Zohar Kantor, vice president of sales at artificial intelligence start-up Lean.AI, asks whether we celebrated the arrival of autonomous quality inspection too soon.
In 2013, Elon Musk said, “it’s a bridge too far to go to fully autonomous cars.” Although the world of driverless vehicles has moved on significantly since this admission, it was a belated recognition that the Tesla CEO had underestimated the challenges of operating a vehicle without a human being in the driver’s seat.
Having spent a few years working in the field of machine vision, I can see many parallels. With the sudden influx of claims about artificial intelligence and autonomous solutions, quality managers were left with bold promises of fully autonomous quality inspection — the expectation of a system that can operate flawlessly without an operator guiding its setup at every step of the installation.
The Complexity Minefield
The challenge of introducing autonomous machine vision is best examined through the lens of a given use-case. Let’s return to the driving analogy. You can easily have a vehicle that drives itself, without a human in control, if the challenge is to slowly follow a straight line in a closed environment. Autonomous construction vehicles that move material from one area of a quarry to another come to mind. If, however, you are trying to develop a passenger vehicle that can navigate a complex urban environment, the task is completely different. The same applies to machine vision technology. Whether or not we can easily automate something depends largely on the use case and the level of complexity this involves.
There are two areas where we might expect different levels of complexity; these are depicted in Figure 1. Along the y-axis is the complexity of image capture: the difficulty of acquiring a usable image, which is influenced by factors such as the lighting profile, camera setup and requirements, and integration effort. This turned out to be much trickier than many of us anticipated a few years ago. For some applications, where the complexity of capturing an image is low, a straightforward smart sensor and a simple camera with plain white lighting is enough. In other instances, where a special illumination profile other than white light, or an HDR camera, is required, we might classify the use case as high on the scale of image-acquisition complexity.
The x-axis represents the second major challenge: the complexity of defect inspection. Some defects are much harder to spot or categorise than others. In a simple use case, where the complexity of defect inspection is low, a defect might be caught by, for example, checking component presence or polarity. In contrast, detecting a minor scratch on a metal surface after grinding is not an easy task and is highly complex by comparison. Complexity of defect inspection is also high where each unique defect type (class) must be detected and specific criteria, such as size or tolerance, applied to it.
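One way to read Figure 1 is as a simple two-by-two grid. The sketch below encodes that reading in Python; note that the quadrant labels and the “low”/“high” scores are illustrative assumptions for this example only, since the article defines the two axes but not concrete scores.

```python
# Illustrative sketch of the Figure 1 quadrant model.
# The grid labels below are assumptions for illustration, not part of Figure 1.

def classify_use_case(capture_complexity: str, defect_complexity: str) -> str:
    """Place a use case on the two-axis complexity grid.

    capture_complexity: "low" or "high" -- difficulty of image acquisition
        (lighting profile, camera setup, integration effort).
    defect_complexity: "low" or "high" -- difficulty of spotting or
        categorising the defect (presence checks vs. fine scratches).
    """
    grid = {
        ("low", "low"): "bottom-left: simple sensor and white light may suffice",
        ("low", "high"): "bottom-right: easy capture, hard inspection",
        ("high", "low"): "top-left: special illumination or HDR, simple defects",
        ("high", "high"): "top-right: hardest, custom optics plus heavy labelling",
    }
    return grid[(capture_complexity, defect_complexity)]

print(classify_use_case("low", "low"))
```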
How Many Images Do You Need?
Autonomous quality inspection solutions that claim to work point-and-shoot, with a camera and integrated software trained on a small sample of around 50 good images, are very ambitious. However, this might be achievable in use cases where both the complexity of acquisition and the complexity of defect inspection are low. With a simple acquisition setup, consistent product texture and deterministic defects, it is realistic.
However, unless your use case sits in the bottom-left quadrant of Figure 1, it is a different story. Where there are many surface nuances and highly complex defects with specific tolerances, in addition to ‘permissible defects’ which are accepted, the only way to train a model is by tagging huge data sets. Expecting an autonomous machine vision system to self-learn such sophisticated use cases is simply not going to work. Much like driverless cars in 2013, it’s a bridge too far.
Although there is no escaping the need for large numbers of images in complex use cases, greater automation still holds the key to improving quality inspection. A hybrid approach uses sophisticated AI models to automate the learning process while allowing the user to provide feedback, when needed, to steer it in the right direction. The challenge is to minimise the amount of guidance, not to eliminate it. The premise is that the user holds deep knowledge of the target product; using that knowledge to guide the learning process is therefore a win-win, automating the process while allowing the user to provide guidance where needed. That is why Lean.AI is focused on building a solution that significantly reduces the effort required to train and retrain inspection systems.
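The hybrid approach described above is, in machine learning terms, a human-in-the-loop (active learning) workflow: the model labels what it can and asks the operator only about the samples it is least sure of. The following is a minimal sketch of such a loop; all names (`train`, `most_uncertain`, `ask_operator`) are hypothetical placeholders, since the article describes the approach, not an API.

```python
# Minimal human-in-the-loop (active learning) sketch.
# Every function here is a hypothetical stand-in, not a real product API.

import random

def train(labelled):
    """Stand-in 'model': simply remembers the labelled examples."""
    return dict(labelled)

def most_uncertain(model, pool, k=2):
    """Pick the k samples the model is least sure about.
    As a placeholder, we treat all unseen samples as equally uncertain."""
    unseen = [s for s in pool if s not in model]
    return random.sample(unseen, min(k, len(unseen)))

def ask_operator(sample):
    """Placeholder for operator feedback on a queried sample."""
    return "defect" if "scratch" in sample else "good"

pool = ["part_ok_1", "part_scratch_2", "part_ok_3", "part_scratch_4"]
labelled = {"part_ok_1": "good"}  # small seed set of tagged images
model = train(labelled)

# The model learns automatically; the operator labels only what it asks
# for, minimising (but not eliminating) human guidance.
while len(labelled) < len(pool):
    for sample in most_uncertain(model, pool):
        labelled[sample] = ask_operator(sample)
    model = train(labelled)

print(model)
```

The design point is the query step: rather than tagging the whole data set up front, the operator’s product knowledge is spent only where the model needs it.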
For more information: www.lean-ai-tech.com