
When Do I Need AI in My Machine Vision System?

As artificial intelligence (AI) becomes more ingrained in both our personal and professional lives, it’s reasonable to assume AI is a must-have for any machine vision system too. However, we want to make it clear: AI-based machine vision is not replacing traditional vision systems. There are quite a few machine vision tasks where adding AI would not offer any further benefit, such as those that involve clearly defined parameters, highly uniform parts, or reading basic barcodes.


So, how do you know when a machine vision application could benefit from the use of AI? In short, an AI-based machine vision system is best for performing complex visual tasks where traditional rules-based systems are proving to be insufficient. But before jumping to the conclusion that AI is the only solution, you need to make sure you have a solid vision setup first. This includes optimizing factors such as lighting, part positioning, inspection angles, and system training (you can learn more about performing these tasks in this white paper).

If you’ve confirmed your setup is optimal for the tasks you need to perform and you are still not seeing the desired results, then bringing AI into your system is likely the best option. Let’s look at some of the common conditions where AI is typically necessary to achieve optimal results.

Common Conditions for Incorporating AI in Your Machine Vision System

In general, AI-based machine vision is an excellent option for performing tasks that involve high variability and complex pattern recognition. This is because AI is not rules-based. Instead, AI-based systems are trained on large datasets of images that capture, for example, the many variations of a good or bad part and what defects look like. Below are four examples of scenarios where an AI-based machine vision system can provide much more accurate results than a traditional vision system.
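To make this concrete, here is a minimal sketch of how such a model might be trained on labeled example images using an off-the-shelf deep learning library. The folder layout, model choice, and training settings are illustrative assumptions, not a description of any particular vendor’s workflow.

```python
# Minimal sketch: train a good/bad part classifier by fine-tuning a pretrained
# backbone. Assumes an illustrative folder layout of parts/good and parts/bad.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder expects one subdirectory per class, e.g. parts/good and parts/bad
dataset = datasets.ImageFolder("parts", transform=transform)
loader = DataLoader(dataset, batch_size=16, shuffle=True)

# Start from a pretrained backbone and retrain only the final layer for 2 classes
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```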

Inspecting Stamped Metal

Stamping a code for traceability purposes onto a piece of metal is necessary for a wide variety of parts, especially those used throughout the automotive, aerospace and defense, and oil and gas industries. If you are trying to inspect these codes using optical character recognition (OCR) on a traditional machine vision system, you may run into issues due to the high variability of the stamped metal codes. Character edges are often not clearly defined and the stamping itself may be inconsistent as the stamping tool wears or stamping pressure varies. An AI-based system can be trained to handle this variability in stamping quality and appearance to much more accurately read distorted, partially stamped, or degraded codes. 
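As a rough illustration, a prototype for reading stamped codes with deep-learning OCR might combine a contrast-enhancement step with an open-source neural OCR library such as EasyOCR. The file name, character allowlist, and preprocessing parameters below are assumptions for the sketch.

```python
# Sketch: boost local contrast on a stamped-metal image, then run a neural OCR
# detector/recognizer over it. File name and parameters are illustrative.
import cv2
import easyocr

# Load the stamped-metal image and enhance local contrast so faint strokes stand out
gray = cv2.imread("stamped_code.png", cv2.IMREAD_GRAYSCALE)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(gray)

# EasyOCR runs a deep-learning text detector and recognizer
reader = easyocr.Reader(["en"], gpu=False)
results = reader.readtext(enhanced, allowlist="ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")

for bbox, text, confidence in results:
    print(f"{text} (confidence {confidence:.2f})")
```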

Looking at Spherical and Cylindrical Surfaces

When working with a spherical or cylindrical surface, visual distortion often occurs: the area of interest on a part may appear skewed, elongated, or compressed. This distortion makes it difficult for a traditional vision system to inspect these surfaces for defects or to read a label or code wrapped around the object. AI can be trained to adapt to the distortion caused by a curved surface, identifying even subtle defects or reading codes that appear warped.
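One complementary approach, whether the downstream reader is traditional or AI-based, is to geometrically "unwrap" the curved region before inspection. The sketch below is a simplified approximation that assumes a vertical cylinder roughly centered in the cropped image; it is not a full calibration, and the file names and geometry are assumptions rather than a step taken from this article.

```python
# Sketch: flatten a cylindrical label region with an approximate unwrap.
# Assumes the cylinder axis is vertical and roughly centered in the crop.
import cv2
import numpy as np

label = cv2.imread("cylinder_label.png")           # cropped region around the label
h, w = label.shape[:2]
radius = w / 2.0                                   # apparent cylinder radius in pixels

# Destination width covers the visible half-circumference (pi * radius arc length)
out_w = int(np.pi * radius)
map_x = np.zeros((h, out_w), dtype=np.float32)
map_y = np.zeros((h, out_w), dtype=np.float32)

for u in range(out_w):
    theta = (u / out_w) * np.pi - np.pi / 2        # angle from -90 to +90 degrees
    map_x[:, u] = radius + radius * np.sin(theta)  # where that arc position appears in the image
    map_y[:, u] = np.arange(h)

unwrapped = cv2.remap(label, map_x, map_y, interpolation=cv2.INTER_LINEAR)
cv2.imwrite("unwrapped_label.png", unwrapped)
```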

Examining Shiny Surfaces

Beyond stamped metal, there are many use cases where machine vision is needed to inspect shiny parts, ranging from reflective pouches that hold food products to objects with a metallic coating to shiny plastic. Some of these surfaces pose an added challenge because they are not a single flat plane; packaging, for example, may have some rumpling in it. Even under the best conditions, with properly constrained lighting and hot spots eliminated, traditional vision systems may still fail to inspect these surfaces consistently because reflections can disguise a defect. AI-based machine vision can be trained to look for subtle changes on a shiny surface, even when there is high variability from part to part.
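A common AI technique for this kind of subtle, hard-to-specify defect is unsupervised anomaly detection: train a model only on images of good parts and flag anything it cannot reconstruct well. The small autoencoder below is an illustrative sketch; the architecture, image size, and scoring threshold would all be tuned for a real application.

```python
# Sketch: a small convolutional autoencoder intended to be trained on good-part
# images only; parts with unusually high reconstruction error are flagged.
import torch
import torch.nn as nn

class SurfaceAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = SurfaceAutoencoder()

def defect_score(image_batch: torch.Tensor) -> torch.Tensor:
    """Mean per-image reconstruction error; higher values suggest a defect."""
    with torch.no_grad():
        reconstruction = model(image_batch)
    return ((image_batch - reconstruction) ** 2).mean(dim=(1, 2, 3))

# Example: score a batch of 4 random 128x128 "images" (stand-ins for real captures)
scores = defect_score(torch.rand(4, 3, 128, 128))
print(scores)
```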

Moving and Inspecting Translucent Parts

Translucent parts are inherently challenging to work with since light can pass through them, leading to part-to-part variation in how the vision system interprets what it sees. Whether you need to look for defects in a translucent part such as a vial, or perform automated pick and place, these tasks can be difficult using traditional vision systems.

As noted in one of our previous posts, the importance of good lighting cannot be overemphasized, especially when working with translucent parts. Often, a color other than bright white – red, green, or even blue, depending on the substrate – will make that part definition really pop for detection by the vision system. Combined with the proper lighting, AI-based vision systems can be trained to recognize parts that are distorted by partial transparency, or parts that are overlapping or touching another translucent part, allowing for individual parts to be correctly isolated. Additionally, AI systems can be trained to distinguish the difference between surface variability inherent to translucency and actual defects such as cracks or bubbles. 
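For the pick-and-place case, one way this is often approached is with an instance segmentation model that outputs a separate mask for each part, even when parts overlap. The sketch below uses a general-purpose pretrained Mask R-CNN purely as a placeholder; in practice the model would be fine-tuned on labeled images of your specific translucent parts, and the file name shown is hypothetical.

```python
# Sketch: separate overlapping parts with instance segmentation so each mask
# isolates one candidate object for the robot to pick.
import torch
from torchvision.io import read_image, ImageReadMode
from torchvision.models.detection import maskrcnn_resnet50_fpn, MaskRCNN_ResNet50_FPN_Weights
from torchvision.transforms.functional import convert_image_dtype

weights = MaskRCNN_ResNet50_FPN_Weights.DEFAULT
model = maskrcnn_resnet50_fpn(weights=weights)
model.eval()

# Hypothetical capture of a bin of translucent vials under colored backlighting
image = convert_image_dtype(read_image("vial_bin.png", ImageReadMode.RGB), torch.float)

with torch.no_grad():
    predictions = model([image])[0]

# Keep confident detections; each mask corresponds to one detected instance
for score, mask in zip(predictions["scores"], predictions["masks"]):
    if score > 0.8:
        pixel_count = (mask[0] > 0.5).sum().item()
        print(f"candidate part, score {score:.2f}, mask area {pixel_count} px")
```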

When Nothing Else is Working, AI May Be the Answer

Even if your application is not addressed above, if you feel like your lighting is optimized and you’ve done everything you can to program your rules-based vision system, yet it’s still not working as expected, you may need to incorporate AI in your system. Start by talking through your scenario with a knowledgeable integrator such as ACE so we can first analyze the setup of your current system and determine if AI is the right choice. Then, we can work together to define and train your system to meet your needs.

Learn how ACE can partner with your organization to meet your robotics and machine vision needs.