The Diversity of Vision Guided Robotics

By Chetan Kapoor, Senior Director of Technology Innovation, Yaskawa America, Inc., Motoman Robotics Division

Industrial robotics has been an established means of manufacturing automation since the 1970s. Today, there are over two million operational industrial robots, and by 2020 that number is expected to reach three million. Most robots deployed to date have worked in structured environments, in handling applications and in process applications such as welding and painting.

Early on, it was recognized that adding perception could unleash the potential of industrial robots as truly flexible machines that work in semi-structured environments, reducing the need to teach in detail every position a robot must target. This led to the introduction of Vision Guided Robotics (VGR), which works by finding the target object and telling the robot where it is located.
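
As a rough sketch of that core step, the example below converts a part position detected in the camera frame into a robot-frame pick target. The calibration transform and detected point are illustrative assumptions, not values from any particular system:

```python
import numpy as np

# Minimal sketch of the core VGR step: convert a part location detected in
# the camera frame into a robot-frame pick target. The transform would come
# from hand-eye calibration; the values here are illustrative assumptions.
T_robot_camera = np.array([
    [0.0, -1.0, 0.0, 0.50],             # rotation and translation (meters)
    [1.0,  0.0, 0.0, 0.10],
    [0.0,  0.0, 1.0, 0.80],
    [0.0,  0.0, 0.0, 1.0],
])

def camera_to_robot(point_camera: np.ndarray) -> np.ndarray:
    """Map a 3D point from the camera frame into the robot base frame."""
    p_h = np.append(point_camera, 1.0)  # homogeneous coordinates
    return (T_robot_camera @ p_h)[:3]

# A detection reported by the vision system (camera frame, meters).
part_in_camera = np.array([0.12, -0.04, 0.65])
print("Robot-frame pick target:", camera_to_robot(part_in_camera))
```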

VGR allows the robot to work more efficiently in an unpredictable environment. It encompasses the technologies used to recognize, process and handle parts and objects from visual data: cameras, lenses and lighting to capture images, plus the software required to turn those images into reference points for the robot. While vision technologies are effectively bridging the gap to meet unique customer demands, it is important for manufacturers to understand the diversity of vision capabilities, noting the strengths and challenges of each type of system before implementation.

2D Vision: 2D vision is a well-established technology for VGR. Early deployments struggled with limited software availability, sensitivity to lighting variation and the high cost of camera hardware. Significant improvements have since arrived in the form of intelligent cameras with built-in structured lighting and plug-and-play integration with industrial robots. 2D vision works best in structured environments where objects are positioned or stacked in an organized, predictable pattern, so they can be easily imaged and picked.
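
As a minimal illustration of 2D part location, the sketch below uses normalized template matching on a synthetic top-down image; the images and the acceptance threshold are assumptions for demonstration only:

```python
import cv2
import numpy as np

# Minimal 2D-vision sketch: locate a part in a top-down image using
# normalized template matching (cv2.matchTemplate). A synthetic image
# stands in for a real camera capture; the 0.9 threshold is illustrative.
scene = np.full((480, 640), 40, dtype=np.uint8)      # dim, even background
cv2.circle(scene, (330, 230), 25, 255, -1)           # bright round part

template = np.full((60, 60), 40, dtype=np.uint8)
cv2.circle(template, (30, 30), 25, 255, -1)          # taught part appearance

result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, score, _, top_left = cv2.minMaxLoc(result)

if score > 0.9:
    cx = top_left[0] + template.shape[1] // 2        # part center in pixels
    cy = top_left[1] + template.shape[0] // 2
    # A calibrated mm-per-pixel scale would convert (cx, cy) into a
    # robot pick offset.
    print(f"Part found at pixel ({cx}, {cy}), score {score:.2f}")
else:
    print("No part found; check lighting and part presentation")
```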

3D Vision: 3D vision enables robot applications in semi-structured and random environments. An example of a semi-structured application is bin picking, where parts are positioned with some organization and predictability that helps imaging and picking. 3D bin-picking systems are gaining popularity, and their programming has been made easy: the user simply loads CAD models of the objects to be picked. However, the cost of these systems is still high.
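
One common building block of such systems is aligning a point cloud sampled from the part's CAD model to the bin scan. The sketch below does this with ICP registration from the open-source Open3D library; the synthetic clouds and parameters are illustrative assumptions:

```python
import numpy as np
import open3d as o3d

# Sketch of the CAD-matching step in 3D bin picking: align a point cloud
# sampled from the part's CAD model to the bin scan with ICP, yielding the
# part's pose. The random clouds below are synthetic stand-ins.
rng = np.random.default_rng(1)
model_pts = rng.random((500, 3)) * 0.05              # 5 cm part, model frame

true_shift = np.array([0.02, 0.01, 0.0])             # part's actual offset
scan_pts = model_pts + true_shift                    # simulated bin scan

model = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(model_pts))
scan = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(scan_pts))

# In a real system the initial guess comes from a global matcher;
# identity is sufficient for this toy example.
result = o3d.pipelines.registration.registration_icp(
    model, scan, 0.05, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
print("Estimated part pose (model frame -> bin frame):")
print(result.transformation)             # 4x4 pose handed to the robot
```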

Unstructured environments, where parts lie in totally random positions and orientations or are overlapping or entangled, are the next frontier for 3D vision. Such environments are common in the food industry (e.g., slicing of chickens), in logistics (e.g., parcel handling) and in order fulfillment (e.g., kitting and each-picking). Effective solutions for these random environments will require big data and machine learning, as the software must infer results from incomplete perception data. These applications also require automatic, collision-free robot path planning.
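
By way of illustration only, the sketch below ranks grasp candidates on a synthetic partial point cloud using a simple hand-written heuristic (prefer high, locally flat points). In deployed systems a learned grasp model would replace this scoring; the weights here are arbitrary assumptions:

```python
import numpy as np

# Illustrative heuristic only: rank grasp candidates on a partial point
# cloud from a cluttered bin, preferring points that are high (least
# buried) and locally flat (gripper-friendly). In deployed systems a
# learned grasp model replaces this scoring; the weights are arbitrary.
rng = np.random.default_rng(0)
cloud = rng.random((2000, 3)) * [0.3, 0.3, 0.15]     # synthetic bin scan (m)

def grasp_score(cloud: np.ndarray, idx: int, radius: float = 0.02) -> float:
    p = cloud[idx]
    nearby = cloud[np.linalg.norm(cloud - p, axis=1) < radius]
    flatness = -np.std(nearby[:, 2])     # small height spread = flat patch
    return p[2] + 5.0 * flatness         # height plus weighted flatness

candidates = range(0, len(cloud), 50)    # subsample for speed
scores = {i: grasp_score(cloud, i) for i in candidates}
best = max(scores, key=scores.get)
print(f"Best grasp candidate at {cloud[best]}, score {scores[best]:.3f}")
```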

Inspection: Inspection systems judge the quality or fitment of parts, determining whether each part meets the standard. An example of such an application in the food industry is the inspection of pizza toppings. These systems are sometimes viewed as a “cost of doing business,” since their job is to discard parts that do not meet the standard. Simple inspection systems that measure part size, verify color and the like have long been used with robots, which are better suited than humans to quick, repetitive tasks and apply more objectivity.
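
A pass/fail size check of this kind can be sketched in a few lines; the synthetic image, threshold and tolerance below are assumptions for illustration:

```python
import cv2
import numpy as np

# Sketch of a simple pass/fail inspection: threshold the image, measure the
# part's area and compare it against tolerance limits. The synthetic image,
# nominal size and tolerance are illustrative values.
image = np.zeros((480, 640), dtype=np.uint8)
cv2.circle(image, (320, 240), 80, 200, -1)            # synthetic part

_, binary = cv2.threshold(image, 100, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

NOMINAL_AREA = np.pi * 80**2                          # expected size (px^2)
TOLERANCE = 0.05                                      # +/- 5 percent

for contour in contours:
    area = cv2.contourArea(contour)
    ok = abs(area - NOMINAL_AREA) / NOMINAL_AREA <= TOLERANCE
    print(f"area={area:.0f}px^2 -> {'PASS' if ok else 'REJECT'}")
```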

Complex inspection that requires human judgment is still beyond robotic automation. However, with investment in big data and machine learning, inspection will move up the value chain, and its integration with robotics will make it commercially viable.

Collaborative Automation: Collaborative robots enable a task once performed purely by human labor to be subdivided between a robot and a human. Steps that require human dexterity and judgment are performed by the human, whereas steps that require precision and repeatable positioning are relegated to the robot. For collaborative automation to work, human safety is paramount. This is currently achieved by collaborative robots that can sense external force and stop if it exceeds safe levels for human interaction.
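
That stop-on-contact behavior can be sketched as a simple monitoring loop. The function read_external_force() below is a hypothetical stand-in for a real controller interface, and the 150 N limit is an illustrative figure, not a certified safety value:

```python
import random

# Minimal sketch of the stop-on-contact behavior described above. The
# function read_external_force() is a hypothetical stand-in for a real
# controller interface, and the 150 N limit is illustrative only.
FORCE_LIMIT_N = 150.0

def read_external_force() -> float:
    """Stand-in for the arm's intrinsic force/torque sensing."""
    return random.uniform(0.0, 200.0)    # simulated reading in newtons

def control_cycle() -> bool:
    """One monitoring cycle: continue motion only while force is safe."""
    force = read_external_force()
    if force > FORCE_LIMIT_N:
        print(f"Contact at {force:.0f} N exceeds limit: protective stop")
        return False
    return True

while control_cycle():
    pass                                  # motion program keeps running
```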

"While vision technologies are effectively bridging the gap to meet unique customer demands, it is important for manufacturers to understand the diversity of vision capabilities, making note of system application strengths and challenges before implementation"

Beyond the intrinsic force sensing built into the robot arm, progress is being made on 3D perception systems that monitor human-robot interaction and automatically slow or stop the robot based on human-robot proximity. These systems are much more flexible than laser scanners, as they allow much closer human-robot interaction. Deployment cost and safety certification of such systems will be a significant challenge to their commercial success.
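
Such speed-and-separation monitoring reduces to scaling the robot's speed override by the measured human-robot distance, as in the sketch below; the zone sizes are illustrative assumptions, not values from a safety standard:

```python
import numpy as np

# Sketch of speed-and-separation monitoring: scale the robot's speed
# override down as a tracked human approaches, reaching a protective stop
# inside the stop zone. Zone radii are illustrative, not standard values.
STOP_DISTANCE_M = 0.5        # inside this radius: protective stop
FULL_SPEED_DISTANCE_M = 2.0  # beyond this radius: full programmed speed

def speed_override(robot_xy: np.ndarray, human_xy: np.ndarray) -> float:
    d = float(np.linalg.norm(robot_xy - human_xy))
    if d <= STOP_DISTANCE_M:
        return 0.0                                   # protective stop
    if d >= FULL_SPEED_DISTANCE_M:
        return 1.0                                   # no slowdown
    # Linear ramp between the stop zone and the full-speed zone.
    return (d - STOP_DISTANCE_M) / (FULL_SPEED_DISTANCE_M - STOP_DISTANCE_M)

robot = np.array([0.0, 0.0])                         # robot base, floor plane
for x in (3.0, 1.2, 0.3):                            # human walking closer (m)
    override = speed_override(robot, np.array([x, 0.0]))
    print(f"human at {x:.1f} m -> speed override {override:.2f}")
```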

Conclusion: Vision technology is constantly improving, especially in resolution and speed. While cameras will always vary in resolution and performance, the main point to consider is that the right vision system depends heavily on the size of the part and, if applicable, the size of the bin. Easy software tools and macros can also ease the development of some applications and improve performance. Understanding the diversity and capabilities of the available technology will make implementation easier.
