Early on, it was recognized that the addition of perception could unleash the potential of industrial robots as truly flexible machines able to work in semi-structured environments, reducing the need to teach the robot every position it must target. This led to the introduction of Vision Guided Robotics (VGR), which works by finding the target object and telling the robot where it is located.
VGR allows the robot to work more efficiently in an unpredictable environment. It encompasses the technologies used to recognize, process, and handle parts and objects based on visual data, captured with equipment such as cameras, lenses, and lighting, plus the software required to provide reference points to the robot. While vision technologies are effectively bridging the gap to meet unique customer demands, manufacturers should understand the diversity of vision capabilities, noting each system's application strengths and challenges before implementation.
2D Vision: 2D vision is a well-established technology for VGR. Early deployments struggled with limited software availability, sensitivity to lighting variation, and high camera hardware costs. However, significant improvements have come with smart cameras that offer built-in structured lighting and plug-and-play integration with industrial robots. 2D vision works best in structured environments where objects are positioned or stacked in an organized, predictable pattern, so they can be easily imaged and picked.
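The core 2D task of finding where a known part sits in an image can be illustrated with template matching. The sketch below is a minimal, assumption-laden NumPy implementation of normalized cross-correlation on a synthetic image; real smart cameras use far faster, rotation-tolerant geometric pattern matching, and the part pattern and image here are invented for illustration.

```python
import numpy as np

def locate_part(image, template):
    """Exhaustive normalized cross-correlation: return the (row, col)
    offset where the template best matches the image, plus the score."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best_score, best_pos = -1.0, None
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.linalg.norm(wz) * tn
            if denom == 0:          # flat window: correlation undefined, skip
                continue
            score = float((wz * t).sum() / denom)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

# Synthetic scene: one 8x8 "part" pattern placed at a known offset.
tmpl = np.zeros((8, 8))
tmpl[2:6, 2:6] = 1.0
img = np.zeros((40, 40))
img[12:20, 25:33] = tmpl
pos, score = locate_part(img, tmpl)   # pos -> (12, 25)
```

The returned pixel offset, combined with a camera-to-robot calibration, is what gives the robot its pick coordinates.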
3D Vision: 3D vision enables robot applications in semi-structured and random environments. An example of a semi-structured application is bin picking, where parts are positioned with some organization and predictability to aid imaging and picking. 3D bin-picking systems are gaining popularity, and their programming has been simplified: the objects to be picked can be specified quickly by importing their CAD models. However, the cost of these systems remains high.
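At the heart of CAD-model-based picking is estimating the rigid transform that maps the model's point cloud onto the points seen in the bin. A minimal sketch of that step, assuming point correspondences are already known (real systems must also solve the harder matching problem, typically with ICP or feature descriptors), is the classic Kabsch/SVD least-squares fit:

```python
import numpy as np

def estimate_pose(model, scene):
    """Least-squares rigid transform (R, t) mapping model points onto
    scene points, assuming correspondences are known (Kabsch method)."""
    mc, sc = model.mean(axis=0), scene.mean(axis=0)
    H = (model - mc).T @ (scene - sc)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # correct an improper reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = sc - R @ mc
    return R, t

# Synthetic check: rotate and shift a small "CAD" cloud, recover the pose.
rng = np.random.default_rng(0)
model = rng.random((30, 3))
theta = 0.5
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
scene = model @ Rz.T + np.array([0.1, 0.2, 0.3])
R, t = estimate_pose(model, scene)
```

Once `R` and `t` are known, the grasp points defined on the CAD model can be transformed into robot coordinates for picking.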
Unstructured environments, where parts lie in totally random positions and orientations or are overlapping or entangled, are the next frontier for 3D vision. Such environments are common in the food industry (e.g., slicing of chickens), in logistics (e.g., parcel handling), and in order fulfillment (e.g., kitting and each-picking). Effective solutions for random-environment applications will require big data and machine learning, as the software will have to infer results from incomplete perception data. These applications also require automatic, collision-free robot path planning.
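Collision-free path planning can be illustrated in its simplest form: searching an occupancy map for a shortest obstacle-free route. The toy breadth-first search below works on a 2D grid with an invented obstacle layout; real systems plan in the robot's multi-dimensional joint space with sampling-based planners such as RRT, but the idea of searching only collision-free states is the same.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search for a shortest collision-free path on a
    4-connected occupancy grid (1 = obstacle, 0 = free). Returns the
    list of cells from start to goal, or None if no path exists."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:                 # reconstruct path backwards
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None

# Example map: 1s are obstacles between the pick and place positions.
grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
path = plan_path(grid, (0, 0), (3, 3))
```

Every cell on the returned path is guaranteed free, which is the essential property a robot motion planner must provide.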
Inspection: Inspection systems judge the quality or fitment of parts, determining whether a part meets the standard. An example of such an application in the food industry is the inspection of pizza toppings. These systems are sometimes viewed as a "cost of doing business," since their job is to discard parts that do not meet the standard. Simple inspection systems that measure part size, verify color, and so on have long been used with robots, which are better suited than humans for quick, repetitive tasks and apply more objectivity.
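A simple size-based pass/fail check of the kind described can be sketched in a few lines. The example below is illustrative only: it assumes the part has already been segmented into a binary mask, and the area tolerances are invented numbers, not from any real standard.

```python
import numpy as np

def inspect_part(mask, min_area, max_area):
    """Toy pass/fail inspection: the segmented part (1-pixels in the
    binary mask) must cover an area within the given tolerance band."""
    area = int(mask.sum())
    return min_area <= area <= max_area, area

# A good part covers 100 pixels; a chipped part has material missing.
good = np.zeros((20, 20), dtype=int)
good[5:15, 5:15] = 1
chipped = good.copy()
chipped[5:15, 5:9] = 0

ok_good, area_good = inspect_part(good, min_area=90, max_area=110)
ok_chipped, area_chipped = inspect_part(chipped, min_area=90, max_area=110)
```

A real system would add color verification, dimensional gauging, and surface checks, but each reduces to the same pattern: measure, compare to tolerance, pass or reject.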
Complex inspection that requires human judgment is still beyond robotic automation. However, with investment in big data and machine learning, inspection will move up the value chain, and its integration with robotics will become commercially viable.
Collaborative Automation: Collaborative robots enable a task once performed entirely by human labor to be subdivided between a robot and a human. Steps that require human dexterity and judgment are performed by the human, while steps that require precision and repeatable positioning are delegated to the robot. For collaborative automation to work, human safety is paramount. This is currently achieved by collaborative robots that sense external force and stop if it exceeds safe levels for human interaction.
Beyond the intrinsic force sensing built into the robot arm, progress is being made on 3D perception systems that monitor human-robot interaction and automatically slow or stop the robot based on human-robot proximity. These systems are much more flexible than laser scanners, as they allow much closer human-robot interaction. Deployment cost and safety certification of such systems will be a significant challenge to their commercial success.
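The combined policy these two paragraphs describe, stop on excessive contact force, stop inside a protective distance, and scale speed down as a human approaches, can be sketched as a single function. All thresholds below are illustrative placeholders, not values from any safety standard; a certified system derives them from a formal risk assessment.

```python
def speed_limit(distance_m, force_n, stop_dist=0.2, slow_dist=0.8,
                max_speed=1.0, force_limit=150.0):
    """Toy collaborative-safety policy: returns the allowed robot speed
    (m/s) given human-robot separation and sensed external force.
    Thresholds are illustrative, not taken from any standard."""
    if force_n > force_limit:        # contact force too high: stop
        return 0.0
    if distance_m <= stop_dist:      # human inside protective zone: stop
        return 0.0
    if distance_m >= slow_dist:      # human far away: full speed
        return max_speed
    # In between: scale speed linearly with separation distance.
    return max_speed * (distance_m - stop_dist) / (slow_dist - stop_dist)
```

For example, with a human 0.5 m away and negligible contact force, the policy commands half speed; at 0.1 m, or under a 200 N contact force, it commands a stop.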
Conclusion: Vision technology is constantly being upgraded, especially in resolution and speed. While cameras will always vary in resolution and performance, the main factor to consider is that the right vision system for an application depends greatly on the size of the part and, if applicable, the size of the bin. Easy software tools and macros may also ease the development of some applications and improve performance. Understanding the diversity and capabilities of the available technology will make the implementation process easier.