Vision-guided robotics (VGR) is fast becoming an enabling technology for automating processes across a wide range of industries. Object recognition technology identifies different items based on their three-dimensional geometry. Whether the process involves loose, mixed or irregular parts, sacks or bags, equipping a robot with a 3D Area Sensor provides an efficient solution that can be quickly adapted to handle different products.
Typical applications for 3D Area Sensors include: de-palletising materials (including mixed boxes, sacks, bags and food packaging); bin picking of loose random parts, irregular items, and irregularly shaped sacks or packaging; and sorting, placing and loading picked items into machines. Automating such processes is an effective way of increasing productivity and reducing costs across a vast range of general material handling applications. Even setups involving dirty, dusty or rusty products and/or difficult lighting conditions can benefit from such efficiencies. A range of grippers based on mechanical, magnetic or vacuum principles is available to handle a vast range of materials.
The FANUC 3D Area Sensor is available in two versions: the 3DA/400 for smaller applications (400 x 300 x 300 mm) and the 3DA/1300 for larger applications (1340 x 1000 x 1000 mm). Both are easy to install – no interfaces to external devices or a PC are necessary – and setup wizards are available to reduce system commissioning time.
FANUC's 3D Area Sensor uses structured light projection to create 3D maps of its surroundings. Rather than locating part features by matching them across multiple cameras, the system projects a structured light pattern onto the scene, and each camera locates the features that the pattern creates on the object. Typically, this is performed over a large area, with many 3D points detected in each snap. The result is a point cloud: a dense array of X-Y-Z locations across the 3D scene being imaged.
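To make the snap-to-point-cloud idea concrete, here is a minimal Python sketch. It assumes a hypothetical depth map (one measured depth per camera pixel, with `None` where the projected pattern was not detected); the function name, pixel pitch and data layout are illustrative and not part of FANUC's actual API.

```python
# Hypothetical sketch: each camera pixel that sees the projected structured-light
# pattern is triangulated into one X-Y-Z point; the collection is the point cloud.

def snap_to_point_cloud(depth_map, pixel_pitch_mm=1.0):
    """Convert a grid of measured depths (mm) into a list of (x, y, z) points.

    depth_map: 2D list of depths; None marks pixels where no pattern was seen.
    pixel_pitch_mm: illustrative scale from pixel index to millimetres.
    """
    cloud = []
    for row, depths in enumerate(depth_map):
        for col, z in enumerate(depths):
            if z is not None:  # pattern detected at this pixel
                cloud.append((col * pixel_pitch_mm, row * pixel_pitch_mm, z))
    return cloud

# A tiny 2 x 3 snap: four pixels saw the pattern, two did not.
depth_map = [
    [120.0, None, 118.5],
    [121.2, 119.9, None],
]
cloud = snap_to_point_cloud(depth_map)
print(len(cloud))  # 4 valid 3D points from this snap
```

In a real sensor the snap covers a far larger area and yields many thousands of points per image, but the principle is the same: every detected pattern feature contributes one X-Y-Z entry to the cloud.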
Using these maps, the system searches for parts. The part manager then evaluates the candidates and decides which part to pick, taking reach distance and collision avoidance into account before choosing the fastest picking option. If the part manager determines that a pick has been unsuccessful, or the part queue contains no part to pick, another image is taken and the process starts again with the new results. 3D vision systems that generate point clouds are particularly useful for VGR applications because multiple parts can be located simultaneously. Multi-tasking background processing, in which part detection takes place while the robot is moving and does not interrupt the workflow, means that shorter cycle times can be achieved.
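The part-manager decision described above can be sketched as a simple filter-and-rank step. This is an illustrative Python model, not FANUC's actual controller logic: the candidate fields (`reach_mm`, `collision`, `cycle_s`) and the reach limit are assumptions made for the example.

```python
# Illustrative sketch of the part-manager decision: discard candidates that are
# out of reach or would cause a collision, then pick the fastest remaining one.

def choose_pick(candidates, max_reach_mm):
    """Return the best pick candidate, or None if a fresh image is needed.

    candidates: list of dicts with 'reach_mm', 'collision', 'cycle_s' keys.
    """
    feasible = [c for c in candidates
                if c["reach_mm"] <= max_reach_mm and not c["collision"]]
    if not feasible:
        return None  # no pickable part: take another image and retry
    return min(feasible, key=lambda c: c["cycle_s"])  # fastest picking option

parts = [
    {"id": 1, "reach_mm": 900,  "collision": False, "cycle_s": 2.1},
    {"id": 2, "reach_mm": 1400, "collision": False, "cycle_s": 1.5},  # too far
    {"id": 3, "reach_mm": 800,  "collision": True,  "cycle_s": 1.2},  # would hit bin
]
best = choose_pick(parts, max_reach_mm=1200)
print(best["id"])  # part 1 is the fastest feasible pick
```

Returning `None` when nothing is feasible mirrors the behaviour described above: the system simply takes a new image and repeats the evaluation with fresh results.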
Teaching the robot new paths is easy. The 3D Area Sensor can be programmed on the shop floor via FANUC's familiar iRVision graphical interface on the iPendant Touch, allowing a bin picking application to be set up in a matter of minutes.
One robot can service up to four 3D Area Sensors. In bin picking applications, the 3DA/1300 Area Sensor can be top-mounted on a rail driven by an auxiliary axis, allowing the robot to directly control the movement of the sensor. Mounting the sensor this way gives the robot two bins to work from: as soon as the robot recognises that one bin is empty, it automatically switches to the other, saving the downtime that would otherwise be required for an operator to change the bin manually.
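The two-bin switching behaviour can be modelled with a few lines of Python. The function name and the bin bookkeeping are hypothetical, used only to illustrate the idea of automatically swapping bins rather than waiting for an operator.

```python
# Sketch of the two-bin logic: when the active bin is found empty, switch to
# the other bin; only when both are empty is operator intervention needed.

def next_bin(active, bin_counts):
    """Return the bin index to pick from next, or None if both bins are empty.

    active: index of the bin currently being worked (0 or 1).
    bin_counts: dict mapping bin index -> number of parts remaining.
    """
    if bin_counts[active] > 0:
        return active              # keep working the current bin
    other = 1 - active             # two bins: indices 0 and 1
    return other if bin_counts[other] > 0 else None

bins = {0: 0, 1: 7}            # bin 0 just emptied, bin 1 still holds parts
print(next_bin(0, bins))       # switches to bin 1 with no operator downtime
```

While the robot works one bin, an operator can refill the other, so the cell keeps running without the changeover pause a single-bin layout would require.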
The 3D Area Sensor can be used in cleanroom-certified applications in electronics and pharmaceuticals, as well as for food handling. A cleanroom robot equipped with the iRVision 3DA/400 Area Sensor can, for example, locate and pick randomly orientated bottle caps from a bin: the sensor provides the 3D location of each cap, and the robot picks the caps and places them in a second bin at high speed. iRVision's Interference Avoidance feature prevents the robot and its tooling from coming into contact with the bin walls.
For more information: www.fanuc.eu