Scene Recognition for Safety Analysis in Collaborative Robotics

This is a Master's thesis from KTH/School of Electrical Engineering and Computer Science (EECS)

Abstract: In modern industrial environments, human-robot collaboration is a growing trend in automation, adopted to improve performance and productivity. Instead of isolating robots from humans to guarantee safety, collaborative robotics allows humans and robots to work in the same area at the same time. New hazards and risks, such as collisions between robot and human, arise in this situation. Safety analysis is therefore necessary to protect both human and robot when a collaborative robot is used.

To perform safety analysis, robots need to perceive the surrounding environment in real time. The environment is perceived and stored in the form of a scene graph: a directed graph with a semantic representation of the environment, the relationships between the detected objects, and the properties of those objects. To generate the scene graph, a simulated warehouse is used, in which robots and humans work in a common area, transferring products between shelves and conveyor belts. Each robot generates its own scene graph from its attached camera sensor. In the graph, each detected object is represented by a node, and edges denote the relationships among the identified objects. Each node stores values such as velocity, bounding-box size, orientation, and the distance and direction between the object and the robot.

We generate scene graphs in the simulated warehouse scenario at a frequency of 7 Hz and present a qualitative comparison study of Mask R-CNN. Mask R-CNN is a method for object instance segmentation, used here to obtain the properties of the objects. It uses a ResNet-FPN backbone for feature extraction and adds a branch to Faster R-CNN that predicts a segmentation mask for each object; its results outperform almost all existing single-model entries on instance segmentation and bounding-box object detection. With this method, the boundaries of the detected objects are extracted from the camera images.
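The node-and-edge structure described above (objects as nodes carrying velocity, bounding-box size, orientation, and distance/direction relative to the robot; edges as relationships) could be sketched in Python roughly as follows. All class and field names here are illustrative assumptions for exposition, not the thesis's actual implementation:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class SceneObject:
    """A detected object stored as one scene-graph node (fields are assumed)."""
    label: str                       # e.g. "human", "shelf", "conveyor"
    velocity: Tuple[float, float]    # estimated velocity in the ground plane (m/s)
    bbox_size: Tuple[float, float]   # bounding-box width and height
    orientation: float               # heading angle in radians
    distance: float                  # distance from the observing robot (m)
    direction: Tuple[float, float]   # unit direction from robot to object

@dataclass
class SceneGraph:
    """A directed graph: nodes are detected objects, edges are relationships."""
    nodes: Dict[str, SceneObject] = field(default_factory=dict)
    edges: List[Tuple[str, str, str]] = field(default_factory=list)  # (src, dst, relation)

    def add_object(self, obj_id: str, obj: SceneObject) -> None:
        self.nodes[obj_id] = obj

    def relate(self, src: str, dst: str, relation: str) -> None:
        # Directed edge, e.g. ("human_1", "shelf_1", "near")
        self.edges.append((src, dst, relation))

# One graph per robot, rebuilt from each camera frame (the thesis reports ~7 Hz)
graph = SceneGraph()
graph.add_object("human_1", SceneObject("human", (0.5, 0.0), (0.6, 1.8), 0.0, 2.3, (1.0, 0.0)))
graph.add_object("shelf_1", SceneObject("shelf", (0.0, 0.0), (2.0, 2.5), 0.0, 4.0, (0.0, 1.0)))
graph.relate("human_1", "shelf_1", "near")
```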
We initialize the Mask R-CNN model using three different types of weights: COCO pre-trained weights, ImageNet pre-trained weights, and random weights, and compare the results of the three initializations with respect to precision and recall. The results show that Mask R-CNN is also suitable for simulated environments and can meet the requirements in both detection precision and speed. Moreover, the model trained with the COCO pre-trained weights outperformed the models initialized with ImageNet and randomly assigned weights. The calculated mean average precision (mAP) on the validation dataset reaches 0.949 with COCO pre-trained weights, at an execution speed of 11.35 fps.
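The comparison above is stated in terms of precision and recall. As a reminder of what those quantities measure for a detector, a minimal sketch (not the thesis's evaluation code, which also involves IoU-based matching and averaging over classes to obtain mAP):

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple:
    """Compute detection precision and recall from raw counts.

    tp: true positives  (detections matched to a ground-truth object)
    fp: false positives (detections with no matching ground truth)
    fn: false negatives (ground-truth objects the detector missed)
    """
    # Precision: what fraction of the detections were correct?
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    # Recall: what fraction of the ground-truth objects were found?
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    return precision, recall

# Hypothetical frame: 9 correct detections, 1 spurious one, 3 missed objects
p, r = precision_recall(tp=9, fp=1, fn=3)
print(p, r)  # 0.9 0.75
```

Averaging precision over recall levels gives average precision (AP) per class, and the mean over classes gives the mAP value (0.949) reported above.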
