{"id":58340,"date":"2020-08-05T14:58:38","date_gmt":"2020-08-05T05:58:38","guid":{"rendered":"https:\/\/smilegate.ai\/?p=58340"},"modified":"2020-08-14T11:23:06","modified_gmt":"2020-08-14T02:23:06","slug":"mit-driveseg","status":"publish","type":"post","link":"https:\/\/smilegate.ai\/en\/2020\/08\/05\/mit-driveseg\/","title":{"rendered":"MIT DriveSeg-data for road situation awareness research"},"content":{"rendered":"

DriveSeg is a dataset created for research on road situation awareness (e.g., for self-driving cars). Every frame of the video is semantically labeled pixel by pixel across the entire image. The labels cover 12 categories: "vehicle, pedestrian, road, sidewalk, bicycle, motorcycle, building, terrain (horizontal vegetation), vegetation (vertical vegetation), pole, traffic light, and traffic sign".

The manually annotated version provides 5,000 frames at 1080p@30Hz, and the semi-automatically annotated version provides 20,000 frames at 720p@30Hz. Even outside of autonomous driving, it should be useful for segmentation research in vision. (I plan to use it for separating specific objects from an image, replacing the background, and improving the performance of segmentation algorithms.)
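As a rough idea of how such per-pixel labels could support the background-replacement experiment mentioned above, here is a minimal Python sketch. It assumes the annotations are stored as a grayscale index mask whose class indices 0–11 follow the category order listed above; the file names and the exact mask format are assumptions for illustration, not the actual DriveSeg file layout.

```python
# Minimal sketch (not from the DriveSeg release): assumes each frame comes with a
# per-pixel class-index mask saved as a grayscale PNG, with indices 0..11 matching
# the 12 categories listed above. All file names below are hypothetical.
import numpy as np
from PIL import Image

CLASSES = [
    "vehicle", "pedestrian", "road", "sidewalk", "bicycle", "motorcycle",
    "building", "terrain", "vegetation", "pole", "traffic light", "traffic sign",
]

frame = np.array(Image.open("frame_000001.png").convert("RGB"))   # hypothetical RGB frame
mask = np.array(Image.open("frame_000001_mask.png"))              # per-pixel class indices (H, W)

# Keep only the pixels labeled "vehicle" and paint everything else with a flat
# color, i.e. a simple background replacement based on the segmentation mask.
vehicle_id = CLASSES.index("vehicle")
background = np.zeros_like(frame)
background[:] = (0, 255, 0)  # green-screen style fill

is_vehicle = mask == vehicle_id
composited = np.where(is_vehicle[..., None], frame, background)

Image.fromarray(composited.astype(np.uint8)).save("vehicle_only.png")
```

The same pattern (boolean mask per class, then `np.where` compositing) extends to extracting any of the other 11 categories or to swapping in a real background image instead of a flat color.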
