Mobileye Sensing Status and Road Map


  1. November 2019. Mobileye Sensing Status and Road Map. Dr. Gaby Hayon, EVP R&D

  2. The Challenge of Sensing for the Automotive Market. ME sensing has three demanding customers: (1) ADAS products working everywhere, at all conditions, on millions of vehicles; (2) a smart agent for harvesting, localization and dynamic information for the REM-based map; (3) the sensing state for ME policy under the strict rule of independency and redundancy.

  3. True redundancy: surround computer vision + Radar/Lidar sub-system

  4. ME's AD Perception: surround computer vision, comprehensive environment model

  5. Comprehensive CV Environmental Model. Full and unified surround coverage of all decision-relevant environment elements. These are generally grouped into 4 categories: Road Geometry (RG): all driving paths, explicitly/partially/implicitly indicated, their surface profile and surface type. Road Boundaries (RB): any delimiter of the drivable area, its 3D structure and semantics, both laterally delimiting elements (FS) and longitudinal ones (general objects/debris). Road Users (RU): 360-degree detection and inter-camera tracking of any movable road user, and the actionable semantic cues these users convey (light indicators, gestures). Road Semantics (RS): road-side directives (TFL/TSR), on-road directives (text, arrows, stop-lines, crosswalks) and their DP association.
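
The grouping above maps naturally onto a container type. A minimal Python sketch, assuming illustrative class and field names (not Mobileye's actual data model):

```python
# Illustrative sketch only: class and field names are assumptions, not
# Mobileye's actual environment-model interfaces.
from dataclasses import dataclass, field
from typing import List


@dataclass
class DrivingPath:          # Road Geometry (RG): explicit/partial/implicit paths
    centerline_xyz: List[tuple]
    surface_type: str = "asphalt"


@dataclass
class RoadBoundary:         # Road Boundaries (RB): lateral (FS) and longitudinal delimiters
    polyline_xyz: List[tuple]
    semantic: str = "curb"  # e.g. curb, guardrail, debris


@dataclass
class RoadUser:             # Road Users (RU): 360-degree tracked movable agents
    track_id: int
    position_xyz: tuple
    semantic_cues: List[str] = field(default_factory=list)  # light indicators, gestures


@dataclass
class RoadSemantic:         # Road Semantics (RS): TFL/TSR, arrows, stop-lines, crosswalks
    kind: str
    associated_path_id: int  # DP association


@dataclass
class EnvironmentModel:
    """Unified surround environment model grouped into the 4 categories."""
    road_geometry: List[DrivingPath] = field(default_factory=list)
    road_boundaries: List[RoadBoundary] = field(default_factory=list)
    road_users: List[RoadUser] = field(default_factory=list)
    road_semantics: List[RoadSemantic] = field(default_factory=list)
```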

  6. Robust CV Environmental Model. Multiple independent visual-processing engines overlap in their coverage of the 4 categories (RG, RB, RU, RS) to satisfy extremely low nominal failure frequencies of the CV sub-system. Texture engines, for example: Object detection DNNs (RU); Lanes detection DNN (RG); Semantic Segmentation engine (RB, RU, RS); Wheels DNN (RU); Road Semantic Networks (RS). Structure engines, for example: single-view Parallax-net elevation map (RB, RU); Multi-view Depth network (RB, RU); Generalized-HPP (VF) (RG).
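
The engine-to-category mapping above is reconstructed from the slide's flattened table, so treat it as an assumption. A small sketch of the redundancy property the slide describes, namely that every category should be covered by more than one independent engine:

```python
# Sketch of a redundancy check over the engine-to-category mapping listed on
# the slide; the exact mapping is reconstructed and should be treated as an
# assumption, not Mobileye's definitive engine inventory.
CATEGORIES = {"RG", "RB", "RU", "RS"}

ENGINE_COVERAGE = {
    "object_detection_dnns": {"RU"},
    "lanes_detection_dnn": {"RG"},
    "semantic_segmentation": {"RB", "RU", "RS"},
    "wheels_dnn": {"RU"},
    "road_semantic_networks": {"RS"},
    "parallax_net_elevation": {"RB", "RU"},
    "multi_view_depth": {"RB", "RU"},
    "generalized_hpp": {"RG"},
}


def redundancy_report(coverage, min_engines=2):
    """Return, per category, the engines covering it and whether the coverage
    meets the minimum redundancy requirement."""
    report = {}
    for cat in CATEGORIES:
        engines = sorted(e for e, cats in coverage.items() if cat in cats)
        report[cat] = {"engines": engines, "redundant": len(engines) >= min_engines}
    return report


if __name__ == "__main__":
    for cat, info in redundancy_report(ENGINE_COVERAGE).items():
        print(cat, info)
```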

  7. Actionable CV Environmental Model. Support of different driving decisions and planning requires extraction of an additional, essential set of contextual cues: ▪ Longitudinal and lateral driving plans/decisions: Overtake: is the vehicle an obstacle? Lane change: "give-way"/"take-way" labeling of objects; assessment of objects' likely trajectories given the scene. ▪ VRU-related drive planning: pedestrian trajectory, intentions (head/body pose), relevance, vulnerability and host-path access. ▪ Environmental limitations: visibility range, blockage, occlusions/view range, road friction. ▪ Safe-stop possibility: is the road shoulder drivable? Is it safe to stop? ▪ Emergency/enforcement response: emergency vehicle/personnel detection, gesture recognition.
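
As a toy illustration of the "give-way"/"take-way" cue for a lane change (thresholds, names and logic are assumptions, not Mobileye's policy):

```python
# Illustrative heuristic only: thresholds and names are assumptions, not
# Mobileye's actual give-way / take-way logic.
def label_give_take_way(gap_m: float, closing_speed_mps: float,
                        time_gap_threshold_s: float = 2.0) -> str:
    """Label an object in the target lane for a lane-change decision.

    gap_m:             longitudinal gap between ego and the object.
    closing_speed_mps: rate at which that gap is shrinking (positive = closing).
    """
    if closing_speed_mps <= 0.0:
        return "take-way"          # gap is stable or opening
    time_to_close = gap_m / closing_speed_mps
    return "take-way" if time_to_close > time_gap_threshold_s else "give-way"


print(label_give_take_way(gap_m=30.0, closing_speed_mps=5.0))   # take-way
print(label_give_take_way(gap_m=6.0, closing_speed_mps=5.0))    # give-way
```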

  8. Visual Perception Environment Model Elements

  9. Road Users

  10. Road Users. 360-degree detection and inter-camera tracking of any movable road user, and the actionable semantic cues these users convey (light indicators, gestures). On top of the standalone object-detection networks running on all cameras, 2 dedicated 360° stitching engines have been developed to assure completeness and coherency of the unified objects map: • Vehicle signature • Very close (part-of) vehicle in FOV: face & limits. (Figures: "Full Image Detection" raw signal; "Full Image Detection" output, short-range precise detection.)
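
A minimal sketch of the stitching idea, merging per-camera detections into one unified object map by 3D proximity; the greedy merge and the camera names are illustrative assumptions, not the actual stitching engines:

```python
# Sketch: stitch per-camera detections into a unified object map by proximity
# in a common vehicle frame. Greedy union for illustration only.
import math


def stitch_detections(per_camera_dets, merge_radius_m=1.5):
    """per_camera_dets: dict camera_name -> list of (x, y) positions in a
    common vehicle frame. Returns a list of merged object positions."""
    merged = []  # each entry: [sum_x, sum_y, count]
    for cam, dets in per_camera_dets.items():
        for (x, y) in dets:
            for m in merged:
                cx, cy = m[0] / m[2], m[1] / m[2]
                if math.hypot(x - cx, y - cy) < merge_radius_m:
                    m[0] += x; m[1] += y; m[2] += 1
                    break
            else:
                merged.append([x, y, 1])
    return [(sx / n, sy / n) for sx, sy, n in merged]


dets = {
    "front_main": [(12.0, 0.4)],
    "front_left": [(12.3, 0.6)],    # same vehicle seen by two cameras
    "rear":       [(-8.0, -1.2)],
}
print(stitch_detections(dets))      # two merged objects
```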

  11. Road Users

  12. Road Users

  13. Road Users. Metric physical dimensions estimation, dramatically improving measurement quality using novel methods. (Figure: Dimension net output; temporal tracker.)
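
One way a temporal tracker can stabilise per-frame dimension estimates is a confidence-weighted recursive average; the sketch below is an illustration under that assumption, not the actual Dimension net tracker:

```python
# Sketch of temporally smoothing per-frame physical-dimension estimates with a
# confidence-weighted recursive average (illustrative, not the real tracker).
class DimensionTracker:
    def __init__(self):
        self.estimate = None     # (length, width, height) in metres
        self.weight = 0.0

    def update(self, measurement, confidence):
        """Fuse a new (L, W, H) measurement with the running estimate,
        weighting by the network's confidence in this frame."""
        if self.estimate is None:
            self.estimate, self.weight = measurement, confidence
            return self.estimate
        w_new = self.weight + confidence
        self.estimate = tuple(
            (self.weight * old + confidence * new) / w_new
            for old, new in zip(self.estimate, measurement)
        )
        self.weight = w_new
        return self.estimate


trk = DimensionTracker()
for meas, conf in [((4.6, 1.9, 1.5), 0.5), ((4.4, 1.8, 1.4), 0.9)]:
    print(trk.update(meas, conf))
```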

  14. Road Users. Wheels: an RU part (relatively regular in shape) that we deliberately detect to affirm vehicle detections, 3D position, and tracking for high-function customers.

  15. Road Users. ▪ The semantic segmentation provides evidence of all road users, redundant to the dedicated networks. ▪ It also captures extremely small visible fragments of road users; these may potentially be used as scene-level contextual cues.

  16. Road Users – open door. An open car door is uniquely classified, as it is extremely common, critical, and has no ground intersection.

  17. Road Users - VRU. Baby strollers and wheelchairs are detected through a dedicated engine on top of the highly mature pedestrian detection system.

  18. Road Users - VRU. Baby strollers and wheelchairs are detected through a dedicated engine on top of the highly mature pedestrian detection system.

  19. Road Boundaries. Surround-view stitched SR FS occupancy grid: ▪ Fusion of the free-space signal from 4 parking cameras and the front camera. ▪ Main usages: a very accurate signal for handling crowded scenes, and a redundancy layer for object detection, specifically general objects such as containers, cones, carts, etc. ▪ Comparing the known scene (road edges and detected objects) with the occupancy grid; the differences are marked and reported as unknown objects.
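
A minimal sketch of the fusion-and-diff logic described above, assuming the per-camera free-space signals are already rasterised into a common top-view grid (grid layout, camera names and fusion rule are illustrative):

```python
# Sketch only: assumes per-camera free-space masks are already rasterised into
# a common ego-centred top-view grid; sizes and names are assumptions.
import numpy as np

GRID = (200, 200)  # top-view occupancy grid, ego-centred


def fuse_free_space(per_camera_free, per_camera_observed):
    """A cell is free if any observing camera reports it free; it is occupied
    if some camera observes it but none reports it free."""
    observed = np.zeros(GRID, dtype=bool)
    free = np.zeros(GRID, dtype=bool)
    for cam in per_camera_free:
        observed |= per_camera_observed[cam]
        free |= per_camera_free[cam] & per_camera_observed[cam]
    return observed & ~free, observed


def unknown_objects(occupied, known_scene_mask):
    """Occupied cells not explained by detected objects or road edges are
    reported as unknown objects (containers, cones, carts, etc.)."""
    return occupied & ~known_scene_mask


cams = ["parking_fl", "parking_fr", "parking_rl", "parking_rr", "front"]
free = {c: np.zeros(GRID, dtype=bool) for c in cams}
observed = {c: np.ones(GRID, dtype=bool) for c in cams}
free["front"][:, :100] = True                      # toy free-space report
occupied, _ = fuse_free_space(free, observed)
print(unknown_objects(occupied, np.zeros(GRID, dtype=bool)).sum())
```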

  20. Road Users. Road-user semantics: ▪ Head/pose orientation; pedestrian posture/gesture. ▪ Vehicle light indicators; emergency vehicle/personnel classification. (Figures: emergency vehicle light indicators; pedestrian understanding.)

  21. Road Users. Pedestrian gesture understanding: "Come closer", "On the phone", "Stop!", "You can pass".

  22. Road Users. Dense structure-based object detection: • Redundant to the appearance-based engines • Reinforces detection and measurements to support a higher level of end-functions • E.g. dealing with "rear protruding" objects, which hover above the object's ground intersection.

  23. Road Users. How do we do this? DNN-based multi-view stereo: • Infers depth in the "center" view using input from the "center" and overlapping "surround" cameras • Flexibility in camera placement and orientation compared to canonical stereo-baseline camera-pair setups • Covers blind regions using, e.g., the parking camera in the front region • The learning-based approach allows finding good object-shape priors and prediction in texture-less regions • Angular resolution much higher than lidar • Provides an independent measurement and detection modality • Does not rely on manual labeling • Predicts per-pixel depth independent of lidar. (Figure: overlapping 100° camera fields of view.)
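
The geometric core of this kind of multi-view matching is reprojecting a "center"-camera pixel into a "surround" camera for each hypothesised depth. A sketch with placeholder intrinsics and extrinsics (not a real camera setup):

```python
# Geometric core of multi-view depth matching: reproject a "center"-camera
# pixel into a "surround" camera for a hypothesised depth. The intrinsics and
# extrinsics below are placeholders, not a real rig.
import numpy as np

K_center = np.array([[1000.0, 0.0, 960.0],
                     [0.0, 1000.0, 540.0],
                     [0.0, 0.0, 1.0]])
K_surround = K_center.copy()
R = np.eye(3)                       # surround camera rotation w.r.t. center (placeholder)
t = np.array([0.5, 0.0, 0.0])       # 0.5 m lateral baseline (placeholder)


def reproject(u, v, depth):
    """Back-project (u, v) at `depth` from the center camera and project the
    resulting 3D point into the surround camera."""
    ray = np.linalg.inv(K_center) @ np.array([u, v, 1.0])
    X_center = depth * ray                      # 3D point in the center frame
    X_surround = R @ X_center + t               # same point in the surround frame
    x = K_surround @ X_surround
    return x[0] / x[2], x[1] / x[2]


# Sweeping depth hypotheses shifts the matched pixel along the epipolar line;
# the network effectively learns which hypothesis is photometrically consistent.
for d in (5.0, 20.0, 80.0):
    print(d, reproject(960.0, 540.0, d))
```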

  24. Road Users

  25. Road Users DNN based multi-view stereo

  26. Road Users DNN based multi-view stereo

  27. Road Users. Leveraging the lidar processing module for stereo camera sensing: "Pseudo-Lidar". Pipeline: dense depth image from stereo cameras → high-resolution pseudo-lidar → upright obstacle 'stick' extraction → object detection.
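
The first stage of the pipeline, lifting the dense depth image into a point cloud that lidar-oriented modules can consume, can be sketched as follows (placeholder intrinsics, toy depth image):

```python
# Sketch of the "pseudo-lidar" conversion: lift a dense depth image into a 3D
# point cloud so that lidar-oriented processing can run on camera data.
# Intrinsics and the depth image here are placeholders.
import numpy as np


def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """depth: HxW array of metric depths (0 = invalid).
    Returns an Nx3 array of (X, Y, Z) points in the camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1)


depth = np.full((4, 4), 10.0)            # toy 4x4 depth image, 10 m everywhere
cloud = depth_to_pseudo_lidar(depth, fx=1000.0, fy=1000.0, cx=2.0, cy=2.0)
print(cloud.shape)                       # (16, 3)
```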

  28. Road Users. View range: knowing that you don't know. • The RSS safety envelope should not be violated even in areas with limited visibility • To ensure that, we must determine whether the reason for not detecting an object is that it does not exist or that it is occluded • The solution: creating a 360° visibility envelope and measuring the visibility range at all angles • Computed from information gathered from all cameras and the following features: free space and road edges; vehicle and pedestrian detections; the REM map and road elevation
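
A simple way to realise such an envelope is to ray-march a fused top-view free-space grid and record, per azimuth, how far the view is unobstructed. The sketch below makes that assumption; grid resolution, step size and ranges are illustrative:

```python
# Sketch of a 360-degree visibility envelope: ray-march a fused top-view
# free-space grid and record, per azimuth, the unobstructed range.
# Grid size, cell size and ray step are illustrative assumptions.
import numpy as np


def visibility_envelope(free_grid, cell_size_m, num_angles=360, max_range_m=100.0):
    """free_grid: square boolean top-view grid, True = confirmed free, with the
    ego vehicle at the grid centre. Returns the visible range per angle."""
    n = free_grid.shape[0]
    center = n // 2
    ranges = np.zeros(num_angles)
    for i in range(num_angles):
        theta = 2.0 * np.pi * i / num_angles
        r = 0.0
        while r < max_range_m:
            col = int(center + (r / cell_size_m) * np.cos(theta))
            row = int(center + (r / cell_size_m) * np.sin(theta))
            if not (0 <= row < n and 0 <= col < n) or not free_grid[row, col]:
                break
            r += cell_size_m
        ranges[i] = r            # 0 means blocked/unknown right next to ego
    return ranges


grid = np.zeros((200, 200), dtype=bool)
grid[80:120, :] = True           # a free corridor through the grid centre
print(visibility_envelope(grid, cell_size_m=0.5).max())
```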

  29. Road Users. Policy-level applications: ▪ Z-axis view range: coping with occlusions deriving from road elevation ▪ Placing "fake targets" in occluded areas that intersect with ego's planned path, assuming plausible speed and trajectory. (Figure labels: ghost target, visible range, occluded.)
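
A sketch of the "fake target" check: assume a worst-case occluded road user at the visibility boundary along the planned path and verify the gap against the published RSS longitudinal safe-distance formula (the parameter values here are assumptions):

```python
# Sketch of the "fake target" idea: place a worst-case ghost vehicle at the
# visibility boundary along the planned path and test the gap against the RSS
# longitudinal safe distance. Parameter values are illustrative assumptions.
def rss_safe_longitudinal_distance(v_rear, v_front, rho=0.5,
                                   a_max_accel=3.0, b_min_brake=4.0, b_max_brake=8.0):
    """Minimum safe gap between a rear vehicle (ego) and a front vehicle,
    following the published RSS longitudinal formula."""
    v_rho = v_rear + rho * a_max_accel
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + v_rho ** 2 / (2.0 * b_min_brake)
         - v_front ** 2 / (2.0 * b_max_brake))
    return max(d, 0.0)


def speed_is_safe_for_occlusion(ego_speed, visible_range_m, ghost_speed=0.0):
    """Place a ghost target (default: stationary) at the visibility boundary
    and check that the current gap respects the RSS envelope."""
    return visible_range_m >= rss_safe_longitudinal_distance(ego_speed, ghost_speed)


print(speed_is_safe_for_occlusion(ego_speed=15.0, visible_range_m=60.0))  # True
print(speed_is_safe_for_occlusion(ego_speed=15.0, visible_range_m=20.0))  # False
```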

  30. View range origin legend: Main, Front Narrow, Front, Front Right, Front Left, Rear Right, Rear Left, Rear

  31. Road Boundaries. Detection of any delimiter of the road surface, its 3D structure and semantics, both laterally delimiting elements (FS) and longitudinal ones (GO/debris). The Semantic Segmentation engine provides a rich, high-resolution pixel-level labeling; the SSN vocabulary is especially enriched to classify road-delimiter types: ▪ Road full-surface segmentation (Road/nRoad) ▪ Elevated ▪ Cars ▪ Bike, Bicycle ▪ Ped ▪ CA obj ▪ Guardrail ▪ Concrete ▪ Curbs ▪ Flat ▪ Snow ▪ Parking in ▪ Parking out

  32. (Figure: segmentation example. Legend: Road, Ped, Curb, Edge, General object, Flat, Car, GuardRail, Snow, Bike, Concrete)

  33. (Figure: segmentation example. Legend: Road, Ped, Curb, Edge, General object, Flat, Car, GuardRail, Snow, Bike, Concrete)

  34. Surround Road/nRoad classification

  35. Road Boundaries. Detection of any delimiter of the road surface, its 3D structure and semantics, both laterally delimiting elements (FS) and longitudinal ones (GO/debris). The Parallax Net engine provides an accurate understanding of structure by assessing residual elevation (flow) relative to the locally governing road surface (homography). It is therefore sensitive to extremely small objects and low-elevation lateral boundaries.
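
The plane-plus-parallax idea can be sketched with standard OpenCV calls: fit a homography to tracked road points across two frames, warp all tracked points with it, and treat residual displacement as evidence of elevation off the local road plane (the point data below is synthetic):

```python
# Sketch of plane-plus-parallax residual elevation: fit a homography to tracked
# road points across two frames, warp every tracked point with it, and treat
# the residual displacement as evidence of off-plane structure. Synthetic data.
import numpy as np
import cv2

# Tracked points in the previous and current frame: the first eight lie on the
# road plane, the last one belongs to an elevated object.
prev_pts = np.float32([[100, 400], [300, 400], [500, 400], [700, 400],
                       [100, 500], [300, 500], [500, 500], [700, 500],
                       [400, 300]])
curr_pts = np.float32([[102, 408], [302, 408], [502, 408], [702, 408],
                       [103, 510], [303, 510], [503, 510], [703, 510],
                       [410, 290]])                 # moves "against" the plane

# Fit the locally governing road homography from the road points only.
H, _ = cv2.findHomography(prev_pts[:8], curr_pts[:8], cv2.RANSAC)

# Warp every tracked point with the road homography and measure the residual.
warped = cv2.perspectiveTransform(prev_pts.reshape(-1, 1, 2), H).reshape(-1, 2)
residual = np.linalg.norm(curr_pts - warped, axis=1)

# Large residual flow indicates structure off the road surface (objects, curbs,
# debris); near-zero residual is consistent with the flat-road model.
print(np.round(residual, 1))
```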

  36. Debris Detection. Debris detection identifies structural deviations from the road surface. A structure-from-motion approach: geometry-based and appearance-invariant, it detects any type of hazard.

  37. Road Geometry: Road3 in production. Advanced lane applications (VW): Volkswagen Passat Travel Assist 2.0 with Mobileye camera. https://www.youtube.com/watch?v=s7HCI33KVHA
