ADAS Computer Vision and Augmented Reality Solution
Sergii Bykov, Technical Lead, Luxoft
www.luxoft.com


  1. ADAS Computer Vision and Augmented Reality Solution. Sergii Bykov, Technical Lead. Luxoft Automotive.

  2. Product Vision

  3. Road To Autonomous Driving. Source: Intel

  4. Representation For The Driver. The output is extensible metadata that describes all augmented objects/hints and supports toggling natural features ON/OFF.
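
The slides do not specify this metadata format; a minimal sketch of what such a representation could look like, with all field and class names being illustrative assumptions:

```python
from dataclasses import dataclass, field


@dataclass
class AugmentedObject:
    """One augmented object/hint to render (hypothetical schema)."""
    kind: str               # e.g. "nav_arrow", "lane_boundary", "sign"
    geometry: list          # screen- or world-space points
    natural: bool = False   # True for natural features that can be toggled off
    attributes: dict = field(default_factory=dict)


@dataclass
class FrameMetadata:
    """Per-frame output of the AR subsystem (hypothetical schema)."""
    timestamp_us: int
    objects: list = field(default_factory=list)
    natural_features_on: bool = True

    def visible_objects(self):
        """Objects to render, honoring the natural-features ON/OFF switch."""
        if self.natural_features_on:
            return self.objects
        return [o for o in self.objects if not o.natural]
```

Because the schema is extensible, a renderer only needs to understand the `kind` values it supports and can ignore unknown attributes.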

  5. Computer Vision and Augmented Reality Applications. Application areas: city driving, active park search, next generation of adaptive cruise, augmented navigation, help in low-visibility mode, sign recognition, infographics. Underlying disciplines: pattern recognition, image processing, signal processing, maths, physics, artificial intelligence.

  6. Key Challenges Of Bringing AR Into The Vehicle
  • Usability: the augmented reality subsystem must not distract the driver, since it is continuously in view
  • Hardware limitations: computation, power consumption, zero latency (HUD)
  • Need for a precise environmental model to avoid occlusion errors
  • Dependency on inaccurate map and navigation data
  • Distributed HW architectures and platform flexibility requirements
  • High-precision absolute and relative positioning requirements
  • Component synchronization and latency avoidance
  • Embedded memory usage limits and differing memory models
  • Algorithms must be both configurable and efficient
  • Specific rendering requirements not covered by general-purpose frameworks
  • Variety of inputs across different platforms
  • Out-of-vehicle simulation (unlike classical navigation, natural simulation is not supported)

  7. Framework Concept

  8. Augmented Navigation Structure
  We offer a unique solution capable of creating augmented, mixed visual reality for drivers and passengers, based on computer vision, vehicle sensors, map data, telematics, and navigation guidance, using a data fusion technique.
  Inputs to the CVNAR solution: sensors/CAN, telematics/V2X, automotive cameras, navigation system/map data.
  Outputs: vehicle displays, projection on the windshield, smart glasses, VR devices.

  9. CVNAR Features
  Road scene recognition and object tracking: road boundaries; lane detection; vehicle detection and tracking; distance and time-to-collision estimation; pedestrian detection and tracking; facade recognition and texture extraction; road sign recognition; parking slot recognition.
  Natural augmented reality: basic vehicle data; lanes and road boundaries; road signs and cautions; navigation data and hints; facade highlights; parking places; narrow street infographics; street names and complex junction boards; POI and OEM-specific information.
  Positioning: precise relative and absolute positioning; flexible data fusion and smooth map matching; automotive-constrained SLAM; extrapolation engines for latency avoidance.
  Integration and fusion: sensor data; external positioner data (optional); pupil tracking; machine learning and deep learning; external recognition engine integration; telematics (V2I and V2V).
  Highlights: CPU, GPU; OS: Linux, QNX; HW: Intel, NVidia, TI, Renesas.
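
The time-to-collision estimation listed among the features can be illustrated with the standard constant-closing-speed formula (a simplification; a production system would also account for acceleration and measurement noise):

```python
def time_to_collision(distance_m, ego_speed_mps, lead_speed_mps):
    """TTC in seconds under a constant-closing-speed assumption.

    Returns None when the gap is not closing (no collision predicted).
    """
    closing_speed = ego_speed_mps - lead_speed_mps
    if closing_speed <= 0:
        return None
    return distance_m / closing_speed
```

For example, a 30 m gap closing at 10 m/s gives a TTC of 3 s, which an AR layer could surface as a caution hint.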

  10. Hardware Approach: Computer Vision Box
  • Quick-install demonstration solution
  • Platform for CVNAR (keeps it portable)
  • Web interface
  • Integration with head units and vehicle networks
  • Uses its own sensors if needed
  CVB components: CVNAR Engine, CVB Data Layer, Head Unit SW Update, Configuration, Diagnostics.
  Data flow: live data from the vehicle (CAN data, video stream) plus navigation data and preprocessed sensor data feed the CVB; the CVB outputs a video stream with augmented objects to the HUD/LCD, exchanging ADAS control/settings with the head unit.

  11. Hardware Approach: Automotive Stack
  Our own scalable and robust automotive stack, aimed at minimizing project start-up and integration time:
  • RTOS (OSEK, Micrium, mTRON), microcontrollers (Renesas RH850/V850)
  • Hardware Abstraction Layer (HAL), Operating System Abstraction Layer (OSAL)
  • Trace server/client, watchdog, IPC, drivers (SPI, I2C, UART, timers, etc.), SW update
  • Pre-integrated Vector CAN/diagnostic stack
  • Vehicle bootloader for Renesas microcontrollers
  Domains and areas:
  • System development and integration with automotive networks and ECUs
  • Driver development and peripheral support: video cameras, automotive sensors, external HW
  • HW bring-up (BSP development)
  Automotive-grade technologies supported by the team:
  • Networks: CAN, LIN, MOST, BroadR-Reach, Ethernet
  • RTOS: OSEK, mTRON, Micrium, embOS, QNX, Linux, VxWorks
  • Microcontrollers: Renesas (RH850, V850), Freescale (Bolero, i.MX6), TI (OMAP, Jacinto, MSP430)
  • CAN stacks: Vector, KPIT, our own tinyCAN
  • Audio/video processing
  • Media buses: LVDS, FPD-Link III, APIX2, USB, Ethernet, BroadR-Reach

  12. Perception Concept

  13. Sensor Fusion: Data Inference
  We stated and solved the problem of optimal fusion-filter parameter adjustment to fit different car models with different chassis geometries and steering wheel models/parameters.
  Features:
  • Absolute and relative positioning
  • Dead reckoning
  • Fusion with available automotive-grade sensors: GPS, steering wheel angle, steering wheel rate, wheel sensors
  • Fusion with navigation data
  • Support for rearward movement
  • Identification of complex steering wheel models; ability to integrate with provided models
  • GPS error correction
  • Stability and robustness in difficult conditions: tunnels, urban canyons
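
The slides do not show the fusion filter itself; below is a minimal sketch of dead reckoning with a complementary-filter GPS correction, a simplified stand-in for the Kalman-style filter a production system would use. All function names and the fixed gain are illustrative assumptions:

```python
import math


def dead_reckon(x, y, heading, speed, yaw_rate, dt):
    """Advance the pose by integrating odometry over one time step."""
    heading += yaw_rate * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading


def fuse_gps(predicted_pos, gps_pos, gain=0.3):
    """Blend the dead-reckoned position with a GPS fix.

    A constant gain is used here for clarity; a Kalman filter would
    derive the gain from the predicted and measured covariances.
    """
    return tuple(p + gain * (g - p) for p, g in zip(predicted_pos, gps_pos))
```

When GPS drops out (tunnels, urban canyons), the same loop simply keeps dead reckoning until the next fix arrives, which matches the robustness requirements listed above.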

  14. Sensor Fusion: Advanced Augmented Objects Positioning
  Solves map accuracy problems.
  Position clarification: camera motion model (video-based gyroscope), positioner, road model, map data.
  Placing: road model, vehicles.
  Component detection and object tracking.

  15. Sensor Fusion: Comparing Solutions
  Our solution: update frequency ~15 Hz (+ extrapolation at any fps).
  Reference solution: update frequency ~4-5 Hz.

  16. Lane Detection: Adaptability and Confidence

  17. Lane Detection: 3D-Scene Recognition Pipeline
  • Inputs: single camera, stereo data, point clouds
  • Low-level invariant features
  • Structural analysis, probabilistic models
  • Real-world features, physical objects
  • 3D scene reconstruction, road situation
  • 3D space scene fusion (input from different sensors)
  • Backward knowledge propagation from high levels
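
The layered pipeline above can be sketched as a chain of stages, each consuming the previous level's output, with a hook for the backward knowledge propagation the last bullet mentions. Structure only; the stage bodies are placeholders, and all names are assumptions:

```python
def run_pipeline(frame, stages, feedback=None):
    """Run recognition stages bottom-up.

    `feedback` carries high-level knowledge back into low-level stages
    on the next frame (backward knowledge propagation).
    """
    data = frame
    trace = []
    for stage in stages:
        data = stage(data, feedback)
        trace.append(data)
    return data, trace


# Placeholder stages: each just wraps the data under its level's key.
def low_level_features(d, fb):
    return {"features": d, "prior": fb}


def structural_analysis(d, fb):
    return {"structures": d}


def scene_reconstruction(d, fb):
    return {"scene": d}
```

On each new frame, the caller would pass the previous frame's high-level result as `feedback`, letting the 3D scene bias where low-level features are searched for.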

  18. Lane Detection: Additional Information
  • Feature database: low-level screen (3D) features to refine position; point clouds; marking details and road borders; high-level structural elements and real-world objects (junctions, facades, signs, etc.)
  • Feature collection: existing map providers; real-time feature extraction and understanding from video sensors; satellite-view photo analysis
  • Map database updates: offline route processing and upload

  19. Lane Detection: HD Map
  Potential content: simplified and advanced geometry for roads, traffic lanes, lane boundaries, etc.
  Useful information: road border geometry and type; traffic signs (position and type); traffic lights (position and type); type and quality of roadbed; roadside POIs (gas station, store, café, etc.); possible junction maneuvers; any other features useful for vehicle positioning or the driver.
  Applications: precise on-road vehicle positioning in different weather and traffic situations; map matching and path planning; maneuver suggestions; cable navigation; junction assistance.

  20. Lane Detection: Robustness in Normal Conditions
  Graphs show the error in meters between recognized lanes (curved model) and recognized road marking (distance to detected features plus feature accuracy) over different distance ranges and road conditions.
  Figure 1. Regular weather, highway, slightly curved road with lane changes.
  Figure 2. Rainy weather, highway, straight road with lane changes.

  21. Lane Detection: Robustness in Difficult Conditions
  Figure 3. Bright sun, highway with secondary roads, straight road with turns and lane changes.
  Figure 4. Hard rain, highway, straight road with lane changes.

  22. Vehicle Detection
  • Convolutional neural network for vehicle detection
  • GPU acceleration: CUDA
  • Runs in real time on NVidia Jetson TK1
  • Inference speedup on embedded (TK1) GPU vs CPU: ~3x
  • Training speedup on desktop GPU vs CPU: ~20x
  • Classifier accuracy (about 50k images, 960x540, ~55-60 deg HFOV):
  • Positive: 99.65%
  • Negative: 99.82%
  • Detection size down to 30 px, detection range about 60 m
  Figure: vehicle detection examples.
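
The positive/negative accuracy figures quoted above are per-class rates (true-positive rate and true-negative rate); how they follow from confusion counts, using hypothetical counts chosen only to reproduce the quoted percentages:

```python
def per_class_accuracy(tp, fn, tn, fp):
    """Per-class accuracy from confusion counts.

    Returns (positive accuracy, negative accuracy), i.e. the true-positive
    rate and true-negative rate of the classifier.
    """
    positive_acc = tp / (tp + fn)
    negative_acc = tn / (tn + fp)
    return positive_acc, negative_acc
```

With 9965 of 10000 vehicle samples and 9982 of 10000 non-vehicle samples classified correctly (illustrative counts), this yields 99.65% and 99.82%.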

  23. Road Scene Semantic Segmentation
  • Deep fully convolutional neural network for semantic pixel-wise segmentation
  • Road scene understanding use cases: modeling appearance, shape, and spatial relationships between classes
  • Inference speedup GPU vs CPU: ~3x
  Figure: road scene segmentation examples.
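
After the network produces per-class score maps, pixel-wise segmentation reduces to an argmax over classes at each pixel. A minimal sketch in plain Python (a real pipeline would do this on GPU tensors; the input format here is an assumption):

```python
def segment(score_maps):
    """Assign each pixel the class with the highest score.

    score_maps: dict mapping class name -> 2D list (H x W) of scores.
    Returns an H x W list of class-name labels.
    """
    classes = list(score_maps)
    height = len(score_maps[classes[0]])
    width = len(score_maps[classes[0]][0])
    return [
        [max(classes, key=lambda c: score_maps[c][y][x]) for x in range(width)]
        for y in range(height)
    ]
```

For a 1x2 image where "road" scores (0.9, 0.2) and "car" scores (0.1, 0.8), the label map is [["road", "car"]].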
