Must-See Video: Google’s self-driving car keynote at last month’s Embedded Vision Summit

by Xilinx Employee on 06-10-2014 10:00 AM (1,713 Views)

Nathaniel Fairfield, Technical Lead at Google, gave the keynote at last month’s Embedded Vision Summit West and spoke about self-driving cars. The Google Self-Driving Car project was created to rapidly advance autonomous driving technology based on laser sensors, cameras, and radar coupled with a detailed, highly annotated, and constantly updated map of the world. Google's self-driving cars have now traveled nearly a million miles autonomously.

In his deeply informative, hour-long talk, Fairfield discusses the cars’ capabilities in detail, Google's overall approach to solving a huge number of diverse driving problems (weather, modeling the expectations of other drivers, lane-splitting motorcycles, occluding objects like other vehicles, hidden pedestrians dashing into view, railroad crossings, getting permission to put a self-driving car on the road, etc.), and the remaining challenges to be resolved (like squirrels, snow, and no room on the car’s roof for a ski rack because of the rooftop laser). Fairfield also takes an enlightening detour to talk about the challenges of machine vision with some interesting and novel revelations such as the use of QR codes for instant locality identification.

Fairfield also talks about the recently announced Google autonomous electric vehicle, which is the direct result of this research. It’s got a soft nose to reduce pedestrian injuries, a speed governor set to 25mph, and no steering wheel. It’s an entirely new take on solving urban transportation problems.

The video ends with 20 minutes of informative, not-to-miss Q&A. The first question was about computational load. No surprise: sensor fusion/analysis and vision processing are the big computational loads.

Fairfield’s keynote was representative of the high-quality material presented at the recent Embedded Vision Summit West. You can see the full presentation on the Embedded Vision Alliance page.

About the Author
  • Steve Leibson is the Director of Strategic Marketing and Business Planning at Xilinx. He started as a system design engineer at HP in the early days of desktop computing, then switched to EDA at Cadnetix, and subsequently became a technical editor for EDN Magazine. He has served as Editor in Chief of EDN Magazine, Embedded Developers Journal, and Microprocessor Report. He has extensive experience in computing, microprocessors, microcontrollers, embedded systems design, design IP, EDA, and programmable logic.