Lyft is Tapping its Drivers to Collect Data for Improving its Self-Driving Vehicles and Build HD Maps

Author: Eric Walz   

The robust software powering today's self-driving vehicles is not static; it's continuously evolving and being refined with AI and machine-learning algorithms so that autonomous vehicles (AVs) can better handle unexpected driving scenarios.

However, doing this requires massive amounts of real-world data to train AI-powered autonomous driving systems so they can be improved over time. Ride-hailing company Lyft is turning to its drivers for help in gathering this data. It turns out that the simple dash cams used by Lyft's drivers are an ideal way to collect data while driving through urban areas.

Some drivers on Lyft's ride-hailing platform are using small, low-cost dash cameras to collect footage of intersections, bicyclists, and pedestrians, as well as the behavior of other drivers, while out picking up passengers.

By collecting this driver data, Lyft is helping to accelerate the development of autonomous driving technology, as the data is highly valuable for improving the software and AI that power its self-driving development vehicles.

Since Lyft's service is accessible to 95% of the U.S. population, the result is one of the world's largest datasets of real-world driving scenarios.

The real-world data is used to train machine learning models so Lyft's engineering teams can better understand how human drivers behave in various traffic situations. For example, Lyft's data can be used to better predict how fast drivers travel on a particular stretch of road, or the probability that a human driver will enter an intersection when the traffic signal turns yellow. 
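As an illustration, a behavior model of this kind might be a simple logistic regression over features such as approach speed and distance to the stop line. The sketch below is purely hypothetical: the weights and feature choices are illustrative assumptions, not Lyft's actual model.

```python
import math

# Illustrative, hand-picked weights; a real model would be fit to trip data.
W_SPEED = 0.12      # faster approach -> more likely to enter on yellow
W_DISTANCE = -0.09  # farther from the stop line -> more likely to stop
BIAS = -1.0

def prob_enters_on_yellow(speed_mps: float, distance_m: float) -> float:
    """Logistic model: probability a driver enters the intersection on yellow."""
    z = W_SPEED * speed_mps + W_DISTANCE * distance_m + BIAS
    return 1.0 / (1.0 + math.exp(-z))

# A fast car close to the stop line is far more likely to proceed
# than a slower car with plenty of room to stop.
print(round(prob_enters_on_yellow(20.0, 8.0), 2))   # high probability
print(round(prob_enters_on_yellow(8.0, 40.0), 2))   # low probability
```

Fit to millions of real trips rather than hand-picked weights, a model in this spirit could supply the kind of predictions the article describes.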

The data can also be used to produce highly detailed 3D maps, which are an essential tool for self-driving vehicles to navigate with.

Using Driver Data to Improve HD Maps 

Using rideshare data, Lyft was able to build city-scale 3D geometric maps using technology developed by Blue Vision Labs, an augmented reality software company acquired by Lyft in 2018.

High-definition maps contain much more detailed information than traditional maps. For instance, the maps used by self-driving vehicles contain the exact position of each lane and traffic light information for intersections, such as whether an intersection has a traffic light that allows vehicles to make protected left turns. The HD maps even include elevation data and the locations of street signs.
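To make the idea concrete, the kind of information described above could be represented roughly as follows. The field names and structure here are illustrative assumptions, not an actual HD map format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point3D = Tuple[float, float, float]  # x, y, elevation in meters

@dataclass
class TrafficLight:
    position: Point3D
    allows_protected_left: bool = False  # has a protected left-turn signal?

@dataclass
class Lane:
    lane_id: str
    centerline: List[Point3D] = field(default_factory=list)  # sampled 3D points

@dataclass
class Intersection:
    lanes: List[Lane] = field(default_factory=list)
    traffic_lights: List[TrafficLight] = field(default_factory=list)
    street_signs: List[Tuple[str, Point3D]] = field(default_factory=list)

# Example: an intersection with one lane and a protected-left signal.
ix = Intersection(
    lanes=[Lane("northbound_1", [(0.0, 0.0, 12.5), (0.0, 10.0, 12.6)])],
    traffic_lights=[TrafficLight((1.5, 12.0, 18.0), allows_protected_left=True)],
    street_signs=[("stop", (-2.0, 8.0, 14.0))],
)
print(len(ix.lanes), ix.traffic_lights[0].allows_protected_left)
```

Note how lane geometry, signal semantics, signs, and elevation all live in one record, which is what lets an AV reason about an intersection before it ever sees it.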

Lyft generates this information from its rideshare data by using a combination of 3D computer vision and machine learning to automatically identify traffic objects from the camera feed, such as other vehicles, pedestrians and road signs.
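A heavily simplified sketch of that kind of pipeline is shown below; `detect_objects` is a stand-in stub for a real vision model, and the class names and confidence threshold are assumptions for illustration, not Lyft's system.

```python
from dataclasses import dataclass
from typing import List

# Object classes the mapping pipeline cares about (illustrative set).
TRAFFIC_CLASSES = {"vehicle", "pedestrian", "road_sign", "traffic_light"}

@dataclass
class Detection:
    label: str
    confidence: float
    box: tuple  # (x, y, width, height) in pixels

def detect_objects(frame) -> List[Detection]:
    """Stand-in stub for a trained vision model; real detections would
    come from running a CNN detector on the dash-cam frame."""
    return [
        Detection("vehicle", 0.97, (120, 80, 200, 150)),
        Detection("tree", 0.88, (10, 5, 60, 300)),
        Detection("pedestrian", 0.91, (400, 90, 40, 120)),
    ]

def traffic_objects(frame, min_conf: float = 0.5) -> List[Detection]:
    """Keep only confident detections of traffic-relevant classes."""
    return [d for d in detect_objects(frame)
            if d.label in TRAFFIC_CLASSES and d.confidence >= min_conf]

for det in traffic_objects(frame=None):
    print(det.label, det.confidence)
```

In a production system, the filtered detections would then be localized in 3D and merged into the map rather than simply printed.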

All of this information helps Lyft's engineers understand how drivers behave in risky situations, like a driver running a particular red light or failing to properly yield.

Lyft said it has mapped thousands of miles using the wide geographic coverage of the vehicles on its ride-sharing network, and it continuously updates its maps with data logged each time a ride is completed.

While mapping operations teams can build 3D maps for AVs, keeping them up to date is a challenge without continuously updated information. For example, a lane closure due to construction needs to be included in the 3D maps and pushed out to Lyft's autonomous vehicles for navigation. By using real data from drivers that traverse the area, Lyft can better train its AVs to navigate through the hazard.

In addition to helping Lyft build 3D geometric maps, rideshare network data helps the company better understand human driving patterns. Using visual localization technology, Lyft can track, with a greater level of accuracy, the real-world trajectories its drivers follow when making turns or traveling in a lane.

This helps Lyft's self-driving vehicles maintain the optimal position in their lane based on human driving patterns, so the software more closely mimics how drivers navigate urban road environments.

The data also helps Lyft's autonomous vehicles to better handle aggressive drivers and those who don't always obey road rules. 

For example, if drivers are constantly being cut off near a busy intersection where traffic merges, the data can be used to tune the software to anticipate this behavior and determine an appropriate deceleration profile, instead of slamming on the brakes after being cut off by an aggressive driver.
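As a toy illustration of why anticipation matters (the numbers here are assumptions, not Lyft's planner), the constant deceleration needed to shed a given closing speed follows from the kinematics relation v² = 2ad:

```python
def required_decel(closing_speed_mps: float, gap_m: float) -> float:
    """Constant deceleration (m/s^2) needed to shed the closing speed
    within the available gap, from v^2 = 2*a*d."""
    return closing_speed_mps ** 2 / (2.0 * gap_m)

# Closing at 5 m/s on a merging car: anticipating the cut-in lets braking
# start 50 m out; reacting only once cut off might leave just 2 m.
early = required_decel(5.0, 50.0)  # 0.25 m/s^2 -> barely noticeable
late = required_decel(5.0, 2.0)    # 6.25 m/s^2 -> near emergency braking
print(early, late)
```

The same closing speed demands a 25x harder braking effort when the reaction comes late, which is exactly the difference a learned anticipation model can buy.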

This enables Lyft's AVs to respond more safely in similar situations and behave more like a human driver, lessening the anxiety of passengers who will one day ride in Lyft's self-driving vehicles.

For developers of self-driving vehicles, building highly detailed maps and better understanding human driving behavior are critical. This data-driven approach can help accelerate AV development, allowing developers to better address the motion-planning challenges of navigating an urban environment.

With this approach, Lyft is not relying solely on previous AV trips or computer simulation environments to determine how its vehicles should behave. Instead, the company is leveraging real data from one of the largest ride-sharing networks in the world.

Lyft is not alone in its efforts. Waymo, which spun out of Google's self-driving car project, is also sharing its vehicle data with researchers. 

In August 2019, Waymo released its "Waymo Open Dataset" for researchers and developers working on autonomous driving and other related mobility projects. Waymo says its dataset is the largest, richest, and most diverse self-driving dataset ever released for research purposes. 

Like Lyft's, Waymo's data was collected by a fleet of self-driving vehicles, which traveled over 10 million miles in 25 different cities.

The dataset includes high-resolution sensor data covering a wide variety of environments, including dense urban areas and suburban streets. That data was also collected in a wide variety of real-world conditions, including day and night, bright sunlight and rain.

Waymo's own engineers use the same dataset to develop self-driving technology and innovative machine learning models and algorithms. With the release of the dataset, engineers outside of Waymo are getting access to the same data Waymo uses for the first time.

Eric Walz
Originally hailing from New Jersey, Eric is an automotive and technology reporter covering the high-tech industry in Silicon Valley. He has over 15 years of automotive experience and a bachelor's degree in computer science. These skills, combined with technical writing and news reporting, allow him to fully understand and identify new and innovative technologies in the auto industry and beyond. He has worked at Uber on self-driving cars and as a technical writer, helping people understand and work with technology.