Xilinx & Motovis Partner on a Hardware and Software Solution for Autonomous Vehicle Cameras Powered by a Neural Network

Author: Eric Walz   

Semiconductor manufacturer Xilinx Inc. (NASDAQ: XLNX), the inventor of the field programmable gate array (FPGA), and Motovis, a provider of embedded AI for advanced driver assist systems (ADAS) and autonomous driving, announced a new collaboration on a perception system built on the Xilinx Automotive Zynq system-on-chip (SoC) that supports forward-facing cameras for autonomous driving.

The advanced hardware supplied by Xilinx is further supported with convolutional neural network (CNN) IP developed by Motovis. Together, the hardware and software serve as a base layer for vision processing and perception.

The two companies are targeting the $2.4 billion automotive forward camera market by providing carmakers with an AI-powered hardware and software solution that supports robust computer vision processing and shortens time to market.

It opens up new business opportunities for Xilinx in the auto industry, as the company has traditionally supplied only programmable chips to automakers and Tier 1 suppliers. With this new product, the hardware comes bundled with deep-learning software from Motovis.

In the auto industry, forward-facing cameras support ADAS and automated driving features including lane-keeping assistance (LKA), automatic emergency braking (AEB), automated valet parking and adaptive cruise control (ACC). 

The hardware and deep-learning software developed by Xilinx and Motovis supports image resolutions up to eight megapixels. 

"This collaboration is a significant milestone for the forward camera market as it will allow automotive OEMs to innovate faster," said Ian Riches, vice president for the Global Automotive Practice at Strategy Analytics. "The forward camera market has tremendous growth opportunities, where we anticipate almost 20 percent year-on-year volume growth over 2020 to 2025."

Motovis' neural network IP running on the Xilinx Zynq system-on-chip (SoC) results in a cost-effective solution that offers low-latency image processing, flexibility for different applications, and scalability, so automakers can pair it with a camera and add it to production vehicles.

"Motovis' expertise in embedded deep learning and how they've optimized neural networks to handle the immense challenges of forward camera perception puts us both in a unique position to gain market share, all while accelerating our OEM customers' time to market," said Willard Tu, Senior Director of Automotive, Xilinx. "We're thrilled to bring this to life and drive the industry forward."

Motovis was founded in 2015 and is based in Shanghai. The company has developed an entire suite of deep-learning and computer vision algorithms that help autonomous vehicles navigate safely. More importantly, the company's vision-based algorithms are designed to be lean and run efficiently on compact embedded processors such as the Xilinx Zynq SoC, requiring far less compute power.

The partnership is the latest example of a tech company making inroads in the auto industry. Xilinx, based in San Jose, California, is globally recognized as the inventor of the FPGA and adaptive SoCs. Like rivals NVIDIA and Intel, Xilinx is pushing into the automotive space as demand increases for high-performance SoCs that can support automated driving functions.

As vehicles become electrified and more like "computers on wheels" with capabilities such as autonomous driving, automakers are adding more high-tech hardware and software to their vehicles to power these features. For example, many vehicles today are equipped with radar, cameras, and soon lidar, to support ADAS or autonomous highway driving. 

As a result, automakers are turning to tech companies like Xilinx and Motovis with expertise in the fields of computer vision, AI, and software and semiconductors to add advanced capabilities to their vehicles. As the needs of automakers evolve, much of this technology is being developed by companies that were traditionally outside of the auto industry.

To support its CNNs, Motovis has collected 4.4 million miles of real world road data in 30 provinces and autonomous regions in China over three years. The data is labeled and used to continuously train deep-learning algorithms to better identify what a camera sensor captures.

The Motovis IP also supports Visual Simultaneous Localization and Mapping (vSLAM), the complex process of calculating a vehicle's position and orientation from its surroundings while simultaneously mapping the environment in 3D using only visual input from a camera.

Using only inputs from the camera, vSLAM technology can generate maps which autonomous vehicles can use to determine their precise position and safely navigate.
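The localization half of that idea can be sketched in a few lines: a visual front end estimates the camera's motion between consecutive frames, and chaining those relative motions tracks the vehicle's global pose. The sketch below is a minimal illustration using numpy and made-up per-frame motion estimates; in a real vSLAM pipeline these motions would be derived from feature matching between camera images and refined with loop closure.

```python
import numpy as np

def make_pose(yaw, tx, ty):
    """Build a 2D homogeneous transform (rotation by yaw + translation)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0,  0,  1]])

# Hypothetical per-frame relative motions: drive 1 m forward, turn 90
# degrees while advancing 1 m, then drive 1 m forward again.
relative_motions = [make_pose(0.0, 1.0, 0.0),
                    make_pose(np.pi / 2, 1.0, 0.0),
                    make_pose(0.0, 1.0, 0.0)]

# Chain the relative motions to track the vehicle's global pose.
pose = np.eye(3)
for motion in relative_motions:
    pose = pose @ motion

x, y = pose[0, 2], pose[1, 2]
print(round(x, 3), round(y, 3))  # → 2.0 1.0
```

Each estimated motion is composed onto the running pose by matrix multiplication, which is why drift in any single estimate accumulates; full SLAM systems correct this by recognizing previously mapped landmarks.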

The FPGAs pioneered by Xilinx offer a more flexible and programmable hardware platform for automakers.

The solution scales across Xilinx's 28nm and 16nm XA Zynq family of SoCs using a combination of optimized hardware and software with customizable CNNs that host Motovis' deep learning networks. The result is a perception solution at different performance levels and price points, depending on the application.

"We are extremely pleased to unveil this new initiative with Xilinx and to bring to market our CNN forward camera solution. Customers designing systems enabled with AEB and LKA functionality need efficient neural network processing within an SoC that gives them flexibility to implement future features easily," said Dr. Zhenghua Yu, CEO, Motovis.

"For the first time, OEMs and Tier-1 suppliers can now layer their own feature algorithms on top of Motovis' perception stack to differentiate and future-proof their designs," said Dr. Yu.

Xilinx also has other partners in the automotive space. In September 2020, the company teamed up with automotive supplier Continental to create the first production-ready 4D radar for autonomous vehicles. Continental's new advanced radar sensor (ARS), the ARS540, was built on the Xilinx Zynq UltraScale+ multiprocessor system-on-chip (MPSoC).

In June, Xilinx announced its latest hardware product for automotive, robotics and aerospace applications called the "Versal AI Edge." The Versal AI Edge series is the newest member of the company's Versal Adaptive Compute Acceleration Platform (ACAP).

ACAP is a fully software-programmable compute platform that achieves dramatic performance improvements of up to 20X over today's fastest FPGA implementations and over 100X over today's fastest CPU implementations for automotive ADAS applications, according to Xilinx. 

Xilinx and Motovis will be speaking at the "Xilinx Adapt 2021" virtual event on September 15. The event will feature executive keynotes with appearances from partners and customers, along with a series of presentations designed to showcase Xilinx's adaptive computing solutions.

Eric Walz
Originally hailing from New Jersey, Eric is an automotive and technology reporter covering the high-tech industry in Silicon Valley. He has over 15 years of automotive experience and a bachelor's degree in computer science. These skills, combined with technical writing and news reporting, allow him to fully understand and identify new and innovative technologies in the auto industry and beyond. He has worked at Uber on self-driving cars and as a technical writer, helping people understand and work with technology.