How Tesla's Autopilot works

Recently, Andrej Karpathy, Head of Artificial Intelligence (AI) on Tesla’s Autopilot Team, offered several interesting insights on how Autopilot works. Read ARK Invest's key takeaways.


April 27, 2020

In a recent video, Andrej Karpathy, Head of Artificial Intelligence (AI) on Tesla's Autopilot team, offered several interesting insights into how Autopilot works. First, he detailed how Tesla trains Autopilot on data collected from its customer fleet, focusing on long-tail examples such as stop signs painted on buildings or occluded by tree branches. To improve Autopilot's handling of these "corner cases," Tesla pushes a software detector to its fleet of more than 800,000 vehicles to identify images of, say, occluded stop signs. In contrast, GM's Cruise Automation and Waymo, whose fleets number in the hundreds rather than the hundreds of thousands and operate in a handful of cities rather than nationwide, have limited access to corner-case data for training their vehicles.
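Karpathy has described this fleet-query mechanism only at a high level, and Tesla's implementation is not public. As a minimal illustrative sketch, assuming the deployed "detector" works as a trigger that flags uncertain detections for upload and labeling (all names and thresholds here are hypothetical, not Tesla's actual code):

```python
# Hypothetical sketch of a fleet-side "trigger" for corner-case mining.
# Labels, scores, and thresholds are illustrative assumptions.

def occluded_stop_sign_trigger(detections, min_conf=0.3, max_conf=0.7):
    """Flag frames where a stop-sign detection is uncertain --
    a rough proxy for occlusion or unusual appearance."""
    for det in detections:
        if det["label"] == "stop_sign" and min_conf <= det["score"] <= max_conf:
            return True  # queue this frame for upload and human labeling
    return False

# A partially occluded sign often yields a mid-confidence score:
frame = [
    {"label": "stop_sign", "score": 0.55},  # uncertain -> worth collecting
    {"label": "car", "score": 0.98},
]
print(occluded_stop_sign_trigger(frame))  # True
```

The key design idea is that the filtering happens on the car, so only the rare, interesting frames consume upload bandwidth and labeling effort.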

Karpathy also explained how Tesla is bridging the gap between cameras and LiDAR. We previously heard about an Autopilot update that moved from 2D image labeling to 3D video labeling, enabling faster, more accurate object detection and path planning: Tesla's answer to the LiDAR gap. LiDAR measures depth directly and more accurately than cameras do. Cameras work in a two-step, indirect process: they capture 2D images, and software estimates depth by analyzing the pixels. Mistakes of a few pixels can translate into meters of inaccuracy. By labeling 3D video of driving scenes, Tesla is compensating for the weaknesses of the camera as the primary sensor in its vehicles.
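The pixels-to-meters sensitivity can be made concrete with the standard stereo-vision depth formula, z = f x B / d (this is a textbook calculation, not Tesla's actual pipeline, which is monocular and learned, but the error behavior is of the same character). Focal length and baseline below are assumed values:

```python
# Why a few pixels of error become meters of depth error:
# standard stereo-vision depth from disparity (illustrative only).

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth z = f * B / d for a rectified stereo camera pair."""
    return focal_px * baseline_m / disparity_px

f = 1000.0  # focal length in pixels (assumed)
B = 0.3     # camera baseline in meters (assumed)

z_true = depth_from_disparity(f, B, 6.0)  # true disparity -> 50 m
z_off = depth_from_disparity(f, B, 4.0)   # a 2-pixel mistake

print(round(z_true, 1), round(z_off, 1))  # 50.0 75.0 -> a 25 m error
```

Because depth varies inversely with disparity, the same pixel error that is negligible at close range balloons at the distances that matter for highway driving.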

Karpathy also discussed Tesla's local mapping, which includes far less detail than the high-definition maps its competitors use. While a Waymo vehicle drives with preloaded information about the exact location of a stop sign, accurate to within centimeters, a Tesla detects only that a stop sign is present somewhere in the vicinity. Together with its camera-based approach, this combination of coarse local maps and 3D video labeling separates Tesla from its competitors, enabling it to recognize corner cases on the path to full autonomy.
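The difference between the two mapping philosophies can be sketched as a difference in data: an HD-map entry carries a surveyed pose the car can rely on, while a coarse hint leaves pose estimation to live perception. The record shapes below are hypothetical, neither Waymo's nor Tesla's actual formats:

```python
# Illustrative contrast between an HD-map prior and a coarse map hint.
# Field names and values are invented for the sketch.

hd_map_stop_sign = {
    "type": "stop_sign",
    "position_m": (4251.31, 1087.45),  # surveyed, centimeter-level pose
    "heading_deg": 92.5,
}

coarse_map_hint = {
    "type": "stop_sign",
    "near_intersection": "Main St / 3rd Ave",  # rough vicinity only
}

def needs_live_detection(map_entry):
    """Without a surveyed pose, the car must detect the sign's
    exact position itself, in real time, from its cameras."""
    return "position_m" not in map_entry

print(needs_live_detection(coarse_map_hint))   # True
print(needs_live_detection(hd_map_stop_sign))  # False
```

The trade-off: the HD-map approach requires re-surveying every road it serves, while the coarse approach scales to any road at the cost of demanding much more from onboard perception.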

We believe Tesla's approach is highly differentiated and would be almost impossible for a competitor to replicate. While autonomous driving is an extremely complex problem, Tesla could enjoy a near-monopoly in autonomous ride-hailing if it succeeds.

 

Catherine Wood, CEO and CIO of ARK Invest pitched Tesla at the 2019 Sohn Hearts & Minds Conference.

----

ARK's statements are not an endorsement of any company or a recommendation to buy, sell or hold any security. For a list of all purchases and sales made by ARK for client accounts during the past year that could be considered by the SEC as recommendations, click here. It should not be assumed that recommendations made in the future will be profitable or will equal the performance of the securities in this list. For full disclosures, click here.


Disclaimer: This material has been prepared by ARK Invest, published on Apr 27, 2020. HM1 is not responsible for the content of linked websites or content prepared by third party. The inclusion of these links and third-party content does not in any way imply any form of endorsement by HM1 of the products or services provided by persons or organisations who are responsible for the linked websites and third-party content. This information is for general information only and does not consider the objectives, financial situation or needs of any person. Before making an investment decision, you should read the relevant disclosure document (if appropriate) and seek professional advice to determine whether the investment and information is suitable for you.
