
The iPhone Added Lidar Before Tesla

LiDAR works by emitting directional beams of invisible light that bounce off nearby objects and return to the sensor. For each beam, the unit measures the time of flight, the time it takes the pulse to leave the emitter and reach the receiver, and uses that round-trip time to calculate how far away the reflecting object is. By processing all of these measurements, the unit builds a three-dimensional point cloud, a digital map of its surroundings.
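
The arithmetic behind that is simple enough to sketch. The Swift snippet below is a generic illustration of the idea rather than any vendor’s actual processing: halve the round-trip time to get a range, then combine the range with the beam’s direction to get one point of the cloud. The timing value and beam angles are made up for the example.

```swift
import Foundation

let speedOfLight = 299_792_458.0  // meters per second

/// One-way range from a round-trip time: the beam travels out and back,
/// so the distance to the object is half the round trip.
func range(fromRoundTripTime t: Double) -> Double {
    speedOfLight * t / 2.0
}

/// One point of the cloud: combine a range with the beam's direction
/// (azimuth and elevation angles, in radians) to get x, y, z.
struct Point3D { let x, y, z: Double }

func point(range r: Double, azimuth: Double, elevation: Double) -> Point3D {
    Point3D(x: r * cos(elevation) * cos(azimuth),
            y: r * cos(elevation) * sin(azimuth),
            z: r * sin(elevation))
}

// A return that took about 33.4 nanoseconds corresponds to roughly 5 meters.
let r = range(fromRoundTripTime: 33.4e-9)
print(r, point(range: r, azimuth: 0.1, elevation: -0.05))
```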

For the iPhone 12, the LiDAR sensor is worked into Apple’s ARKit to supplement its existing vision-based augmented reality features. The added depth data can make camera autofocus quick and accurate, and it helps place AR content, like Snapchat’s dancing hot dog, on real-world surfaces with better accuracy.
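
For the curious, this is roughly what opting into that data looks like on the developer side. The snippet below is a minimal sketch of an ARKit session configuration that requests LiDAR-backed scene depth on devices that support it; it is illustrative, not a complete AR app.

```swift
import ARKit

// Minimal sketch: request LiDAR-backed scene depth where the hardware supports it.
func makeConfiguration() -> ARWorldTrackingConfiguration {
    let config = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
        // Each ARFrame will then carry a per-pixel depth map alongside the
        // camera image, which is what helps pin virtual objects to real surfaces.
        config.frameSemantics.insert(.sceneDepth)
    }
    return config
}

// Usage inside an AR view controller: session.run(makeConfiguration())
```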

But for vehicles with partial automation, LiDAR can help improve the precision of self-driving decisions. Unlike the iPhone’s static LiDAR unit, vehicles used for autonomy testing typically carry large rotating units mounted to their roofs; Waymo calls its own solution the Laser Bear Honeycomb. These sensors supplement vision-based systems by adding a layer of data on top of what the cameras can process. For example, if it’s too dark for a camera to pick up certain objects, the LiDAR unit can still recognize that something is in its path.
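
To make that “layer of data” idea concrete, here is a toy fusion check, purely a sketch of my own rather than anything Waymo or another developer actually ships: even when a camera detector reports nothing in the dark, a small cluster of lidar returns inside the vehicle’s forward corridor is still enough to flag an obstacle.

```swift
import Foundation

struct LidarReturn { let x, y: Double }  // x: meters ahead of the car, y: meters left/right

/// Toy fusion rule: either sensor can raise the flag. A camera detection is enough
/// on its own, and so is a small cluster of lidar returns inside the forward corridor.
func obstacleAhead(cameraSawObject: Bool,
                   lidarReturns: [LidarReturn],
                   corridorHalfWidth: Double = 1.5,
                   lookahead: Double = 40.0) -> Bool {
    let hitsInPath = lidarReturns.filter {
        $0.x > 0 && $0.x < lookahead && abs($0.y) < corridorHalfWidth
    }
    return cameraSawObject || hitsInPath.count >= 3
}

// Night-time case: the camera sees nothing, but lidar still gets returns
// from something about 25 meters ahead, so the check reports an obstacle.
let returns = [LidarReturn(x: 25.0, y: 0.2),
               LidarReturn(x: 25.1, y: -0.1),
               LidarReturn(x: 24.9, y: 0.0)]
print(obstacleAhead(cameraSawObject: false, lidarReturns: returns))  // true
```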

A prime example of this is the fatal 2018 crash in Tempe, Arizona, between a partially automated Uber test vehicle and a pedestrian. The LiDAR unit, supplied by Velodyne, picked up the presence of 49-year-old Elaine Herzberg six seconds before the collision. However, the software that processed the LiDAR unit’s data did not determine that emergency braking was necessary to avoid a collision until 1.3 seconds before impact.
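
A rough back-of-the-envelope check shows why 1.3 seconds leaves essentially no margin. The speed and braking figures below are round numbers assumed purely for illustration, not values from the investigation.

```swift
import Foundation

// Back-of-the-envelope: how far does the car travel in the remaining warning time,
// versus how far it needs to come to a complete stop?
let speed = 18.0         // m/s, roughly 40 mph (assumed for illustration)
let deceleration = 7.0   // m/s^2, hard braking on dry pavement (assumed)
let warningTime = 1.3    // seconds between the braking decision and impact

let gapAtDecision = speed * warningTime                    // ~23.4 m to the pedestrian
let stoppingDistance = speed * speed / (2 * deceleration)  // ~23.1 m to reach zero

print(gapAtDecision, stoppingDistance)
// Even instantaneous, maximum braking barely fits in that gap; any actuation
// delay at all makes the collision unavoidable.
```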

Likewise, LiDAR can pick up the presence of stationary objects. Tesla’s Autopilot has drawn criticism for failing to recognize stopped vehicles, including emergency vehicles and overturned trucks. So while a camera-only approach is certainly more affordable, the risk of missing such objects because of lighting, weather, or any number of other factors seems inherently large for a company that supposedly plans to launch a million robotaxis by the end of the year.

And this isn’t the first example of Tesla’s Autopilot being fooled by what its cameras see. A great example is the Model S that was tricked into accelerating by black tape slapped onto a speed limit sign. You can read more about that here, but the point is the same: a visual cue caused the vision-based software to behave unexpectedly. Remember: if you can trick a human, chances are you can also trick a camera.

Musk has historically been anti-LiDAR; however, it would be naive to disregard the shortcomings of vision-based autonomy in its current state. Does this mean Tesla can’t solve its problems with more mature software? No, and the company is looking to do exactly that by launching testing of its rewritten “Full Self-Driving” suite as early as next week. But it seems reasonable to expect that, as partial autonomy grows, a hybrid of LiDAR and vision-based systems may be the key to rapid development.

At any rate, know that your next phone may have a key feature that a Tesla doesn’t.

Got a tip? Send us a note: tips@thedrive.com

