
The architecture you describe is similar to that of Tesla's FSD. All the data/compute remains resident on the car and is not transferred to Tesla (in this case not out of concern for privacy; it's simply a matter of latency). The driver does sign up to allow Tesla to interrogate the car's data for FSD debugging and training purposes, in which case, and in connection with the specific event/accident, Tesla can identify the car (which functionally means the driver as well). Nonetheless it shows that all the compute necessary to run incredibly complex multi-domain ML processes can be localized and miniaturized (in comparison to the giant compute facilities Waymo et al. install in their cars).


Interesting! I vaguely knew that Tesla built its own silicon to handle FSD locally, but didn't realize it was such a total on-car displacement of the compute. Waymo does the same... just not as efficiently?


Right. Tesla have done an incredible job of building a powerful, integrated, fault-tolerant in-car stack with extremely low power consumption. Waymo's stack is huge, legacy, and limited (geo-fenced, so it relies on localized data with enormous numbers of mapped data points). It takes a formidable amount of power and space!
