
Yes, according to Karpathy. The cars upload data (typically still images), which is then labeled by humans and used as training data to improve the models.


The impression I got from GTC a few years ago was that they're probably running synthetic graphics simulations based on edge cases identified from real-world data.

You don't really need 1,000 real examples of a scenario if one still image provides enough information to mimic it in simulation (e.g., a toppled barrel with weathered paint and a reflective stripe, in front of an exit guard rail).
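The idea described here is essentially domain randomization: take one observed edge case and perturb scene parameters to mint many synthetic training variants. A minimal sketch of that loop, assuming a renderer that consumes scene-parameter dicts (all parameter names here are hypothetical, not from any real pipeline):

```python
import random

def generate_variants(seed_scenario, n=1000, rng=None):
    """Generate n randomized variants of a single observed edge case.

    Each variant perturbs scene parameters (pose, weathering, lighting,
    camera position) so that one real-world image can seed many
    synthetic training examples. Parameter names are illustrative.
    """
    rng = rng or random.Random(0)
    variants = []
    for _ in range(n):
        v = dict(seed_scenario)
        v["object_yaw_deg"] = rng.uniform(0, 360)        # toppled-barrel orientation
        v["paint_wear"] = rng.uniform(0.0, 1.0)          # weathered paint
        v["stripe_reflectivity"] = rng.uniform(0.2, 1.0) # reflective stripe condition
        v["sun_elevation_deg"] = rng.uniform(-5, 60)     # dawn through midday lighting
        v["camera_offset_m"] = rng.uniform(-1.5, 1.5)    # lane-position jitter
        variants.append(v)
    return variants

# One real sighting becomes a thousand simulated ones.
seed = {"object": "toppled_barrel", "location": "exit_guard_rail"}
variants = generate_variants(seed, n=1000)
```

Each dict would then be handed to the simulator/renderer; the labels come for free because the scene is constructed, not captured.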


I don’t have the link, but watch Karpathy’s recent talk on this topic on YouTube. He goes into detail about how this works. I’m not sure what their approach was years ago, but it has changed significantly since.



