Hacker News

As someone who is woefully uninformed about ML and CV: in a case like this, is the computer simply given the bitmap/vector graphic and left to learn how to interpret the data as lines and shapes? Or is there some preprocessing done to transform the data into something more "ML-friendly"?


In deep learning, you can feed the raw pixels directly into the network, and it learns to interpret the data as lines and shapes.
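To give a rough intuition for how a network "sees" lines in raw pixels: the early layers of a convolutional net learn small filters that slide over the image, and trained filters in those layers often end up resembling edge detectors. A hand-written sketch (the fixed Sobel-style filter here is illustrative; in a real net the filters are learned):

```python
# Illustrative only: one 3x3 convolution over raw pixels with a fixed
# vertical-edge filter. Learned early-layer filters often look similar.

def conv2d(img, kernel):
    """Valid (no-padding) 2D convolution on nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = sum(img[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# Raw pixels: a vertical bright stripe in columns 2-3.
img = [[0, 0, 1, 1, 0, 0] for _ in range(5)]
sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
edges = conv2d(img, sobel_x)
# Strong positive/negative responses mark the stripe's left/right edges.
```

The filter responds strongly at the stripe's boundaries and not in flat regions, which is the sense in which the network extracts "lines" from pixels without any hand-built preprocessing.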

The only preprocessing is likely size reduction: the QuickDraw canvas is quite high resolution, so it's probably scaled down (with just linear interpolation).
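That downscaling step can be sketched in plain Python. This is a generic bilinear resize, not the actual QuickDraw pipeline; the function name and sizes are made up for illustration:

```python
# Hypothetical sketch: shrinking a large canvas to a small grid with
# bilinear (linear) interpolation before feeding pixels to a network.

def bilinear_downscale(img, out_h, out_w):
    """Resize a 2D list of floats with bilinear interpolation."""
    in_h, in_w = len(img), len(img[0])
    out = []
    for i in range(out_h):
        # Map each output coordinate back into the input grid.
        y = i * (in_h - 1) / (out_h - 1) if out_h > 1 else 0.0
        y0 = int(y)
        y1 = min(y0 + 1, in_h - 1)
        fy = y - y0
        row = []
        for j in range(out_w):
            x = j * (in_w - 1) / (out_w - 1) if out_w > 1 else 0.0
            x0 = int(x)
            x1 = min(x0 + 1, in_w - 1)
            fx = x - x0
            # Blend the four surrounding input pixels.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

# A 4x4 gradient scaled down to 2x2: the corner values are preserved.
img = [[float(r * 4 + c) for c in range(4)] for r in range(4)]
small = bilinear_downscale(img, 2, 2)
```

In practice you'd use a library resize (e.g. Pillow's `Image.resize` with `Image.BILINEAR`) rather than writing this by hand.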


Thanks. Come to think of it, I remember seeing a series of deep learning examples that were trained on raw data and were able to create well-formed TeX, XML, and other formats, so it makes sense that the same principle would apply to image data.



