Hacker News

> Asking the models to determine if my code is equivalent to what they reverse engineered resulted in a nuanced and thorough examination, and eventual conclusion that it is equivalent.

Did you actually run the implementation to see if it works out of the box?

Also, if you are a free user, or you accepted that your chats can be used for training, then maybe o1 was just trained on your previous chats and so now knows how to reason about that particular type of problem.



That is an interesting thought. This was all done in an account that is opted out of training though.

I have tested the Python code o1 created to decode the timestamps and it works as expected.
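The actual format o1 reverse engineered isn't shown in the thread, so as a hypothetical illustration only, here is the kind of decoder such an exercise typically produces: this sketch assumes the timestamps are Windows FILETIME values (100-nanosecond intervals since 1601-01-01 UTC), a common encoding in binary formats. The function name and the epoch choice are my assumptions, not the OP's code.

```python
import datetime

def decode_timestamp(raw: int) -> datetime.datetime:
    # Hypothetical example: interpret the raw value as a count of
    # 100-nanosecond intervals since the Windows FILETIME epoch
    # (1601-01-01 UTC). The real format from the thread is not shown.
    epoch = datetime.datetime(1601, 1, 1, tzinfo=datetime.timezone.utc)
    # Integer-divide by 10 to convert 100 ns units to microseconds,
    # which datetime.timedelta accepts directly.
    return epoch + datetime.timedelta(microseconds=raw // 10)

# A value around 1.335e17 lands in early 2024.
print(decode_timestamp(133500000000000000))
```

Any reverse-engineered decoder like this is easy to validate the way the OP describes: decode a few known samples and check they land on plausible dates.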


That's not how LLM training works.


So it is impossible to use free users' chats to train models??



