
Informed consent for a paying user is inconvenient?


Did you read the ToS?


Hiding something like this in the ToS rather than explicitly asking users to opt in is a dark pattern. You can't gain the moral high ground by cackling that someone should have read the fine print.


This is technology, some politics, capitalism, and math trained on curiously gained data; where does non-selfish morality come in?


All ToS essentially boil down to "we owe you nothing and can change the product at any time, to anything we want, at our sole discretion."

Obviously it would be unreasonable to accept such terms without further context. The further context in this case is that Anthropic will maintain Claude as an AI agent and seek to improve its performance. At the heart of this issue is whether or not Anthropic's recent A/B testing violated that context, not whether or not they violated the ToS (they didn't, obviously).


Ultimately that just sounds like, within their own ToS, they were working to get the best operational results.

If you wanted something more deterministic, write it yourself or get it verified; as far as I know, no hosted LLM does either.


I read the article as saying they were testing service changes on paying users without the users' knowledge or explicit consent, so that users had to experiment themselves to determine why they perceived their service had changed.

That is a dark pattern to inflict on users who expect consistent output.


Does anyone?



