
> a) Microsoft pays for compute. OpenAI never intended to run their own cloud.

No, Microsoft INVESTED in OpenAI and in exchange became OpenAI's exclusive cloud provider. That doesn't mean Microsoft pays for the compute consumed by all of OpenAI's customers.

> b) Apple's 15%/30% is a "cost of channel" for developers. It is the fee they pay to get free users delivered to their app. And the decision to do this is purely a business one that can be changed at will and does not apply at all to this situation.

Not sure what you meant to say here. Purchases of OpenAI "premium" would be handled via Apple's payment platform, for which Apple will likely take a cut. Regardless, OpenAI's gamble is to onboard ~100M iPhone users under a "free" tier in a single rollout step, in the hope that the conversion rate to a paid OpenAI tier will be high enough to offset the (infrastructure!) cost of serving all those users for free.

> c) Calling Apple a gatekeeper for implementing additional features into their OS is odd to say the least. They have made it clear during the keynote/WWDC presentations that they plan to add providers other than OpenAI.

Apple is the gatekeeper because they control the decision to offload AI tasks to a 3rd-party service or handle them "in-house" (on-device/Apple-cloud). This doesn't change regardless of whether they stated it upfront at the keynote/WWDC or not.

Because of this structure, Apple is aware of every single AI task that needs to be done and will use this information to shape their in-house roadmap (a natural decision, because on-device is better than cloud, and Apple-cloud is better because <insert tech of Apple cloud here>).

At some point in the future, the user will be able to select WHICH 3rd party they want to use, but so far I don't see any indication that Apple considers themselves a 3rd party.

And even if they did, they hold more information than all other providers, because they decide FIRST whether a task can be handled on-device or not.



Do they actually hold more information, though? If it’s handled on-device, Apple never knows about it. If it’s sent to something other than ChatGPT, Apple can’t even read it. This seems like a pretty level field for anybody who wants to step in.


That's the misleading part of all of Apple's statements about privacy. Apple only ever talks about your "personal data" and lets the audience conclude what that means.

A picture is personal data. Information about the content of that picture, anonymized in a way that the user/subject cannot be identified, is no longer personal data.

A profile generated from all of your usage habits combined is no longer "personal data" either.

Not a popular opinion, but I believe the main reason Apple is pushing on-device ML models is to extract non-personal information from its devices without the "personal information" ever being seen by Apple. There's even a paper on that [1].

Compute and storage are paid for by the user; only the extracted data is delivered to Apple.
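For anyone unfamiliar with how that extraction can work without Apple ever seeing the raw data: a minimal sketch in Swift of randomized response, the classic form of the local differential privacy the linked paper is about. The function names, the ε value, and the "looked at a cat picture" signal are my own illustration, not Apple's actual pipeline (which uses sketching techniques on top of this idea).

    import Foundation

    // Each device flips its own answer with some probability, so no single
    // report reveals what that user actually did. (Illustrative only, not
    // Apple's real implementation.)
    func privatizedReport(truth: Bool, epsilon: Double) -> Bool {
        // Probability of reporting the true value: e^eps / (e^eps + 1)
        let pTruth = exp(epsilon) / (exp(epsilon) + 1)
        return Double.random(in: 0..<1) < pTruth ? truth : !truth
    }

    // The aggregator (Apple, in this analogy) only ever sees noisy reports.
    // With enough of them it can estimate the population-wide rate, but it
    // cannot tell what any individual user did.
    func estimateTrueRate(reports: [Bool], epsilon: Double) -> Double {
        let pTruth = exp(epsilon) / (exp(epsilon) + 1)
        let observed = Double(reports.filter { $0 }.count) / Double(reports.count)
        // Invert the randomization: observed = r*(2p-1) + (1-p)
        return (observed - (1 - pTruth)) / (2 * pTruth - 1)
    }

    // Simulate 100k devices where 30% actually looked at a cat picture.
    let epsilon = 2.0
    let reports = (0..<100_000).map { _ in
        privatizedReport(truth: Double.random(in: 0..<1) < 0.3, epsilon: epsilon)
    }
    print("Estimated rate:", estimateTrueRate(reports: reports, epsilon: epsilon))

The point being: the aggregate statistic Apple gets out of this is genuinely useful for their roadmap, while still technically never being "personal data".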

Beyond that, Apple's entire privacy effort conveniently ensures that only they know everything about their users, and that customers don't (accidentally) tell anyone else something about themselves.

[1] https://machinelearning.apple.com/research/learning-with-pri...


Ok so let’s play it out. I look at a picture of a cat, and Apple receives a completely anonymized log saying someone somewhere looked at a picture of a cat. This log has no information about me and cannot be tied back to me. Am I supposed to be concerned this is an affront to my privacy?

People want big-data tech, and most people think this is fine and not a privacy invasion. Apple's approach to privacy is leagues better than, e.g., Meta's or Google's. If you want no records, no accounts, no logs for anything ever, don't buy a smartphone.



