
Yes, but they only count as 3/5 of a person.

Not at all. NOT AT ALL.

There are shades of gray here. But you are absolutely not required to extend benefit of the doubt to entities that have not earned it. That's a recipe for disaster.

Personally, I find myself incredibly biased in favor of people over corporations. I've met a lot of people in my life; they seem mostly nice, if a bit stupid. Well intentioned. Selfish.

Are corporations mostly well intentioned? Well, consider that some people have tried to put "good intentions" into corporate bylaws, and it has been viciously resisted.

Corporations will happily take everything you have if you accidentally give it to them. Actual human beings aren't like that.


> they are compressing the data beyond the known limits, or they are abstracting the data into more efficient forms.

I would argue that these are two ways of saying the same thing.

Compression is literally equivalent to understanding.


If we use gzip to compress a calculus textbook does that mean that gzip understands calculus?


Finding repetitions and acting accordingly on them could be considered a very basic form of understanding.


To a small degree, yes. GZIP knows that some patterns are more common in text than others - that understanding allows it to compress the data.

But that's a poor example of what I'm trying to convey. Instead consider plotting the course of celestial bodies. If you don't understand, you must record all the individual positions. But if you do, say, understand gravity, a whole new level of compression is possible.
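
A toy sketch of that point, with everything made up for illustration (circular orbit, daily sampling, raw doubles): a year of recorded positions takes kilobytes, while the model that generates them fits in a couple dozen bytes.

    import math
    import struct

    # No understanding: store every sampled (x, y) position as raw doubles.
    radius_km = 1.496e8        # assumed circular orbit at roughly 1 AU
    period_days = 365.25
    samples = 365              # one sample per day for a year

    positions = []
    for day in range(samples):
        angle = 2 * math.pi * day / period_days
        positions.append((radius_km * math.cos(angle), radius_km * math.sin(angle)))
    raw = b"".join(struct.pack("dd", x, y) for x, y in positions)

    # Understanding the orbit: the same trajectory is regenerated from three numbers.
    model = struct.pack("ddd", radius_km, period_days, 0.0)  # radius, period, phase

    print(len(raw), "bytes of raw positions")       # 5840
    print(len(model), "bytes of model parameters")  # 24

Real orbits aren't perfect circles, of course; the point is just that knowing the generating law collapses the description.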


Hm.

When I think to myself, I hear words stream across my inner mind.

It's not pages of text. It's words.


Yeah. People use their real identities on Facebook, and it doesn't help a bit.


> it doesn't help a bit.

I would replace "it doesn't help a bit" with "it doesn't solve the problem". My casual browsing experience is that X is much more intense / extreme than Facebook.

Of course, the bigger problem is the algorithm - if the extreme is always pushed to the top, then it doesn't matter if it's 1% or 0.001% - with a big enough pool, you only see extremes.
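
A quick simulation of that last point (the pool size, the 0.01% rate, and the engagement model are all arbitrary assumptions): if the feed simply sorts by engagement intensity, the rare extreme posts are the only thing you ever see.

    import random

    random.seed(0)
    pool_size = 1_000_000
    extreme_rate = 0.0001      # assume only 0.01% of posts are "extreme"

    posts = []
    for _ in range(pool_size):
        extreme = random.random() < extreme_rate
        # Toy assumption: extreme posts draw engagement from a higher range.
        engagement = random.uniform(0.9, 1.0) if extreme else random.uniform(0.0, 0.9)
        posts.append((engagement, extreme))

    # "The algorithm": surface the top 20 posts by engagement.
    feed = sorted(posts, reverse=True)[:20]
    print(sum(1 for _, is_extreme in feed if is_extreme), "of the top 20 are extreme")  # 20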


I bet if we didn't tolerate advertising and were instead optimising for what the user wanted we'd come up with something much more palatable.


A lot of this is driven by the user's behavior, not just advertising, though.

"The algorithm" is going to give you more of what you engage with, and when it comes to sponsored content, it's going to give you the sponsored content you're most likely to engage with too.

I'd argue that, while advertising has probably increased the number of people posting stuff online explicitly designed to generate revenue for themselves, that type of content has been around for much longer.

Heck, look at Reddit or 4chan: they're not sharing revenue with users, and I'd say they're still not without their own content problems.

I'm not sure there's a convincing gap between what users "want" and what they actually engage with organically.


Reddit and 4chan both get their money from advertisers though, so they have an incentive to try to boost engagement above whatever level might be natural for their userbase.

Social interaction is integrated with our brain chemistry at a very fundamental level. It's a situation we've been adapting to for a million years. We have evolved systems for telling us when it's time to disengage, and anybody who gets their revenue from advertising has an incentive to interfere with those systems.

The downsides of social media: the radicalization, the disinformation, the echo chambers... These problems are ancient and humans are equipped to deal with them to a certain degree. What's insidious about ad-based social media is that the profit motive has driven the platforms to find ways to anesthetize the parts of us that would interfere with their business model, and it just so happens that those are the same parts that we've been relying on to address these evils back when "social media" was shouting into an intersection from a soap box.


But neither Reddit nor 4chan really have the feed optimization that you'd find on Meta properties, YouTube, or TikTok.

I'm certainly not going to disagree with the notion that ad-based revenue adds a negative tilt to all this, but I think any platform that tries to give users what they want will end up in a similar place regardless of the revenue model.

The "best" compromise is to give people what they ask for (eg: you manually select interests and nothing suggests you other content), but to me, that's only the same system on a slower path: better but still broken.

But anyway, I think we broadly are in agreement.


There's no need to belittle dataflow graphs. They are quite a nice model in many settings. I daresay they might be the PERFECT model for networks of agents. But time will tell.

Think of it this way: spreadsheets had a massive impact on the world even though you can do the same thing with code. Dataflow graph interfaces provide a similar level of usefulness.


I'm not belittling it; in fact, I pointed to places where they work well. I just don't see how in this case it adds much over the other products I mentioned, which in some cases offer similar layering with a different UX. It still doesn't really do anything to help with style cohesion across assets or the nondeterminism issues.


Hm. It seemed like you were belittling it. Still seems that way.


From the article:

Some ask: "Isn't backpropagation just the chain rule of Leibniz (1676) [LEI07-10] & L'Hopital (1696)?" No, it is the efficient way of applying the chain rule to big networks with differentiable nodes—see Sec. XII of [T22][DLH]). (There are also many inefficient ways of doing this.) It was not published until 1970 [BP1].


The article says that, but it's overcomplicating to the point of being actually wrong. You could, I suppose, argue that the big innovation is the application of vectorization to the chain rule (by virtue of the matmul-based architecture of your usual feedforward network), which is a true combination of two mathematical technologies. But it feels like this, and indeed most "innovations" in ML, are only considered innovations due to brainrot derived from trying to take maximal credit for minimal work (i.e., IP).
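
For what it's worth, here's a deliberately tiny sketch of "the chain rule applied efficiently to a network": hand-rolled reverse-mode differentiation for a two-layer net, where each backward step is the chain rule realized as a matrix product. Layer sizes and the tanh nonlinearity are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=4)            # input
    W1 = rng.normal(size=(3, 4))      # first-layer weights
    W2 = rng.normal(size=(1, 3))      # second-layer weights

    # Forward pass
    pre = W1 @ x
    h = np.tanh(pre)                  # hidden activations
    y = W2 @ h
    loss = 0.5 * float(y @ y)

    # Backward pass: chain rule from the loss inward, each step a matmul.
    dy = y                               # dL/dy
    dW2 = np.outer(dy, h)                # dL/dW2
    dh = W2.T @ dy                       # through the second matmul
    dpre = dh * (1 - np.tanh(pre) ** 2)  # through tanh
    dW1 = np.outer(dpre, x)              # dL/dW1

    print(dW1.shape, dW2.shape)       # (3, 4) (1, 3)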


The real metric is whether anyone remembers it in 100 years. Any other discussion just comes off as petty.


Every good thing I ever did, I did because it was fun.


Each billionaire provides his own evidence as to why billionaires should not exist.

Don't worry guys - I'm sure there won't be a violent revolution this time.


You're thinking of NeXTSTEP. Before OS X.


NeXTSTEP used Display PostScript. Mac OS X has used Display PDF since way back in the developer previews.

