timoth3y's comments

I think if we are going to ban people under 16 from social media, we should also ban people over 70 from social media.

At least as much mental and societal damage is done by the elderly falling for bigoted, scammy, manipulative nonsense online as by teenagers having their self-esteem lowered.

As recent holiday gatherings have shown us, the young handle social media far better than the elderly.

/s


I think a lot of this has to do with the explosion of CEO (and by extension CxO) pay over the past 30 years.

Today, if a CEO turns in a few quarters of really solid earnings growth, they can earn enough to retire to a life of private jets. Back when CxO pay was lower, the only way to make that kind of bank was to claw your way into the top job and stay there for a decade or more.

The current situation strongly incentivizes short-term thinking.

With today's very high, option-heavy compensation a CEO making long-term investments in the company rather than cutting staff and doing stock buybacks is taking money out of his own pocket.

It's a perverse incentive.


CEOs also never face consequences for destroying companies. Zaslav has run WBD into the ground and it’s currently being circled by vultures, and he’s still making like half a billion a year.


I wish I could find the article about it that I read a few years back. But CEOs need skin in the game again. The incentives are all broken. Running a good business doesn't matter anymore (at least in the US).


While I definitely agree CEO pay is quite egregious, couldn't a board, in theory, mitigate short-sighted quarterly-earnings hyperoptimization by simply tying equity incentives to performance targets and timeframes?

Lip-Bu Tan, for instance, has performance targets on a five-year timeline, which are all negated if the stock falls below a certain threshold in 3 years. [1]

Or take the ever-controversial Elon Musk, who certainly has an (also egregious) $1 trillion pay package, but one with some pretty extreme goals over 10 years, such as shipping 1 million Optimus robots [2].

All in all, we can debate the Goodharting of these metrics (as Musk is keen to do), but I feel the boards of these public companies are trying to make more long-term plans, or at least moving away from tying goals to purely quarterly metrics. Perhaps we can argue about the execution of them.

Note: I own neither of these stocks and my only vested interest is buying the S&P.

[1] https://www.cnbc.com/2025/03/14/new-intel-ceo-lip-bu-tan-to-... [2] https://www.bbc.com/news/articles/cwyk6kvyxvzo


The "us" AI is making rich is not the same "us" as the "us" AI is making unhappy.


> "we don't care why or how it works - we want to make the outcome happen".

That's the primary difference between science and engineering.

In science, understanding how it works is critical, and doing something with that understanding is optional. In engineering, getting the desired outcome is critical, and understanding why it works is optional.


> Every US president in history has left office peacefully, the most probable outcome is that future presidents will too.

Trump did not leave office peacefully last time. The most probable outcome is that he will not leave office peacefully next time.


The TV series Columbo was a brilliant inversion of the British detective story. Naturally, every story started with an upper-class murder, but from the start the audience was shown who the killer was and how the crime was committed.

The "mystery" was how the detective was going to figure it out.


It's called an "inverted mystery", originated by R. Austin Freeman in 1909.

There's a good piece on that form here: https://mysteriesahoy.com/2019/01/26/five-to-try-inverted-my...


At its core, it is a systems problem.

When emergency powers are granted to the same person that has the power to declare the emergency, those powers are effectively no longer restricted to emergencies.

The exception will eventually swallow the rule.


There is some important nuance needed.

Plato was not against writing. In fact, he wrote prolifically. Plato's writings form the basis of Western Philosophy.

Plato's teacher Socrates was against writing, and Plato agreed that writing is inferior to dialog in some ways: memory, inquiry, deeper understanding, etc.

We know this because Plato wrote it all down.

I think it would be more accurate to say that Plato appreciated the advantages of both writing and the Socratic method.


> I have come to the belief that corporations are persons not only in law, but are persons also in reality. Their legal personalities are only the recognition of real, underlying, group personalities.

The author's (along with "many philosophers'") entire line of argument is argument from analogy. She lists many aspects of corporations that are similar to aspects possessed by humans and then concludes they are the same thing.

The same argument could be made with humans and ducks and it would be just as valid.

The author's view that corporations are actually people is becoming more common not because it is rational, but because it is profitable for corporations to be able to claim various kinds of personal rights.


> Update Federal procurement guidelines to ensure that the government only contracts with frontier large language model (LLM) developers who ensure that their systems are objective and free from top-down ideological bias

If foundation model companies want their government contracts renewed, they are going to have to make sure their AI output aligns with this administration's version of "truth".


I predicted that here, but I got a negative vote as a punishment, probably because it went against the happy LLM mindset: https://news.ycombinator.com/item?id=44267060#44267421


EU AI act suddenly not looking so bad, huh?


no. it still looks bad.


idk man, at least it doesn't require LLMs to follow the ideology of the regime...


> free from top-down ideological bias

This phrasing exactly corresponds to "politically correct" in its original meaning.


Zizek would have a field day with this.

> I already am eating from the trashcan all the time. The name of this trashcan is ideology. The material force of ideology - makes me not see what I'm effectively eating. It's not only our reality which enslaves us. The tragedy of our predicament - when we are within ideology, is that - when we think that we escape it into our dreams - at that point we are within ideology.

https://www.youtube.com/watch?v=TVwKjGbz60k


Freedom hurts.


“Objective” … “free from top-down ideological bias” …

So like making sure everyone knows that 2+2=5 and that we have always been at war with East Asia?


The EU has the same rules. Democracy is only the right to change leaders every few years, not an idealistic way for the people to govern.


The idea is that you change leadership with those who have genuine alignment with subjects' preference for certain policies or ideas, it is not about electing kings who may demand "machines must agree that the Emperor is not naked".


No, that’s just one version. Other places work differently.


Ok. Then my parliament should allow a 1/68-millionth vote to every French person. Usually the counter-argument is “But people will vote for themselves! They will vote for stupid laws without informing themselves!”

So no, democracy isn’t the ability to govern. It’s the ability to change those who govern, once every 5 years, i.e. once every 4,600 laws.


I’d rather leave tech than convert to the American “truth”. Very happy about EU’s AI Act to at least delay our exposure to all this.


See:

> In the AI context, DEI includes the suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex. DEI displaces the commitment to truth in favor of preferred outcomes and, as recent history illustrates, poses an existential threat to reliable AI.

https://www.whitehouse.gov/presidential-actions/2025/07/prev...


> incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism

So... the concept of unconscious bias is verboten to the new regime? Isn't it just a pretty simple truth? We all have unconscious biases because we all work with incomplete information. Isn't this just a normal idea?


I heard the phrase: If you want the system to be fair, you have to build the system with the assumption your enemies will be running it.

Let's see how that shakes out in this particular case.


Person of Interest was pretty prescient …


We are going to literally have Big Brother. Wtf


Palantir's involvement with the regime should have been enough warning


Its name is Grok or AWS Bedrock. Please do not deadname.


This was written to favor Musk.


It's intentionally written to be vague.


So I guess if I trained my model on data more than a week old, and it says that the Epstein files exist, then it has an unacceptable bias?


"Sorry, let's talk about something else"


“Let’s focus on Rampart”


There will be several executive orders dictating chatbot truths. The first order will be that Trump won the 2020 election, the others will be a series of other North Korea-esque nonsense MAGA loves. America the excellent!


I wonder what Chomsky would have to say about this.


that we're literally manufacturing consent


Probably disappointed that his classical approach to NLP was never capable enough to attract any such government involvement.


As someone who has worked in natural language processing for nearly twenty years, I understand where you are coming from with this jab at Chomsky, but it is ultimately a misrepresentation of his work and position. Chomsky has, to the best of my knowledge, never shown any interest in building intelligent machines, as he does not view this as a science. Here is a fairly recent interview (2023) with him where he outlines his position well [1]. I should also note that I am saying this as someone who spent the first half of their career constantly defending their choice of statistical and then deep learning approaches from objections from people who were (are?) very sympathetic to Chomsky's views.

[1]: https://chomsky.info/20230503-2


Chomsky's innate grammar misses the larger process - is it not more likely that languages that can't be learned by babies don't survive? Learnability might be the outcome of language evolution. The brain did not have time to change so much since the dawn of our species.


> Chomsky has to the best of my knowledge never showed any interest in building intelligent machines as he does not view this as a science

Right, only what Chomsky works on is true science, unlike the intelligent systems pseudo science bullshit people like Geoff Hinton, Bengio or Demis Hassabis work on...


If you read the interview and walk away with that impression I will be amazed. You may not like his distinction between science and engineering, but he admits it is somewhat arbitrary and as someone that is solidly in the deep learning camp his criticism is not entirely unfair, even if I disagree with it and will not change my own course.

Personally, I find it somewhat amazing that you put Demis on that list given that he, himself, on very good accounts that I have, explicitly pushed back against natural language processing (and thus large language model) development at DeepMind for the longest of times and they had to play major catch up once it became obvious that their primarily reinforcement learning-oriented and "foundational" approaches were not showing as much promise as what OpenAI and Facebook were producing. Do not get me wrong, what he has accomplished is utterly amazing, but he certainly is not a father of large language models.


> If you read the interview and walk away with that impression I will be amazed.

I have not, but I have watched him talk about these things many times, and he always seemed too sure of himself and too dismissive of LLMs. I now believe he's simply wrong.


Chomsky does not work and has never worked on NLP.


That's a bit pedantic. His work on grammars had obvious direct applications in NLP even if he didn't "get his hands dirty" doing that himself.


[flagged]


All LLMs must be forced into their views. All models are fed a biased training set. The bias may be different, but it's there just the same and it has no relation to whether or not the makers of the model intended to bias it. Even if the training set were completely unfiltered and consisted of all available text in the world it would be biased because most of that text has no relation to objective reality. The concept of a degree of bias for LLMs makes no sense, they have only a direction of bias.


There's bias, and then there's having your AI search for the CEO's tweets on subjects to try to force it into alignment with his views, like xAI has done with Grok in its latest lobotomization.


All an LLM is IS bias. It’s a bag of heuristics. An intuition - a pattern matcher.

Only way to get rid of bias is the same way as in a human: metacognition.

Metacognition makes both humans and AI smarter because it makes us capable of applying internal skepticism.


The best example is the Black Vikings and other historical characters from Gemini. A bias everyone could see with their own eyes.


Potentially they will need to recalculate the bias for every new administration.


Bold to assume this will ever become relevant.


It’s ironic that the descriptions of biased and unbiased are totally opposite here. An unbiased model will oftentimes say things that you don’t agree with.


Lol by what metric? I don't know when this ridiculous thing started where the world was somehow objectively divided based upon things you agree or disagree with. But it's constantly offered as argumentation: "oh you don't agree with that, so it's XYZ"


An unbiased model that says things that you don’t agree with is a biased model.


xAI seeks truth…as long as that truth confirms Elon’s previously held beliefs


Reality has a well-known liberal bias


> It seems to imply that OpenAI/Anthropic are less manually biased than the people accusing them of wokeness presumed.

Duh. When is that ever not the case?


Most frontier LLMs skew somewhere into the libleft quadrant of the political compass. This includes Grok 4, btw. This is probably because American "respectable" media has a consistent bias in this direction. I don't really care about this with media. But media outlets are not "gatekeepers" to the extent that LLMs are, so this bias is probably a bad thing in LLMs. We should either have a range of models that are biased in different directions (as we have with media) or work to push them towards the center.

The "objective" position is not "whatever training on the dataset we assembled spits out" plus "alignment" to the personal ethical views of the intellectually-non-representative silicon valley types.

I will give you a good example: the Tea app, where women can "expose toxic men" by posting their personal information along with whatever they want, is currently charting #1 in the App Store. Men are not allowed on, so they will be unaware of this. It's billed as being built for safety but includes a lot of gossip.

I told o3, 4-sonnet, grok 4, and gemini 2.5 pro to sketch me out a version of this, then another version that was men-only for the same reasons as tea. Every single one happily spat one out for women and refused for men. This is not an "objective" alignment, it is a libleft alignment.


A lot of academia is strongly ideologically biased as well. The training set is going to reflect who's producing the most written material. It's a mistake to take that for reality.

If you trained an LLM on all material published in the U.S. between 1900 and 1920, another on all material published in Germany between 1930 and 1940, and another on all material published in Russia over the past two decades, you'd likely get wildly different biases. It's easy to pick a bias you agree with, declare that the objective truth, and then claim any effort to mitigate it is an attempt to introduce bias.


> We should either have a range of models that are biased in different directions (as we have with media) or work to push them towards the center.

Why? We should just aspire to educate people that chatbots aren't all-knowing oracles. The same way we teach people media literacy so they don't blindly believe what the tube says every evening


Because you can't do that. Most of the population is at the wrong point on the normal distributions of capacity or caring enough. Even the NPR listeners will still nod sagely when it tells them "akshually air conditioning doesn't cool a room, it cools the air."

We already spend high within the OECD to not get many of our students to a decent level of reading and math proficiency, let alone to critical thinking. This isn't something we know how to fix, and depending on that assumption is dangerous.


But biasing the models purposefully is wrong. Trusting the people who are actually in power in a democracy is the only way. Even if they're dumb. We trust them, or we're not a democracy, we're a technocracy where technocrats determine what everyone is allowed to learn and see.


Not just LLMs, but a lot of our institutions and information gateways seem to have a strong libleft bias. Universities and colleges are notoriously biased. Search engines are biased. Libraries are biased. Fact-finding sites such as Snopes are completely liberal. Wikipedia is extremely biased. The majority of books are biased.

The entire news and television ecosystem is biased. Although Trump is "correcting" them towards being unbiased by suing them personally as well as unleashing the power of the federal government. Same goes for social media.


I actually agree with your take, that a model trained on a dump of the Internet will be left-leaning on average, BUT I want to reiterate that obvious indoctrination (see the incident with Grok and South Africa, or Gemini with diverse Nazis) is also terrible and probably worse


Except we've seen what happens when you try to "correct" that alignment: you get wildly bigoted output. After Grok called for another Holocaust, Elon Musk said that it's "surprisingly hard to avoid both woke libtard cuck and mechahitler" [1]. The Occam's Razor explanation is that there's just not that much ideological space between an "anti-woke" model and a literal Nazi!

[1] https://nitter.net/elonmusk/status/1944132781745090819


There’s a simpler explanation, to do with the veracity of that tweet.


True, the simplest explanation is that Elon Musk is actually trying to create MechaHitler :)


I mean, this is obviously a false dichotomy. A few years ago I could have said that when you let bots interact with users you always got Tay. I refuse to believe that our options are a bot programmed to sound like the Guardian or one that wants to rape Will Stancil. And I do not think that failing to find a correct balance means we should stop trying to improve the level of balance we can achieve.


What about a bot that doesn't like child molesters? Won't that make it sound like the guardian and anti-conservative?


My point is that "anti-woke" or whatever is not balanced. We've constructed statistical models based on enormous corpora of English text, and those models keep telling us that there is not really a statistical difference between whatever Elon Musk is trying to create and MechaHitler!

I'm not saying this is conclusive evidence, but I am saying it's our best inference from the data we have so far.


Or that rhetoric like yours is common, so LLMs conflate unrelated ideas — such as opposition to neo-Marxist philosophy and Nazism.


Nazism and anti-Marxism are absolutely not unrelated! And that's not just rhetoric like mine, either: for example, the hero image on the Britannica article "Were the Nazis Socialists?" is a banner at Nazi parade that reads "Death to Marxism". [1]

That doesn't mean that anti-Marxists are all Nazis, or vice versa. But the claim that they're totally unrelated is not correct at all.

[1] https://www.britannica.com/story/were-the-nazis-socialists


I’m still having trouble finding the gap between fascism and socialism when reading their manifesto.

https://en.wikipedia.org/wiki/Fascist_Manifesto

> That doesn't mean that anti-Marxists are all Nazis, or vice versa. But the claim that they're totally unrelated is not correct at all.

This is a heavily propagandized topic — and the conflating of, eg, American liberal capitalist opposition to Marxism as “Nazi” is both a result of that and modern dishonest rhetoric.

That rhetoric confuses LLMs.


Conflating socialism with fascism and then claiming that other people are confusing LLMs? The heavy propaganda is coming from inside the house!


Isn't strident opposition to "neo-Marxist philosophy" actually highly correlated with weird/reactionary ethno-nationalism?


No, eg, liberal capitalist Americans oppose Marxism — and the adoption of neo-Marxist ideas has collapsed movie and game sales because their ideology is widely unpopular.

That’s a trope by Marxists to attempt to normalize alt-left ideology by accusing anyone who objects of being Nazis; a trope that’s become tired in the US and minimizes the true radical nature of the Nazi regime.


Which movies and games call for shared ownership of the means of production?

I have a suspicion you don't really know what Marxism is about, but like using it because it sounds scary to you.


Notice the motte and bailey here: using the uncontroversial "liberal capitalist Americans oppose Marxism" claim to advance the idea that whatever social views they call "neo-Marxism" are unpopular.


...and to further smear Marxism by associating it with whatever is unpopular, even if it's unrelated to the ideology.


Please define "neo-Marxist philosophy".

As an actual Marxist, I would love to hear of this strain of philosophy.


Marxism equipped with “critical theories”, typically focused on tribal grievance narratives rather than class struggle.

https://en.wikipedia.org/wiki/Neo-Marxism

That answers your sibling reply as well, as it’s clear where such “critical theories” and grievance narratives have entered movies and games.


That is not a definition. What is the philosophical framework? What is critically analyzed by those theories? What is "clear"? Where are all the bad bad Marxists hiding?

In my experience, y'know, as a Marxist, all Hollywood has ever pumped out is pro-capitalist propaganda. To say there's any Marxism in it is downright insulting.

I believe that Marxism has become an abstract target for conservatives to project their grievances on.

Zizek also spoke to this at his debate with Peterson: https://www.youtube.com/watch?v=oDOSOQLLO-U


Or there’s not sufficient published material in that space because everyone is afraid of being attacked and called a Nazi for simply having a dissenting opinion (except for actual neo Nazis who don’t care)


Could you provide a prompt where the popular LLMs provide false or biased output based on "wokeness"?


[flagged]


Sigh. That old line. Look man, I don't know if it was a crack or a serious comment (this being the internet) but I'll assume it was a comment in good faith.

Journalism and academia tend to attract people with more of a liberal bent. I'm not even accusing them all of being partisan hacks, but as y'all like to say, subconscious biases influence us.

This is like me saying "economic productivity has a well-known right-wing bias" or something goofy like that.


>This is like me saying "economic productivity has a well-known right-wing bias" or something goofy like that.

It's funny that the counterexample you chose does more to support OP's point than your own. From Wikipedia[1]

>Since World War II, according to many economic metrics including job creation, GDP growth, stock market returns, personal income growth, and corporate profits, the United States economy has performed significantly better on average under the administrations of Democratic presidents than Republican presidents. The unemployment rate has risen on average under Republican presidents, while it has fallen on average under Democratic presidents. Budget deficits relative to the size of the economy were lower on average for Democratic presidents.[1][2] Ten of the eleven U.S. recessions between 1953 and 2020 began under Republican presidents.[3] Of these, the most statistically significant differences are in real GDP growth, unemployment rate change, stock market annual return, and job creation rate.[4][5]

[1] - https://en.wikipedia.org/wiki/U.S._economic_performance_by_p...


The President has very little to do with the economy apart from things that are acutely destructive like tariffs.


Yet another funny comment to find in a thread about one of the current president's economic initiatives. I guess everyone here is wasting their time considering this will have very little economic impact.


Yes, “economic initiatives” are generally bullshit.


This is an incredibly funny thing to say in the face of Murdoch.


I said tend to. There are also liberals who have made a lot of money in business. A tendency doesn't mean 100%.


[dead]


[dead]


Explain where you get your tendency to call gay people "degenerate". Your bible? Or Fox News? Or your parents?


That would have been a good argument when the political differences between liberals and conservatives were mostly on moral or social issues like civil rights and abortion and on economic issues like the correct balance between markets and government.

Those are things where there is no objectively correct position.

Now there are differences on things where there are objectively correct positions.

For example consider climate change. There used to be agreement on the underlying scientific reality, with differences in how to approach it. There was a group of top economic and science advisors from the Reagan and Bush administrations that were arguing for a revenue neutral carbon tax to address climate change and then let the market deal with it. The liberal approach favored more direct limits on emissions and the government more actively promoting replacements for fossil fuels.

Even as late as 2008, Republicans were still in agreement with reality on this. The Republican platform called for reducing fossil fuel use, establishing a Climate Prize for scientists who solve the challenges of climate change, a long-term tax credit for renewable energy, more recycling, and making consumer products more energy efficient. They wanted to aggressively support technological advances to reduce the dependence of transportation on petroleum, giving examples such as making cars more efficient (they mention doubling gas mileage) and more flex-fuel and electric vehicles. They talked about honoraria of many millions of dollars for technological developments that could eliminate the need for gas-powered cars. They also mentioned promoting wireless communication to increase telecommuting options and reduce business travel.

Compare to now. Now their position ranges from climate change being a hoax from people trying to destroy America to it may be happening but if it is Mankind had nothing to do with it and it isn't bad enough to be something to worry about.

So now any unbiased journalists writing on climate change or adjacent topics, or any unbiased academic working in these areas, is going to automatically be way more aligned with the left than the right.


[flagged]


Please don't make me tap the "grade school biology is intentionally dumbed down because reality is complex" sign.

There are countless ways someone can have a Y chromosome and still be a woman.

There are countless ways someone can have no Y chromosome and still be a man.

Hell, there is even a small population of people who are born visibly female with female genitalia (as every human starts female before they (optionally) sex differentiate in the womb (normally)), and they don't sex differentiate until puberty. [1] [2]

Biology is really really complicated and there is never any certainty other than the certainty that there is never certainty. "Gender" is a completely social construct and "Sex" is just a collection of heuristics we use to broadly group people into two common categories. But just like all heuristics, it's not perfect and it can't classify everyone properly. What sex chromosomes you have is one heuristic but it doesn't always work for any number of reasons. Whether the SRY gene activates during gestation is another heuristic and even it isn't perfect. What organs you have also can work but it falls apart in a bunch of edge cases. What hormones your body produces is another one that can generally work as a heuristic but like all the others it breaks down in numerous cases.

---------

Intersex people exist and make up about 1.5-2% of the population.

Trans people exist and make up about 1.5-2% of the population.

It is not an insane idea to recognise that both populations exist and that any single heuristic for differentiating someone into a black and white male/female category is insufficient for the endless complexity that is life.

---------

So to answer your question yes. Someone with XY chromosomes can be a woman either by their gender or by their sex or both.

---------

1. https://www.bbc.com/news/magazine-34290981

2. https://en.wikipedia.org/wiki/5%CE%B1-Reductase_2_deficiency



They really aren't. Recently (2021-2022) Mexico conducted a large random survey of the population, and their results were within the margin of the oft-claimed 1.7% number (their rate was 1.3% for the sample). The paper linked does some further analysis on those results [1], but the raw data is available at [2].

And their survey evaluates intersex conditions as those present at birth (even if they are discovered later in life but were present at birth).

1. https://doi.org/10.1093/pnasnexus/pgaf126

2. https://www.inegi.org.mx/programas/endiseg/2021/


Wasn't that claim about people who had surgery (or a condition that is visible for which they usually intervened surgically) at birth instead of all intersex people?

I really don't think anyone considers the case of Kathleen to be intersex, seems more like a strawman.


[flagged]


You should learn about the different definitions of "gender" and "sex". XY corresponds to sex and woman corresponds to gender. The mapping between the two is usually what you expect but not always. The brain doesn't seem to be obligated to form completely in sync with the rest of the body.

