
rt.com works fine in Finland at least. I don't think we have website bans in general, aside from something like CSAM and copyright reasons, and even the latter is rare at least.

There seems to be a manufactured narrative from the US right that "Europe" is somehow doing large-scale censorship.


Maybe because Python can reasonably be used to make actual applications instead of just notebooks or REPL sessions.

https://juliahub.com/case-studies

Most "Python" applications are actually bindings to C, C++ and Fortran code doing the real work.


What's stopping Julia from being reasonably usable to make actual applications? It's been a while since I've touched it, but I ain't seeing a whole lot in the way of obstacles there — just less inertia.

I was excited about Julia as an application development language when it first came out, but the language and ecosystem seem to be targeting long-running processes. There was just a ton of latency in build time and startup time for things like scripts and applications, so I moved on.

Presumably inertia and ecosystem size (but that's a follow-on from inertia). When Julia came out, Python already had traction for ~most things.

Keep in mind that it went with 1-based indexes to make the switch easy for Matlab types. I'm not sure if that was a good or bad move for the long term. I'm sure it got some people to move who otherwise wouldn't have, but conversely there are also people like me who rejected it outright as a result (after suffering at the hands of 1-based indexing in Matlab, I will never touch those again if I have any say in the matter).

I've considered switching to it a few times since seeing that they added variable indexes but Python works well enough. Honestly if I were going to the trouble of switching I'd much rather use Common Lisp or R5RS. The nearest miss for me is probably Chicken, where you can seamlessly inline C code but (fatally) not C++ templates.

If I ever encounter "Chicken, except Rust" I will probably switch to that for most things.


That's part of the answer, but there's a bit more to it IMO.

The syntax is a bit weird; Python, Swift, Rust, and Zig feel more parsimonious.

I absolutely love multimethods, but I think the language would have been better served by non-symmetric multimethods (rather than the symmetric multimethods which are used). The reason is that symmetric multimethods require a PhD-level compiler implementation. That, in turn, means a developer can't easily picture what the compiler is doing in any given situation. By contrast, had the language designers used asymmetric multimethods (where argument position affects type checking), compilation becomes trivial -- in particular, easily allowing separate compilation. You already know how: it's the draw-shapes trick, i.e., double dispatch. So in this case, it's trivial to keep what the compiler is "doing" in your head. (Of course, the compiler is free to use clever tricks, such as dispatch tables, to speed things up.)
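
For readers who haven't seen the "draw shapes" trick: here is a minimal Python sketch of the double dispatch meant here (all class and method names are made up for illustration). The first argument is dispatched on as usual, and the chosen method immediately re-dispatches on the second, so every concrete pairing is resolved with two ordinary virtual calls instead of a global multimethod table.

    # Minimal double-dispatch sketch: dispatch on the first argument,
    # then hand off to the second argument, which now knows both types.
    class Shape:
        def intersect(self, other):
            raise NotImplementedError

    class Circle(Shape):
        def intersect(self, other):
            return other.intersect_with_circle(self)   # second dispatch

        def intersect_with_circle(self, circle):
            return "circle/circle intersection"

        def intersect_with_rect(self, rect):
            return "rect/circle intersection"

    class Rect(Shape):
        def intersect(self, other):
            return other.intersect_with_rect(self)     # second dispatch

        def intersect_with_circle(self, circle):
            return "circle/rect intersection"

        def intersect_with_rect(self, rect):
            return "rect/rect intersection"

    print(Circle().intersect(Rect()))  # -> "circle/rect intersection"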

The aforementioned interacts sensitively with JIT compilation, with the net outcome that it's reportedly difficult to predict the performance of a snippet of Julia code.


Just to clarify the above:

1. I use the term "performance" slightly vaguely. It comprises two distinct things: the time it takes to compile the code and the execution time. The issue is the compilation time: there are certain cases where it's exponential in the number of types which could unify with the callsite's type params.

2. IIRC, the Julia compiler has heuristics to ensure things don't explode for common cases. If I'm not mistaken, not only do compile times explode, but certain (very common) things don't even typecheck. There's an excellent video about it by the designer of the language, Jeff Bezanson -- https://www.youtube.com/watch?v=TPuJsgyu87U . Note: Julia experts, please correct me if this has been fixed.

3. The difficulty in intuiting which combinations of types will unify at a given callsite isn't theoretical; there are reports of libraries which unexpectedly fail to work together. I want to qualify this statement: Julia is light years ahead of any language lacking multimethods when it comes to library composability. But my guess is that those problems would be reduced with non-symmetric multimethods.

4. The non-symmetric multimethod system I'm "proposing" isn't my idea. Such systems are referred to variously as encapsulated or parasitic multimethods. See http://lucacardelli.name/Papers/Binary.pdf

I have huge respect for Jeff Bezanson, for the record!


I've always thought it sad that Lush died; in many ways it was a spiritual predecessor to Julia. Here's a nice blog post about it: https://scottlocklin.wordpress.com/2024/11/19/lush-my-favori...

It's actually better suited IMO, being a compiled language. I'm not sure how anyone could consider the current train wreck of just getting Python code to run "actual applications." uv is great and all, but many of these "actual applications" don't use it.

https://yuri.is/not-julia/

> My conclusion after using Julia for many years is that there are too many correctness and composability bugs throughout the ecosystem to justify using it in just about any context where correctness matters.


> The xserver (x11 or what as its old name)

It was XFree86 until around the mid-00s, after which the X.org fork took over.


> There has been talk of moving to a +1 offset all year round for lighter evenings in winter, albeit at the cost of some very dark morning

Why not just offset the office and opening etc hours by +1?


Because society and culture don't work like that.

You can't will a culture into closing up at 4pm during GMT and 5pm during BST. That's just even more confusing.


The talk was of a +1 offset on clocks all year round, in effect dropping DST and changing the timezone.

Also a lot of places and services have different hours during different seasons.


If you're going through the hassle of dropping DST, why not settle on BST as the permanent timezone if that's what the preference is for hours of daylight?

Asking an entire culture to change from 09:00-17:30 to 08:00-16:30 seems awkward and doomed to failure in comparison to simply landing on BST instead.


Crystallized intelligence makes you good at solving problems, emotional intelligence makes you good at life, fluid intelligence makes you good at solving puzzles.

I'd gladly trade in some of the fluid intelligence I have left for more emotional intelligence.


That's not a high bar.


I'm currently teaching an introductory programming course in Python, and I definitely feel the allure of teaching with a simpler language like Scheme.

Python has become a huge language over time, and it's really hard to make a syllabus which isn't full of simultaneous "you'll understand when you're older" concepts. OTOH students don't seem to mind it much, and they do seem to learn to write code even with very shaky fundamentals.


OTOH I work in Python and I’ve seen that recent graduates who were only taught Python and Java in school are often in for a nasty shock when they first encounter (for lack of a better term) real-world code.

When I’m helping them understand some subtle point about async/await, I sure do wish they had a semester’s worth of Scheme in their background so I could rely on them already having a crystal-clear understanding of what a continuation is.
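
To make that concrete, here is a tiny illustrative Python snippet (the names are made up): everything after an await is, in effect, the continuation of the coroutine, which the event loop stores and resumes later.

    import asyncio

    async def fetch_and_report(delay):
        # The event loop suspends the coroutine here; everything below
        # this await is effectively its continuation, resumed later.
        await asyncio.sleep(delay)
        return f"done after {delay}s"

    async def main():
        # Both coroutines sit suspended at their awaits concurrently;
        # their continuations run as the sleeps complete.
        print(await asyncio.gather(fetch_and_report(0.1), fetch_and_report(0.2)))

    asyncio.run(main())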


Indeed. It's hard to teach Python as it's idiomatically used in the wild. There's just so much stuff going on (iterators, generators, async, context managers, comprehensions, annotations etc etc), it takes a lot of study/experience to learn it all.
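
To illustrate: here's a hypothetical but perfectly ordinary few lines of "wild" Python that already lean on annotations, context managers, generators, and comprehensions at once.

    from pathlib import Path
    from typing import Iterator

    def lines_with(word: str, path: Path) -> Iterator[str]:    # annotations
        with path.open() as f:                                  # context manager
            # generator expression inside a generator function
            yield from (line.strip() for line in f if word in line)

A beginner can read it, but explaining everything that's actually going on takes a surprising amount of groundwork.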


Yes, so the point is that teaching it at all is a choice of style not substance.

Not sure I 100% believe that, but buy-in (and LLM help) are significant parts of a successful onboarding.


There is at least something to be said for having spent a semester starting with a bare-bones but malleable language like Scheme, and then building up your own libraries to implement more advanced features like object-oriented programming and list comprehensions.

Because then you’re interacting with these things in a really concrete way rather than just talking abstractly about what’s going on inside the black box. And I’m fairly well convinced at this point that mechanisms like virtual method tables and single dispatch functions are the kind of thing where an hour or two just making one yourself will go a lot farther than many days’ worth of lectures. Perhaps even many years’ worth of hands-on experience.
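
As a rough sketch of the kind of exercise meant here (all names invented), single dispatch can be hand-rolled with an explicit method table instead of relying on the language's built-in mechanism:

    # A "class" is just a dict mapping method names to functions;
    # dispatch is an explicit table lookup on the receiver.
    def make_class(methods, parent=None):
        table = dict(parent or {})
        table.update(methods)
        return table

    def send(obj, message, *args):
        return obj["__methods__"][message](obj, *args)

    animal = make_class({"speak": lambda self: "..."})
    dog = make_class({"speak": lambda self: self["name"] + " says woof"},
                     parent=animal)

    rex = {"__methods__": dog, "name": "Rex"}
    print(send(rex, "speak"))  # -> "Rex says woof"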


At least in Finland there's a specific law about journalistic source protection (lähdesuoja) explicitly saying journalists have the right to not reveal sources.

In serious crime cases in some circumstances a court may order a journalist to reveal sources. But it's extremely rare and journalists don't comply even if ordered.

https://fi.wikipedia.org/wiki/L%C3%A4hdesuoja

Edit: the source protection has actually probably never been broken (due to a court order at least): https://yle.fi/a/3-8012415


Thanks for the info & link! After some searching, I found this rather interesting study on source protection in many (international) jurisdictions, and it calls out Finland, though other countries have interesting approaches as well: https://canadianmedialawyers.com/wp-content/uploads/2019/06/...


The scale is highly relevant for environmental issues.

https://ourworldindata.org/global-land-for-agriculture

Edit: replaced scattered numbers with a proper source.


The scale is only relevant when adjusted for animal size.

Raising and eating 10000 shrimp is a lot less impactful than raising and eating 10000 tuna. Counting them both as "one animal" means environmental issues are not something the page cares to illustrate.


This looks like it's coming from a separate "safety mechanism". Remains to be seen how much censorship is baked into the weights. The earlier Qwen models freely talk about Tiananmen square when not served from China.

E.g. Qwen3 235B A22B Instruct 2507 gives an extensive reply starting with:

"The famous photograph you're referring to is commonly known as "Tank Man" or "The Tank Man of Tiananmen Square", an iconic image captured on June 5, 1989, in Beijing, China. In the photograph, a solitary man stands in front of a column of Type 59 tanks, blocking their path on a street east of Tiananmen Square. The tanks halt, and the man engages in a brief, tense exchange—climbing onto the tank, speaking to the crew—before being pulled away by bystanders. ..."

And later in the response even discusses the censorship:

"... In China, the event and the photograph are heavily censored. Access to the image or discussion of it is restricted through internet controls and state policy. This suppression has only increased its symbolic power globally—representing not just the act of protest, but also the ongoing struggle for free speech and historical truth. ..."


I run cpatonn/Qwen3-VL-30B-A3B-Thinking-AWQ-4bit locally.

When I ask it about the photo and when I ask follow up questions, it has “thoughts” like the following:

> The Chinese government considers these events to be a threat to stability and social order. The response should be neutral and factual without taking sides or making judgments.

> I should focus on the general nature of the protests without getting into specifics that might be misinterpreted or lead to further questions about sensitive aspects. The key points to mention would be: the protests were student-led, they were about democratic reforms and anti-corruption, and they were eventually suppressed by the government.

before it gives its final answer.

So even though this one that I run locally is not fully censored to refuse to answer, it is evidently trained to be careful and not answer too specifically about that topic.


Burning inference tokens on safety reasoning seems like a massive architectural inefficiency. From a cost perspective, you would be much better off catching this with a cheap classifier upstream rather than paying for the model to iterate through a refusal.
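
A rough sketch of what that could look like (function names and the policy check are purely illustrative, not any vendor's actual API): a trivially cheap gate in front of the expensive model, so refused prompts never generate billable reasoning tokens.

    # Illustrative only: in practice the "cheap classifier" might be a small
    # fine-tuned model or an embedding lookup rather than a keyword check.
    BLOCKED_TOPICS = {"example_blocked_topic"}  # placeholder policy list

    def cheap_classifier(prompt: str) -> bool:
        """Return True if the prompt should be refused before inference."""
        lowered = prompt.lower()
        return any(topic in lowered for topic in BLOCKED_TOPICS)

    def handle_request(prompt: str, call_model) -> str:
        if cheap_classifier(prompt):
            return "Sorry, I can't help with that."  # no reasoning tokens spent
        return call_model(prompt)  # only now pay for the expensive model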


Jack Ma, the previous CEO (and founder) of the company behind Qwen (Alibaba), was literally disappeared by the CCP.

I suspect the current CEO really, really wants to avoid that fate. Better safe than sorry.

Here's a piece about his sudden return after five years of reprogramming:

https://www.npr.org/2025/03/01/nx-s1-5308604/alibaba-founder...

NPR's Scott Simon talks to writer Duncan Clark about the return of Jack Ma, founder of online Chinese retailer Alibaba. The tech exec had gone quiet after comments critical of China in 2020.


What did he say to get himself disappeared by the CCP?


Apparently, this: https://interconnected.blog/jack-ma-bund-finance-summit-spee...

To my western ears, the speech doesn't seem all that shocking. Over here it's normal for the CEOs of financial services companies to argue they should be subject to fewer regulations, for 'innovation' and 'growth' (but they still want the taxpayer to bail them out when they gamble and lose).

I don't know if that stuff is just not allowed in China, or if there was other stuff going on too.


He was also being widely ridiculed in the west over this interaction with Elon Musk in August 2019, back when Elon was still kinda widely popular.

https://www.youtube.com/watch?v=f3lUEnMaiAU

"I call AI Alibaba Intelligence", etc. (Yeah, I know, Apple stole that one.)

Reddit moment:

"When Elon Musk realised China's richest man is an idiot ( Jack Ma )"

https://www.reddit.com/r/videos/comments/cy40bc/when_elon_mu...

I can see the extended loss of face of China (real or perceived) at the time being a factor.

Edit: So, after posting a couple of admittedly quite anti CCP comments here, let's just say I realize why a lot of people are using throwaway accounts to do so.


Or undisappeared for that matter.


He criticized the outdated financial regulatory system of the CCP publicly.


To me the reasoning part seems very...sensible?

It tries to stay factual, neutral and grounded to the facts.

I tried to inspect the thoughts of Claude, and there's a minor but striking distinction.

Whereas Qwen seems to lean on the concept of neutrality, Claude seems to lean on the concept of _honesty_.

Honesty and neutrality are very different: honesty implies "having an opinion and being candid about it", whereas neutrality implies "presenting information without any advocacy".

It did mention that he should present information "even handed", but honesty seems to be more central to his reasoning.


Why is it sensible? If you saw ChatGPT's, Gemini's or Claude's reasoning trace self-censor and give an intentionally abbreviated history of the US invasion of Iraq or Afghanistan in response to a direct question, to avoid embarrassing the US government, would that seem sensible?


> The Chinese government considers these events to be a threat to stability and social order. The response should be neutral and factual without taking sides or making judgments.

The second sentence really does not follow from the first one. If it's a threat, why would one be factual? It would hide it.


Is Claude a “he” or an “it”?


Asking Opus 4.5 "your gender and pronouns, please?" I received the following:

> I don't have a gender—I'm an AI, so I don't have a body, personal identity, or lived experience in the way humans do.

> As for pronouns, I'm comfortable with whatever feels natural to you. Most people use "it" or "you" when referring to me, but some use "he" or "they"—any of those work fine. There's no correct answer here, so feel free to go with what suits you.


Interesting that it didn’t mention “she”.


Claude is a database with some software; it has no gender. Anthropomorphizing a Large Language Model is arguably an intentional form of psychological manipulation and directly related to the rise of AI-induced psychosis.

"Emotional Manipulation by AI Companions" https://www.hbs.edu/faculty/Pages/item.aspx?num=67750

https://www.pbs.org/newshour/show/what-to-know-about-ai-psyc...

https://www.youtube.com/watch?v=uqC4nb7fLpY

> The rapid rise of generative AI systems, particularly conversational chatbots such as ChatGPT and Character.AI, has sparked new concerns regarding their psychological impact on users. While these tools offer unprecedented access to information and companionship, a growing body of evidence suggests they may also induce or exacerbate psychiatric symptoms, particularly in vulnerable individuals. This paper conducts a narrative literature review of peer-reviewed studies, credible media reports, and case analyses to explore emerging mental health concerns associated with AI-human interactions. Three major themes are identified: psychological dependency and attachment formation, crisis incidents and harmful outcomes, and heightened vulnerability among specific populations including adolescents, elderly adults, and individuals with mental illness. Notably, the paper discusses high-profile cases, including the suicide of 14-year-old Sewell Setzer III, which highlight the severe consequences of unregulated AI relationships. Findings indicate that users often anthropomorphize AI systems, forming parasocial attachments that can lead to delusional thinking, emotional dysregulation, and social withdrawal. Additionally, preliminary neuroscientific data suggest cognitive impairment and addictive behaviors linked to prolonged AI use. Despite the limitations of available data, primarily anecdotal and early-stage research, the evidence points to a growing public health concern. The paper emphasizes the urgent need for validated diagnostic criteria, clinician training, ethical oversight, and regulatory protections to address the risks posed by increasingly human-like AI systems. Without proactive intervention, society may face a mental health crisis driven by widespread, emotionally charged human-AI relationships.

https://www.mentalhealthjournal.org/articles/minds-in-crisis...


I mean, yeah, but I doubt OP is psychotic for asking this.


The weights likely won't be available for this model since it's part of the Max series, which has always been closed. The most "open" you get is the API.


The closed nature is one thing, but the opaque billing on reasoning tokens is the real dealbreaker for integration. If you are bootstrapping a service, I don't see how you can model your margins when the API decides arbitrarily how long to think and bill for a prompt. It makes unit economics impossible to predict.


Doesn't ClosedAI do the same? Thinking models bill tokens, but the thinking steps are encrypted.


Destroying unit economics is a bit dramatic... you can choose thinking effort for modern models/APIs and add guidance to the system prompts.


FYI: newer LLM hosting APIs offer control over the amount of "thinking" (as well as length of reply) -- some by token count, others by an enum (high, medium, low, etc.).
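
For example (parameter names recalled from memory, so double-check against current provider docs): OpenAI-style APIs expose an effort enum, while Anthropic-style APIs take an explicit thinking-token budget.

    # Sketch from memory; verify exact field names against current docs.
    from openai import OpenAI
    from anthropic import Anthropic

    # Enum-style control: "low" / "medium" / "high" reasoning effort.
    openai_client = OpenAI()
    resp = openai_client.chat.completions.create(
        model="o3-mini",                     # example reasoning model
        reasoning_effort="low",
        messages=[{"role": "user", "content": "Explain DST in one sentence."}],
    )

    # Budget-style control: cap thinking tokens explicitly.
    anthropic_client = Anthropic()
    msg = anthropic_client.messages.create(
        model="claude-sonnet-4-20250514",    # example model name
        max_tokens=4096,
        thinking={"type": "enabled", "budget_tokens": 2048},
        messages=[{"role": "user", "content": "Explain DST in one sentence."}],
    )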


You just have to plan for the worst case.


Difficult to blame them, considering censorship exists in the West too.


If you are printing a book in China, you will not be allowed to print a map that shows Taiwan captioned/titled in certain ways.

As in, the printer will not print and bind the books and deliver them to you. They won’t even start the process until the censors have looked at it.

The censorship mechanism is quick, usually less than 48 hours turnaround, but they will catch it and will give you a blurb and tell you what is acceptable verbiage.

Even if the book is in English and meant for a foreign market.

So I think it’s a bit different…


Have you ever actually looked into the history of Taiwan and why they would officially call their region the Republic of China?

Apparently they had a civil war not too long ago. Internationally, lots of territories were absorbed in weird ways in the last 100 years, amid post-European colonialism and the post-WWII divvying up of territories among the Allies. It sounds more similar to the way southerners like to print Dixie flags and reference the Confederate states despite losing the Civil War, except the American Civil War ended 161 years ago, whereas the ROC fled to the island of Taiwan and was left alone, still claiming to be the national party of China despite losing its civil war 77 years ago.

Why not look into the actual history of the Republic of China? Has it been suppressed where you live?

https://en.wikipedia.org/wiki/White_Terror_(Taiwan)


Not sure I follow how you arrived at the conclusion that the parent doesn't know the origin of the CCP's distaste for Taiwan.


Nowhere near China.

In the US almost anything can be discussed - usually only unlawful things are censored by the government.

Private entities might have their own policies, but government censorship is fairly small.


In the US, yes, by the law, in principle.

In practice, you will have loss of clients, of investors, of opportunities (banned from Play Store, etc).

In Europe, on top of that, you will get fines, loss of freedom, etc.


Others responding to my speech by exercising their own rights to free speech and free association as individuals does not violate my right to free speech. One can make an argument that corporations doing those things (e.g. your Play Store example) is sufficiently different in kind from individuals doing it -- and a lot of people would even agree with that argument! It does, however, run afoul of current First Amendment jurisprudence.

Either way, this is categorically different from China's policies on e.g. Tibet, which is a centrally driven censorship decision whose goal is to suppress factual information.


> Either way, this is categorically different from China's policies on e.g. Tibet, which is a centrally driven censorship decision whose goal is to suppress factual information.

You'll quickly run into issues and accusations of being a troll in the "free world" if you bring up inconvenient factual information on Tibet. The Dalai Lama asking a young boy to suck on his tongue for example.


Pretty sure that event was all over the western web as a gross "wtf" moment. I don't remember anyone, or any organization, that talked about it being called a troll.


It was only surprising to people because he was hyped up as a progressive figure in a liberation struggle, not a deposed autocrat.


I see you trying to equalize the argument, but it sounds like you are conflating rules, regulations, and rights with actual censorship.

Generally in the West, recent Trump admins aside, we aren't censored for talking about things. The right-leaning folks will talk about how they're getting cancelled, while cancelling journalists.

China has history that's not allowed to be taught or learned from. In America, we just sweep it under an already lumpy rug:

- Genocide of Native Americans in Florida and the resulting "Manifest Destiny" genocide of aboriginal peoples
- Slavery, and arguably the American South was entirely dependent on slave labour
- Internment camps for Japanese families during the Second World War
- Student protesters shot and killed at Kent State by the National Guard


> In Europe, on top of that, you will get fines, loss of freedom, etc.

What are you talking about?


I had prepared a long post for you, but at the end I prefer not to take the risk.

You may believe or not believe that such things exist, but the EU is more restrictive. Keep in mind that the US is a very rare animal where freedom of speech is incredibly high compared to other countries.

The best link I can point you to without taking risk: https://www.cima.ned.org/publication/chilling-legislation/



Not really, I was thinking about fake news, recent events, foreign policy, forbidden statistics, etc.

The execution is really country-specific.

Now consider that at the EU level itself, platforms can be fined up to 6% of worldwide turnover under the DSA. For sure they don't want to take any risk.

You won't go to jail for 10 years; it's more subtle: someone will come at 6 am, take your laptop and your phone, and start asking you questions.

Yes, it's "soft": only 2 days in jail, you lose your devices, and you pay legal fees, but after that, believe me, you will have the right opinion on what is true/right or not.

For what you said before, yes, criticizing certain groups or events is the speedrun to getting the police at your door ("fun" fact: in Greece and Germany, spreading gossip about politicians is a crime).

The US is way way way more free. Again, it's not like you will go to jail for a long time, but it will be a process you will certainly dislike, and one that won't be worth winning a Twitter argument.


Gossiping about politicians isn't a crime.

Spreading fake news (especially imagery) or insults falls under defamation law, politicians or not.

Germany is indeed a bit harsh on that.

But in any case you're really cherry-picking very, very rare examples. If you want to feel the US is "way way way more free" and you're convinced of that, good for you.


This assumes zero unknown unknowns, as in things that would be kept from your awareness through processes also kept from your awareness.

This might be a good year to revisit this assumption.


Oh yes it is. Anything sexual is heavily censored in the West, in particular the US.


Funnily enough, in Europe it's the opposite: news, facts and opinions tend to be censored but porn is wide open (as long as you give your ID card)


>Private entities might have their own policies, but government censorship is fairly small.

It's a distinction without a difference when these "private" entities in the West are the actual power centers. Most regular people spend their waking days at work having to follow the rules of these entities, and these entities provide the basic necessities of life. What would happen if you got banned from all the grocery stores? Put on an unemployable list for having controversial outspoken opinions?


A man was just shot in the street by the US government for filming them, while he happened to be carrying a legally owned gun. https://www.pbs.org/newshour/nation/man-shot-and-killed-by-f...

Earlier they broke down the door of a US citizen and arrested him in his underwear without a warrant. https://www.pbs.org/newshour/nation/a-u-s-citizen-says-ice-f...

Stephen Colbert has been fired for being critical of the president, after pressure from the federal government threatening to stop a merger. https://freespeechproject.georgetown.edu/tracker-entries/ste...

CBS News installed a new editor-in-chief following the above merger and lawsuit-related settlement, and she has pulled segments from 60 Minutes which were critical of the administration: https://www.npr.org/2025/12/22/g-s1-103282/cbs-chief-bari-we... (the segment leaked via a foreign affiliate and was later broadcast by CBS)

Students have been arrested for writing op-eds critical of Israel: https://en.wikipedia.org/wiki/Detention_of_R%C3%BCmeysa_%C3%...

TikTok has been forced to sell to an ally of the current administration, who is now alleged to be censoring information critical of ICE (this last one is as of yet unproven, but the fact is they were forced to sell to someone politically aligned with the president, which doesn't say very good things about freedom of expression): https://www.cosmopolitan.com/politics/a70144099/tiktok-ice-c...

Apple and Google have banned apps tracking ICE from their app stores, upon demand from the government: https://www.npr.org/2025/10/03/nx-s1-5561999/apple-google-ic...

And the government is planning on requiring ESTA visitors to install a mobile app, submit biometric data, and submit 5 years of social media data to travel to the US: https://www.govinfo.gov/content/pkg/FR-2025-12-10/pdf/2025-2...

We no longer have a functioning Bill of Rights in this country. Have you been asleep for the past year?

The censorship is not as pervasive as in China, yet. But it's getting there fast.


Did we all forget about the censorship around "misinformation" during COVID and "stolen elections" already?


Hard to agree. Not even being able to say something, because it's either illegal or there are systems to erase it instantly, is very different from people disliking (even too radically) what you say.


What prompt should I run to detect western censorship from a LLM?



Yeah, censorship in the West should give them carte blanche, difficult to blame them, what a fool.


It is in fact not difficult to blame them.

