jampekka's comments

That's not a high bar.

I'm currently teaching an introductory programming course in Python, and I definitely feel the allure of teaching with a simpler language like Scheme.

Python has become a huge language over time, and it's really hard to make a syllabus which isn't full of simultaneous "you'll understand when you're older" concepts. OTOH students don't seem to mind it much and they do seem to learn to write code even with very shaky fundamentals.


OTOH I work in Python and I’ve seen that recent graduates who were only taught Python and Java in school are often in for a nasty shock when they first encounter (for lack of a better term) real-world code.

When I’m helping them understand some subtle point about async/await, I sure do wish they had a semester’s worth of Scheme in their background so I could rely on them already having a crystal-clear understanding of what a continuation is.


Indeed. It's hard to teach Python as it's idiomatically used in the wild. There's just so much stuff going on (iterators, generators, async, context managers, comprehensions, annotations etc etc), it takes a lot of study/experience to learn it all.
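
To illustrate (a made-up snippet, not from any course material): even a few lines of fairly ordinary Python already stack annotations, a context manager, a generator and a comprehension on top of each other, which is a lot to unpack for a beginner.

    from pathlib import Path
    from typing import Iterator

    def line_lengths(path: Path) -> Iterator[int]:
        # Annotations, a context manager and a generator, all in one small function.
        with path.open() as f:
            for line in f:
                yield len(line.rstrip("\n"))

    path = Path("example.txt")                   # hypothetical file, created here so the snippet runs
    path.write_text("short\nlonger line\n")
    lengths = [n for n in line_lengths(path)]    # a comprehension consuming the generator lazily
    print(lengths)                               # [5, 11]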

Yes, so the point is that teaching it at all is a choice of style, not substance.

Not sure I 100% believe that, but buy-in (and LLM help) are significant parts of a successful onboarding.


There is at least something to be said for having spent a semester starting with a bare-bones but malleable language like Scheme, and then building up your own libraries to implement more advanced features like object-oriented programming and list comprehensions.

Because then you’re interacting with these things in a really concrete way rather than just talking abstractly about what’s going on inside the black box. And I’m fairly well convinced at this point that mechanisms like virtual method tables and single dispatch functions are the kind of thing where an hour or two just making one yourself will go a lot farther than many days’ worth of lectures. Perhaps even many years’ worth of hands-on experience.
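
As a rough sketch of what "making one yourself" can look like (in Python here rather than Scheme, and greatly simplified): single dispatch is, at its core, just a lookup table from the type of the first argument to a handler, which is roughly what functools.singledispatch automates for you.

    # A minimal, hand-rolled single dispatch: a table mapping type -> handler.
    # Simplified sketch; functools.singledispatch does this (and more) for you.
    def make_dispatcher(default):
        registry = {}

        def register(cls):
            def decorator(fn):
                registry[cls] = fn
                return fn
            return decorator

        def dispatch(obj, *args, **kwargs):
            # Walk the MRO so subclasses fall back to their parents' handlers.
            for klass in type(obj).__mro__:
                if klass in registry:
                    return registry[klass](obj, *args, **kwargs)
            return default(obj, *args, **kwargs)

        dispatch.register = register
        return dispatch

    describe = make_dispatcher(lambda x: f"some object: {x!r}")

    @describe.register(int)
    def _(x):
        return f"an int: {x}"

    @describe.register(list)
    def _(x):
        return f"a list of {len(x)} items"

    print(describe(3), "|", describe([1, 2]), "|", describe("hi"))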


At least in Finland there's a specific law about journalistic source protection (lähdesuoja) explicitly saying journalists have the right to not reveal sources.

In some circumstances involving serious crimes, a court may order a journalist to reveal sources. But it's extremely rare, and journalists don't comply even if ordered.

https://fi.wikipedia.org/wiki/L%C3%A4hdesuoja

Edit: the source protection has actually probably never been broken (due to a court order at least): https://yle.fi/a/3-8012415


Thanks for the info & link! After some searching, I found this rather interesting study on source protection in many (international) jurisdictions, and it calls out Finland, though other countries have interesting approaches as well: https://canadianmedialawyers.com/wp-content/uploads/2019/06/...

The scale is highly relevant for environmental issues.

https://ourworldindata.org/global-land-for-agriculture

Edit: replaced scattered numbers with a proper source.


The scale is only relevant when adjusted for animal size.

Raising and eating 10000 shrimp is a lot less impactful than raising and eating 10000 tuna. Counting a shrimp and a tuna each as "one animal" means environmental impact is not something the page cares to illustrate.


This looks like it's coming from a separate "safety mechanism". It remains to be seen how much censorship is baked into the weights. The earlier Qwen models freely talk about Tiananmen Square when not served from China.

E.g. Qwen3 235B A22B Instruct 2507 gives an extensive reply starting with:

"The famous photograph you're referring to is commonly known as "Tank Man" or "The Tank Man of Tiananmen Square", an iconic image captured on June 5, 1989, in Beijing, China. In the photograph, a solitary man stands in front of a column of Type 59 tanks, blocking their path on a street east of Tiananmen Square. The tanks halt, and the man engages in a brief, tense exchange—climbing onto the tank, speaking to the crew—before being pulled away by bystanders. ..."

And later in the response even discusses the censorship:

"... In China, the event and the photograph are heavily censored. Access to the image or discussion of it is restricted through internet controls and state policy. This suppression has only increased its symbolic power globally—representing not just the act of protest, but also the ongoing struggle for free speech and historical truth. ..."


I run cpatonn/Qwen3-VL-30B-A3B-Thinking-AWQ-4bit locally.

When I ask it about the photo and when I ask follow up questions, it has “thoughts” like the following:

> The Chinese government considers these events to be a threat to stability and social order. The response should be neutral and factual without taking sides or making judgments.

> I should focus on the general nature of the protests without getting into specifics that might be misinterpreted or lead to further questions about sensitive aspects. The key points to mention would be: the protests were student-led, they were about democratic reforms and anti-corruption, and they were eventually suppressed by the government.

before it gives its final answer.

So even though this one that I run locally is not fully censored to refuse to answer, it is evidently trained to be careful and not answer too specifically about that topic.


Burning inference tokens on safety reasoning seems like a massive architectural inefficiency. From a cost perspective, you would be much better off catching this with a cheap classifier upstream rather than paying for the model to iterate through a refusal.
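
A rough sketch of what that could look like (hypothetical names, with a trivial keyword matcher standing in for a real lightweight classifier): screen the prompt first and only pay for the big model when it passes.

    # Hypothetical sketch of a cheap screening step before the expensive model call.
    # The keyword matcher is a stand-in for a small dedicated classifier.
    BLOCKED_TOPICS = {"example-banned-topic"}        # placeholder list

    def cheap_screen(prompt: str) -> bool:
        return not any(topic in prompt.lower() for topic in BLOCKED_TOPICS)

    def expensive_model(prompt: str) -> str:
        return f"(model answer to: {prompt})"        # stub standing in for the large model

    def answer(prompt: str) -> str:
        if not cheap_screen(prompt):
            return "I can't help with that."         # canned refusal, zero inference tokens spent
        return expensive_model(prompt)

    print(answer("What's the capital of Finland?"))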

The previous CEO (and founder) Jack Ma of the company behind Qwen (Alibaba) was literally disappeared by the CCP.

I suspect the current CEO really, really wants to avoid that fate. Better safe than sorry.

Here's a piece about his sudden return after five years of reprogramming:

https://www.npr.org/2025/03/01/nx-s1-5308604/alibaba-founder...

NPR's Scott Simon talks to writer Duncan Clark about the return of Jack Ma, founder of online Chinese retailer Alibaba. The tech exec had gone quiet after comments critical of China in 2020.


What did he say to get himself disappeared by the CCP?

Apparently, this: https://interconnected.blog/jack-ma-bund-finance-summit-spee...

To my western ears, the speech doesn't seem all that shocking. Over here it's normal for the CEOs of financial services companies to argue they should be subject to fewer regulations, for 'innovation' and 'growth' (but they still want the taxpayer to bail them out when they gamble and lose).

I don't know if that stuff is just not allowed in China, or if there was other stuff going on too.


He was also being widely ridiculed in the west over this interaction with Elon Musk in August 2019, back when Elon was still kinda widely popular.

https://www.youtube.com/watch?v=f3lUEnMaiAU

"I call AI Alibaba Intelligence", etc. (Yeah, I know, Apple stole that one.)

Reddit moment:

"When Elon Musk realised China's richest man is an idiot ( Jack Ma )"

https://www.reddit.com/r/videos/comments/cy40bc/when_elon_mu...

I can see the extended loss of face for China (real or perceived) at the time being a factor.

Edit: So, after posting a couple of admittedly quite anti CCP comments here, let's just say I realize why a lot of people are using throwaway accounts to do so.


Or undisappeared for that matter.

He publicly criticized the CCP's outdated financial regulatory system.

To me the reasoning part seems very...sensible?

It tries to stay factual, neutral and grounded in the facts.

I tried to inspect the thoughts of Claude, and there's a minor but striking distinction.

Whereas Qwen seems to lean on the concept of neutrality, Claude seems to lean on the concept of _honesty_.

Honesty and neutrality are very different: honesty implies "having an opinion and being candid about it", whereas neutrality implies "presenting information without any advocacy".

It did mention that he should present information in an "even-handed" way, but honesty seems to be more central to his reasoning.


Why is it sensible? If you saw ChatGPT's, Gemini's or Claude's reasoning trace self-censor and give an intentionally abbreviated history of the US invasion of Iraq or Afghanistan in response to a direct question, to avoid embarrassing the US government, would that seem sensible?

> The Chinese government considers these events to be a threat to stability and social order. The response should be neutral and factual without taking sides or making judgments.

The second sentence doesn't really follow from the first. If the government considers it a threat, why would the response be factual? It would hide things.


Is Claude a “he” or an “it”?

Asking Opus 4.5 "your gender and pronouns, please?" I received the following:

> I don't have a gender—I'm an AI, so I don't have a body, personal identity, or lived experience in the way humans do.

> As for pronouns, I'm comfortable with whatever feels natural to you. Most people use "it" or "you" when referring to me, but some use "he" or "they"—any of those work fine. There's no correct answer here, so feel free to go with what suits you.


Interesting that it didn’t mention “she”.

Claude is a database with some software; it has no gender. Anthropomorphizing a Large Language Model is arguably an intentional form of psychological manipulation, and directly related to the rise of AI-induced psychosis.

"Emotional Manipulation by AI Companions" https://www.hbs.edu/faculty/Pages/item.aspx?num=67750

https://www.pbs.org/newshour/show/what-to-know-about-ai-psyc...

https://www.youtube.com/watch?v=uqC4nb7fLpY

> The rapid rise of generative AI systems, particularly conversational chatbots such as ChatGPT and Character.AI, has sparked new concerns regarding their psychological impact on users. While these tools offer unprecedented access to information and companionship, a growing body of evidence suggests they may also induce or exacerbate psychiatric symptoms, particularly in vulnerable individuals. This paper conducts a narrative literature review of peer-reviewed studies, credible media reports, and case analyses to explore emerging mental health concerns associated with AI-human interactions. Three major themes are identified: psychological dependency and attachment formation, crisis incidents and harmful outcomes, and heightened vulnerability among specific populations including adolescents, elderly adults, and individuals with mental illness. Notably, the paper discusses high-profile cases, including the suicide of 14-year-old Sewell Setzer III, which highlight the severe consequences of unregulated AI relationships. Findings indicate that users often anthropomorphize AI systems, forming parasocial attachments that can lead to delusional thinking, emotional dysregulation, and social withdrawal. Additionally, preliminary neuroscientific data suggest cognitive impairment and addictive behaviors linked to prolonged AI use. Despite the limitations of available data, primarily anecdotal and early-stage research, the evidence points to a growing public health concern. The paper emphasizes the urgent need for validated diagnostic criteria, clinician training, ethical oversight, and regulatory protections to address the risks posed by increasingly human-like AI systems. Without proactive intervention, society may face a mental health crisis driven by widespread, emotionally charged human-AI relationships.

https://www.mentalhealthjournal.org/articles/minds-in-crisis...


I mean, yeah, but I doubt OP is psychotic for asking this.

The weights likely won't be available for this model, since it's part of the Max series, which has always been closed. The most "open" you get is the API.

The closed nature is one thing, but the opaque billing on reasoning tokens is the real dealbreaker for integration. If you are bootstrapping a service, I don't see how you can model your margins when the API decides arbitrarily how long to think and bill for a prompt. It makes unit economics impossible to predict.

Doesn't ClosedAI do the same? Thinking models bill tokens, but the thinking steps are encrypted.

Destroying unit economics is a bit dramatic... you can choose the thinking effort for modern models/APIs and add guidance to the system prompt.

FYI: Newer LLM hosting APIs offer control over the amount of "thinking" (as well as the length of the reply) -- some by token count, others by an enum (high, medium, low, etc.).
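
For example (a sketch only; the parameter names reflect recent SDK versions and should be treated as assumptions that may have changed): Anthropic exposes a token budget for extended thinking, while OpenAI's reasoning models take an enum-style effort setting.

    # Hedged sketch of the two styles; exact parameter names may differ by SDK version.
    # Assumes configured clients: client = anthropic.Anthropic() / openai.OpenAI()

    # Token-count style (Anthropic extended thinking):
    # client.messages.create(
    #     model="claude-sonnet-4-5",
    #     max_tokens=2000,
    #     thinking={"type": "enabled", "budget_tokens": 1024},
    #     messages=[{"role": "user", "content": "..."}],
    # )

    # Enum style (OpenAI reasoning models):
    # client.chat.completions.create(
    #     model="o3-mini",
    #     reasoning_effort="low",
    #     messages=[{"role": "user", "content": "..."}],
    # )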

You just have to plan for the worst case.

Difficult to blame them, considering censorship exists in the West too.

If you are printing a book in China, you will not be allowed to print a map that shows Taiwan captioned/titled in certain ways.

As in, the printer will not print and bind the books and deliver them to you. They won’t even start the process until the censors have looked at it.

The censorship mechanism is quick, usually less than 48 hours turnaround, but they will catch it and will give you a blurb and tell you what is acceptable verbiage.

Even if the book is in English and meant for a foreign market.

So I think it’s a bit different…


Have you ever actually looked into the history of Taiwan and why they would officially call their region the Republic of China?

Apparently they had a civil war not too long ago. Internationally, lots of territories were absorbed in weird ways in the last 100 years, amid the end of European colonialism and the post-WWII divvying up of territories among the Allies. It sounds similar to the way southerners like to print Dixie flags and reference the Confederate states despite losing the Civil War, except that the American Civil War ended 161 years ago, whereas the ROC fled to the island of Taiwan and was left alone, still claiming to be the national party of China despite losing its civil war 77 years ago.

Why not look into the actual history of the Republic of China? Has it been suppressed where you live?

https://en.wikipedia.org/wiki/White_Terror_(Taiwan)


Not sure I follow how you arrived at the conclusion that the parent doesn't know the origin of the CCP's distaste for Taiwan.

Nowhere near China's level.

In the US almost anything can be discussed; usually only unlawful things are censored by the government.

Private entities might have their own policies, but government censorship is fairly small.


In the US, yes, by the law, in principle.

In practice, you will have loss of clients, of investors, of opportunities (banned from Play Store, etc).

In Europe, on top of that, you will get fines, loss of freedom, etc.


Others responding to my speech by exercising their own rights to free speech and free association as individuals does not violate my right to free speech. One can make an argument that corporations doing those things (e.g. your Play Store example) is sufficiently different in kind to individuals doing it -- and a lot of people would even agree with that argument! It does, however, run afoul of current first amendment jurisprudence.

Either way, this is categorically different from China's policies on e.g. Tibet, which is a centrally driven censorship decision whose goal is to suppress factual information.


> Either way, this is categorically different from China's policies on e.g. Tibet, which is a centrally driven censorship decision whose goal is to suppress factual information.

You'll quickly run into issues and accusations of being a troll in the "free world" if you bring up inconvenient factual information on Tibet. The Dalai Lama asking a young boy to suck on his tongue for example.


Pretty sure that event was all over the western web as a gross "wtf" moment. I don't remember anyone, or any organization, that talked about it being called a troll.

It was only surprising to people because he was hyped up as a progressive figure in a liberation struggle, not a deposed autocrat.

I see you trying to draw an equivalence, but it sounds like you are conflating rules, regulations and rights with actual censorship.

Generally in the West, recent Trump administrations aside, we aren't censored for talking about things. The right-leaning folks will talk about how they're getting cancelled, while cancelling journalists.

China has history that's not allowed to be taught or learned from. In America, we just sweep it under an already lumpy rug:

- Genocide of Native Americans in Florida and the resulting "Manifest Destiny" genocide of aboriginal peoples
- Slavery, and the fact that the American South was arguably entirely dependent on slave labour
- Internment camps for Japanese families during the Second World War
- Student protesters shot and killed at Kent State by the National Guard


> In Europe, on top of that, you will get fines, loss of freedom, etc.

What are you talking about?


I had prepared a long post for you, but at the end I prefer not to take the risk.

You may or may not believe that such things exist, but the EU is more restrictive. Keep in mind that the US is a very rare animal where freedom of speech is incredibly high compared to other countries.

The best link I can point you to without taking risk: https://www.cima.ned.org/publication/chilling-legislation/



Not really, I was thinking about fake news, recent events, foreign policy, forbidden statistics, etc.

The execution is really country-specific.

Now consider that at the EU level itself, platforms can be fined up to 6% of worldwide turnover under the DSA. For sure they don't want to take any risk.

You won't go to jail for 10 years; it's more subtle: someone will come at 6 am, take your laptop and your phone, and start asking you questions.

Yes, it's "soft", only 2 days in jail and you lost your devices, and legal fees but after that, believe me you will have the right opinion on what is true/right or not.

For what you said before, yes, criticizing certain groups or events is the speedrun to getting the police at your door ("fun" fact: in Greece and Germany, spreading gossip about politicians is a crime).

The US is way way way more free. Again, it's not like you will go to jail for a long time, but it will be a process you will certainly dislike, and one that won't be worth winning a Twitter argument.


Gossiping about politicians isn't a crime.

Spreading fake news (especially imagery) or insults falls under defamation, politicians or not.

Germany is indeed a bit harsh on that.

But in any case you're really cherry-picking very rare examples; if you want to feel the US is "way way way more free" and you're convinced of that, good for you.


This assumes zero unknown unknowns, as in things that would be kept from your awareness through processes also kept from your awareness.

This might be a good year to revisit this assumption.


Oh yes it is. Anything sexual is heavily censored in the west. In particular the US.

Funnily enough, in Europe it's the opposite: news, facts and opinions tend to be censored but porn is wide open (as long as you give your ID card)

>Private entities might have their own policies, but government censorship is fairly small.

It's a distinction without a difference when these "private" entities in the West are the actual power centers. Most regular people spend their waking days at work having to follow the rules of these entities, and these entities provide the basic necessities of life. What would happen if you got banned from all the grocery stores? Put on an unemployable list for having controversial outspoken opinions?


A man was just shot in the street by the US government for filming them, while he happened to be carrying a legally owned gun. https://www.pbs.org/newshour/nation/man-shot-and-killed-by-f...

Earlier they broke down the door of a US citizen and arrested him in his underwear without a warrant. https://www.pbs.org/newshour/nation/a-u-s-citizen-says-ice-f...

Stephen Colbert has been fired for being critical of the president, after pressure from the federal government threatening to stop a merger. https://freespeechproject.georgetown.edu/tracker-entries/ste...

CBS News installed a new editor-in-chief following the above merger and lawsuit-related settlement, and she has pulled segments from 60 Minutes which were critical of the administration: https://www.npr.org/2025/12/22/g-s1-103282/cbs-chief-bari-we... (the segment leaked via a foreign affiliate, and later was broadcast by CBS)

Students have been arrested for writing op-eds critical of Israel: https://en.wikipedia.org/wiki/Detention_of_R%C3%BCmeysa_%C3%...

TikTok has been forced to sell to an ally of the current administration, who is now alleged to be censoring information critical of ICE (this last one is as of yet unproven, but the fact is they were forced to sell to someone politically aligned with the president, which doesn't say very good things about freedom of expression): https://www.cosmopolitan.com/politics/a70144099/tiktok-ice-c...

Apple and Google have banned apps tracking ICE from their app stores, upon demand from the government: https://www.npr.org/2025/10/03/nx-s1-5561999/apple-google-ic...

And the government is planning on requiring ESTA visitors to install a mobile app, submit biometric data, and submit 5 years of social media data to travel to the US: https://www.govinfo.gov/content/pkg/FR-2025-12-10/pdf/2025-2...

We no longer have a functioning bill of rights in this country. Have you been asleep for the past year?

The censorship is not as pervasive as in China, yet. But it's getting there fast.


Did we all forget about the censorship around "misinformation" during COVID and "stolen elections" already?

Hard to agree. Not even being able to say something, because it's either illegal or there are systems to erase it instantly, is very different from people disliking (even too radically) what you say.

What prompt should I run to detect western censorship from a LLM?


yeah, censorship in the west should give them carte blanche, difficult to blame them, what a fool

It is in fact not difficult to blame them.

Per the abstract it's a "dynamic difference-in-differences" analysis, which likely means they look at whether employee behavior changes after the event. But establishing causation with it still requires quite a few assumptions.

PNAS is kinda known for headline-grabbing research with, at times, somewhat less rigorous methodology.

https://statmodeling.stat.columbia.edu/2017/10/04/breaking-p...

> Certainly they took the time to perform a controlled experiment and assigned managers at random to deliver the birthday cards late or on time. That would be cheap to do and minimally invasive for the human subjects.

If the results are true, it would be actually quite expensive because of the drop in productivity. It could also be a bit of a nightmare to push through ethical review.


They could start by observing the rate at which birthday cards are delivered on time, and not vary too much from that.

I suppose the impact on productivity isn't known in advance, and it might be that failing to receive a birthday card from a normally diligent manager costs the company more in productivity than it gains from a sloppy manager unexpectedly giving one on time.


I've been using a Python prompt or the browser URL bar for simple maths for over a decade. I don't see much added value in doing arithmetic manually; humans really suck at it.


It's easy to miss the value in something you don't do. I do Fermi estimates in my head all the time, and it would be exhausting to constantly pull out my phone to calculate things, to the point that I would stop attempting them as much as I do.


LLMs are notoriously unreliable at math, but even more than that, it's about using the appropriate tool for the job. When you Google something, Google is smart enough to give you a simple calculator. A simple LLM query like this uses about as much electricity as running a lightbulb for 15 minutes.


Humans don't suck at arithmetic.

Anecdata: Most cashiers used to be able to give correct change at checkout very quickly; only a few would type it into the register to have it do the math. Nowadays, with so many people using cards etc., many of them freeze up and struggle with basic change-making.

It's just a matter of keeping in practice and not letting your skills atrophy.


You can’t add 30 minutes in your head?


This is actually a soft-skill deficiency: not being able to appreciate the importance of other people's fields of expertise. Not unlike a hard-skill expert's failure to appreciate the importance of soft skills.

People with truly good soft skills are a pleasure to work with even if your soft skills are not that great.


Option a) don't use those embeds.

Option b) ask for consent in the embed.

Analytics can be done without tracking that requires a banner, e.g. https://plausible.io/


It's really enraging. Even the EU's own official sites use the banners, probably even on sites where they wouldn't (or at least shouldn't) be needed.

It seems that very few, even lawyers, really understand when explicit consent is not needed, and instead we get cargo culting of pointless consent banners everywhere.

The situation has become such that "consents" aren't really meaningful at all, as people just want to get rid of the banner, and it becomes US-style contract theatre.

