Hacker News | ranyume's comments

This might be off-topic, but it's on-topic about child safety: I'm surprised people are being so myopic about age verification. Age verification should be banned, but people ignore that most widely used online services already ask for your age and act accordingly: Twitter, YouTube, Google in general, any online marketplace. They already have so much data on their users, and they optimize their algorithms for those age groups in opaque ways.

So yeah, age verification should be taken down, along with the data mining these companies do and the opaque tuning of their algorithms. It baffles me: people are concerned about their children's DMs but not about what companies serve them and what they do with their data.


> people are concerned about their children's DMs but not about what companies serve them and what they do with their data.

Hogwash.

Where are these mythical people who aren’t concerned with both?


I thought it was common knowledge to just set your birthdate to 1970 or something

> Age verification should be banned

Why?

> They already got so much data on their users

There are a variety of ways (see "Verifiable Credentials") that ages can be verified without handing over any data other than "Is old enough" to social media services.


Age verification obliterates anonymity on the internet. If everything you do _can_ be tracked by the government, it _will_ be.

That allows for more effective propaganda and electoral control, and sets fire to the very idea of a government _representing_ anyone.


> Age verification obliterates anonymity on the internet.

How so?

Please explain in detail, because there are already schemes such as "verifiable credentials" which allow people to prove they are of age without handing over ID to online services.
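For what it's worth, the idea can be sketched in a few lines. This is a toy illustration only: the function names and shared HMAC key are made up, and a real verifiable-credential scheme would use public-key signatures or zero-knowledge selective disclosure rather than a secret shared between issuer and verifier.

```python
# Toy sketch: an issuer (e.g. an ID authority) signs only the single
# claim "over_18"; the website verifies the signature without ever
# seeing a name or birthdate. The shared HMAC key below stands in for
# the real cryptographic machinery.
import hashlib
import hmac
import json

ISSUER_KEY = b"issuer-secret"  # placeholder; a real issuer signs with a private key


def issue_age_token(over_18: bool) -> dict:
    """Issuer side: sign a credential containing only the age claim."""
    claim = json.dumps({"over_18": over_18}, sort_keys=True)
    sig = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}


def verify_age_token(token: dict) -> bool:
    """Service side: learn only whether the holder is of age."""
    expected = hmac.new(ISSUER_KEY, token["claim"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False  # tampered or forged token
    return bool(json.loads(token["claim"]).get("over_18", False))
```

The point is what the service receives: a signed "over 18" bit and nothing else — no name, no birthdate, no ID number.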


Ok, and? Presenting your ID at a number of IRL establishments also heavily reduces anonymity.

It's a slippery slope.

This is the next two steps into 1984.

Once you start mandating this, there's no going back.

The next generation will start associating wrongthink with government IDs. (Wait, we already do that, right?)


Read another book.

> It's a slippery slope.

Is it? I thought that was a logical fallacy?

> This is the next two steps into 1984.

How so?

> Once you start mandating this, there's no going back.

> The next generation will start associating wrongthink with government IDs.

Could you provide some more details on why you think this? For a start I talked about a scheme in which you don't hand over ID.


A slippery slope can be a valid argument if you provide the actual reasoning behind it; as I was taught, it can be used as deductive argumentation (though that doesn't say much). On its own, it is a fallacy.

I don't see how verifiable credentials with zero-knowledge proofs lead there, however.


The Party doesn't care about the Proles, only the members of the Outer Party.

I think that it's rather funny that people like to appeal to 1984 as if the only point of Mr. Orwell was that surveillance is bad, missing the entire point about stuff like the control of the language or the idea that the only self-justification of the (Inner) Party is power for the sake of power (see also: The Theory and Practice of Oligarchical Collectivism).

I'd even go as far as to say that if "telescreens are horrible" is the only thing that someone takes away from 1984, they've frankly missed the point.


Monitoring children's DMs is the responsibility of the parents, not megacorps. If a parent wants to install a keylogger or screen recorder on their child's PC, that's their decision. But Google should not be able to. Neither should... literally anyone else except maybe an employer on a work-provided device.

> Monitoring children's DMs is the responsibility of the parents, not megacorps

Absolutely. But what responsibilities do megacorps have? Right now everyone seems to avoid this question and make do with megacorps not being responsible at all. That means: "we'll allow megacorps to stay as they are and take no responsibility for the effects they cause in society". Instead of making them take responsibility, we're collecting everyone's data and calling it a day by banning children from social networks... and that's because there are many interests involved (none of them related to child development and safety).


> But what responsibilities do megacorps have? Right now, everyone seems to avoid this question

Clear, simple, direct: Whatever was required of The Bell Telephone Company and nothing more.


So there should be a human operator manually gatekeeping every individual request to connect with another endpoint?

It's a good thing those human operators couldn't listen in to whichever conversation they wanted.


Human operators were not required of The Bell Telephone Company by law. Bell switched to mechanical switching stations as soon as doing so was economically advantageous.

(Reconsider my post. I'm arguing for no regulation.)


I'd say that at a minimum, social networks need to be required to show how their algorithms work and to give users control over their data. Users must be able to know why a piece of content was served to them. Social networks are now so pervasive in society, affecting it and molding it to unknown interests, that this is the bare minimum for a free society.

Ideally, users should be able to modify the algorithm, so they can get just what they want, while simultaneously maximizing free speech. If something isn't illegal, it shouldn't be hidden or removed.


> social networks need to be required to show how their algorithm works

Hypothetically speaking: What if it's a neural network in which each user has his/her own unique weights which are undergoing frequent retraining?

Would it not be an undue burden to necessitate the release of the weights every time they change?

Also, what value would the weights have? We haven't yet hit the point of having neural networks with interpretability.

Wouldn't enforcing algorithmic interpretability additionally be an undue burden?

> Users must be able to know why a piece of content was served to them.

What if the authors of the code are unable to tell you why?


I don’t remember reading about ads in phone calls, nor about the complete mapping of customers' behavior for use in contexts other than the phone call.

The apples-to-oranges in this comparison is probably top five on HN ever.


> But what responsibilities do megacorps have?

Fake and scam ads.

They literally profit from those ads. When an ad distributes malware or runs a scam, they don't take any responsibility.


> But what responsibilities do megacorps have?

They should have a responsibility of transparency, accountability and empathy towards users. They should work for the user and in the interests of the user. But multiple constraints make this impossible in practice.


Megacorps should be compelled to, and rewarded for, allowing parents to monitor their children’s DMs.

Parents shouldn't give their child access to a device that allows DMs.

That said, these platforms are making it impossible for parents to monitor anything. They're literally designed to profit off addiction in children.


Why? Plenty of children benefit from talking to other people. Some children need careful monitoring, and some children shouldn't be allowed to use DMs, but it's not universal and should be up to the parents.

What kind of application is not targeted at both teens and adults?

YouTube, Twitter, Bluesky, WhatsApp? Every app with a social aspect will be used by teens. And no, TikTok is not "only for teens" or "specially targeted at teens"; nowadays everyone uses it and creates content on it.


Came here to post this.

If you run (say) a restaurant, you get big spikes in business from TikTok videos in ways you don't get from Facebook or Instagram or others.

TikTok is the platform everyone is on right now.


A company that intends to offer a helpful assistant might find that the "assistant character" of an LLM is not adequate for being a helpful assistant.


To support GP's point: I have Claude connected to a database and wanted it to drop a table.

Claude is trained to refuse this, despite the scenario being completely safe since I own both parts! I think this is the “LLMs should just do what the user says” perspective.

Of course this breaks down when you have an adversarial relationship between LLM operator and person interacting with it (though arguably there is no safe way to support this scenario due to jailbreak concerns).


It's still not clear if the Assistant character is the best at completing tasks.


There are certain things/LLM phenomena that haven't changed since their introduction.


South America is a big place, and there are a lot of countries. The situation isn't that simple. For example, Argentina is historically the most anti-US country in all of South America, yet its government and its supporters celebrated the US attack, calling everyone who opposed it "communists" (all this while the government allows Chinese goods to be massively imported). Argentina's government will be trying to build a bloc of US-friendly countries (and to be its leader, of course).

Also, for context: Argentine elites are pro-US, but not as much as Brazil's elites and their supporters (who wear the US flag at protests).


How do you know they're not doing anything? Do they have the power to do anything at all beyond virtue signaling?


Personally I'm interested in the prospect of enabling ways to change the learning process of a model based on topological structures.


I like this parallel. No joke intended.


> will often adopt a wrong idea of what's okay

I wonder who gets to decide what's okay.


Basic morality and human rights


How is it moral to claim "Murdering people (in fiction) is fine," then?


From a universal perspective, there is no such thing as "basic morality". Only what the most recent cultural norms of the largest (or strongest) group of people say.

