kevincox's comments

I actually think this was a good thing. Manipulating images incredibly convincingly was already possible, but the cost was high (many hours of highly skilled work), so many people assumed that most images they were seeing were "authentic" without much consideration. By making these fake images ubiquitous we are forcing people to quickly learn that they can't believe what they see on the internet, and that tracking down sources and deciding who you trust is critically important. People have always said that you can't believe what you see on the internet, but unfortunately many people have managed to ignore this advice without major issue. This wave will force them to take that advice to heart by default.

I remember telling my parents at a young age that I couldn't be sure Ronald Reagan was real, because I'd only ever seen him on TV and never in real life, and I knew things on TV could be fake.

That was the beginning of my journey into understanding what proper verification/vetting of a source is. It's been going on for a long time and there are always new things to learn. This should be taught to every child, starting early on.


I agree. Too many adults are fooled by fake news and propaganda and false contexts. And CNN and Fox are more than happy to take advantage of this.

My personal rule of thumb is if it generates outrage, it's probably fake, or at least a fake interpretation. I know that outrageous stuff actually happens pretty often, so I'll dig into things I find interesting. But most of the time it's all just garbage for clicks.


I used to also have this optimistic take, but over time I think the reality is that most people will instead just distrust unknown online sources and fall into the mental shortcuts of confirmation bias and social proof. Net effect will be even more polarization and groupthink.

> By making these fake images ubiquitous we are forcing people to quickly learn

That's quite a high opinion of the self-improvement ability of your Average Joe. This kind of behavior only comes with an awareness, previously learned, and an alertness of mind. You need the population at large to be able to do this. How, if not by, say, teaching this in schools and waiting for the next generation to reach adulthood, would you expect this to happen?


I agree that improvement for the Average Joe will be very hard. I also think that paying more attention to teaching the younger generation is vitally important. But mostly I don't see an alternative. I don't think we can protect people from fake information without giving up our freedom, and that isn't a viable alternative in my mind. So what is left but trying our hardest to teach people to think critically?

Our institutions have been trying to get our kids to think critically for a while. At least when I was in school, we didn't focus a lot on memorization (sometimes we did, like memorizing the times tables or the periodic table). My teachers tried to instill in us an understanding of the concepts, something I took for granted. Many of my classmates have gone on to become lawyers, doctors, and other prestigious professionals.

But I feel like we live in a different time now. I hear teachers tell stories about school admin siding with parents instead of teachers, and the kids aren't learning anything. Anecdotally of course.

I think our teachers really want the kids to think critically. But parents and schools don't seem to value that anymore.


> By making these fake images ubiquitous we are forcing people to quickly learn that they can't believe what they see on the internet and tracking down sources and deciding who you trust is critically important.

Has this thought process ever worked in real life? I know plenty of seniors who still believe everything that comes out of Facebook, be it AI or not, and before that it was the TV, radio, newspapers, etc.

Most people choose to believe, which is why they have a hard time confronting facts.


> I know plenty of seniors

And not just seniors. I see people of all ages who are perfectly happy to accept artificially generated images and video so long as it plays to their existing biases. My impression is that the majority of humanity is not very skeptical by default, and unwilling to learn.


Yes. People willingly accept made-up text (stories) if it fits their world view, and with words we always knew that they could be untrue. Why should it be different for images/audio/video?

As they say, people have accepted made up religions for thousands of years.

When it comes to graphic content on the internet, I usually consume it for entertainment purposes. I didn't care where it came from before and I don't care today either. Low-quality content exists in both categories, and it's a bit easier to spot in AI-generated material, so that's actually a bonus.

I feel like there are one or two generations of people who are tech savvy and not 100% gullible when it comes to online things. Older and younger generations are both completely lost, imho; in a blind test you wouldn't discern a monkey from a human scrolling TikTok & co.

How so? This "tech savvy and not 100% gullible" generation gave birth to a political landscape dominated by online ragebait.

Boomers used to tell us to never trust anything online, and now they send their life savings to "Brad Pitt".

New generations get unlimited brain rot delivered through infinite scroll, don't know what a folder is, think everything is "an app", and keep falling for "technology will free us from work and cure cancer".

There was a sweet spot during which you could grow alongside the internet at a pace that was still manageable, when companies and scammers weren't trying so hard to rob you of your time, money, and attention.


And if they don't?

Your post seems a little naive to me. A lot of people are just not interested in putting in the work or confronting their own confirmation bias, and there's an oversupply of bad actors who will deliberately generate fake imagery for either deception or exhaustion. Many people are just not on a quest for truth and are more interested in the activation potential of images or allegations than in their factual reliability.


In reality: millions of boomers are scrolling FB this very minute reacting to the most obviously fake rage/surprise/love bait AI slop you've ever seen.

They were scrolling through fake bait long before generative AI.

But now it is even harder to distinguish.

I mostly agree with your points.

> > That users won't be able to install what they want

> No, sideloading will still work, but it won't work if the APK isn't signed by someone in the Google developer registry.

So the user can't install what they want. They can only install stuff signed by developers Google has "approved".

Yes, in the happy situation this is everything except for developers that Google has revoked. But technically it is only approved developers.


That's pedantically fair. I broke up a longer statement:

> That users won't be able to install what they want and that they would need a google account to install apps

It was split up because "need a Google account to install apps" is strictly untrue, but "won't be able to install what they want" is more nuanced.

I did clearly say, "it won't work if the APK isn't signed by someone in the Google developer registry".

So, it depends on what the user wants.

That is, if they're running certified Android; otherwise it doesn't matter.

It is only for registered developers, so of course that very much depends on the registration system.


Yeah, I get you. I think the main misunderstanding from the original comment is that the *user* won't need a Google account, only the *developer* (signer to be technical) will.

It actually is a marginal expense. There are two main reasons.

For music videos there are different licensing terms for audio listening vs. music videos. So if they don't appease the licensor, their contract will be less favourable.

And of course ads pay less for people who aren't looking (although this is technically lost revenue, not an expense).


> if you just paste in a youtube channel URL as the feed, NNW sorts it out and creates a feed for you.

While I don't doubt that NNW has great UX, feed auto-discovery is a table stakes feature for any RSS client.


From reading a little bit of the code it sounds like Roundcube's sanitizer is much closer to a blacklist than a whitelist. Any attempt to sanitize HTML with a blacklist is doomed to failure. Even if you read the current HTML spec (including referenced specs like SVG) and do a perfect job there are additions over time that you will be vulnerable to.

Probably any unknown element/attribute pair should be stripped by default. And that's still not considering different "namespaces" such as SVG and MathML that you need to be careful with.
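To make the whitelist idea concrete, here is a minimal sketch using only Python's standard library. The tag and attribute lists are illustrative, not a recommendation, and a real sanitizer (unlike this toy) would also need to handle SVG/MathML namespaces, CSS, comments, and a proper URL-scheme policy beyond the simple `href` check shown:

```python
from html import escape
from html.parser import HTMLParser

# Everything not explicitly listed here is stripped (whitelist, not blacklist).
ALLOWED_TAGS = {"p", "b", "i", "em", "strong", "a", "br"}
ALLOWED_ATTRS = {"a": {"href"}}  # unknown element/attribute pairs are dropped


class WhitelistSanitizer(HTMLParser):
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag not in ALLOWED_TAGS:
            return  # unknown element: strip the tag entirely
        kept = []
        for name, value in attrs:
            if name not in ALLOWED_ATTRS.get(tag, set()):
                continue
            # Even allowed attributes need value checks (e.g. javascript: URLs).
            if name == "href" and not (value or "").startswith(("http://", "https://")):
                continue
            kept.append(f' {name}="{escape(value or "", quote=True)}"')
        self.out.append(f"<{tag}{''.join(kept)}>")

    def handle_endtag(self, tag):
        if tag in ALLOWED_TAGS:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(escape(data))


def sanitize(html_text: str) -> str:
    s = WhitelistSanitizer()
    s.feed(html_text)
    s.close()
    return "".join(s.out)
```

The key property is that anything the sanitizer doesn't recognize is dropped, so new elements added to the HTML spec later are safe by default, which is exactly what a blacklist can't guarantee.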


I am very pro public transit. But there is still a place for cars (ideally mostly taxis). Going to more rural areas or when you need to carry more stuff. I think an ideal society would have both urban transit, inter-city transit and taxis for the other trips and going out into the country.


Driving is always a balance between speed and safety. If you want ultimate safety you just sit in the driveway. But obviously that isn't useful. So functionally one of the most important things a self-driving system will decide is "how fast is it safe to drive right now". Slower is not always better and it has to balance safety with productivity.


I would still hope for it to translate most of the code with a couple of asm blocks. But maybe the density of them was too high and some heuristic decided against it?


It would have been an interesting ending to replace the instructions and see if Reko could be made to output code for the function.


Because unless your TTL is exceptionally long you will almost always have a sufficient supply of new users to balance. Basically you almost never need to move old users to a new target for balancing reasons. The natural churn of users over time is sufficient to deal with that.

Failover is different and more of a concern, especially if the client doesn't respect multiple returned IPs.
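For clients that do respect multiple returned IPs, the fix is to try each address in turn instead of giving up after the first one times out. A sketch of that behavior (Python's own `socket.create_connection` does essentially this internally; the timeout value is just an example):

```python
import socket


def connect_with_fallback(host: str, port: int, timeout: float = 3.0) -> socket.socket:
    """Try every address DNS returned, in order, instead of only the first.

    getaddrinfo returns all A/AAAA records for the name; a client that only
    tries the first one turns a single dead host into a full outage.
    """
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
        host, port, type=socket.SOCK_STREAM
    ):
        s = socket.socket(family, socktype, proto)
        s.settimeout(timeout)
        try:
            s.connect(addr)  # a dead host fails here after `timeout` seconds
            return s
        except OSError as e:
            last_err = e
            s.close()
    raise ConnectionError(f"all addresses for {host} failed") from last_err
```

Even with this fallback, some fraction of connections to a record containing a dead host will eat the full connect timeout before succeeding, which is why round-robin DNS alone is not a great HA story.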


You are misunderstanding how HA works with DNS TTLs.

Now there are multiple kinds of HA, so we'll go over a bunch of them here.

Case 1: You have one host (host A) on the internet and it dies, and you have another server somewhere (host B) that's a mirror but with a different IP. When host A dies you update DNS so clients can still connect, but now they connect to host B. In that case the client will not connect to the new IP until their DNS resolver gets the new IP. This was "failover" back in the day. That is dependent on the DNS TTL (and the resolver, because many resolvers and caches ignore the TTL and use their own).

In this case a high TTL is bad, because the user won't be able to connect to your site for TTL seconds + some other amount of time. This is how everyone learned it worked, because this is the way it worked when the inter webs were new.

Case 2: Instead of one DNS record with one host you have a DNS record with both hosts. The clients will theoretically choose one host or the other (round robin). In reality it's unclear if they actually do that. Anecdotal evidence shows that it worked until it didn't, usually during a demo to the CEO. But even if it did, that means 50% of your requests will hit an X-second timeout as the clients try to connect to a dead host. That's bad, which is why nobody in their right mind did it. And some clients always picked the first host, because that's how DNS clients are sometimes.

Putting a load balancer in front of your hosts solves this. Do load balancers die? Yeah, they do. So you need two load balancers...which brings you back to case 1.

These are the basic scenarios that a low DNS TTL fixes. There are other, more complicated solutions, but they're really specialized and require more control of the network infrastructure...which most people don't have.

This isn't an "urban legend" as the author states. These are hard-won lessons from the early days of the internet. You can also not have high availability, which is totally fine.


I'm assuming OP means cloud-based load balancers (listening on public IPs). Some providers scale load balancers pretty often depending on traffic, which can result in a set of new IPs.


Being specific: AWS load balancers use a 60 second DNS TTL. I think the burden of proof is on TFA to explain why AWS is following an "urban legend" (to use TFA's words). I'm not convinced by what is written here. This seems like a reasonable use case by AWS.


Yes. Statistically the most likely time to change a record is shortly after previously changing it. So it is a good idea to use a low TTL when you change it, then after a stability period raise the TTL as you are less likely to change it in the future.

